WorldWideScience

Sample records for on-line viterbi algorithm

  1. A quantum algorithm for Viterbi decoding of classical convolutional codes

    OpenAIRE

    Grice, Jon R.; Meyer, David A.

    2014-01-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper the proposed algorithm is applied to decoding classical convolutional codes with, for instance, large constraint length $Q$ and short decode frames $N$. Other applications of the classical Viterbi algorithm where $Q$ is large (e.g. speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butter...

  2. The Viterbi Algorithm expressed in Constraint Handling Rules

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2010-01-01

    The Viterbi algorithm is a classical example of a dynamic programming algorithm, in which pruning reduces the search space drastically, so that an otherwise exponential time complexity is reduced to linearity. The central steps of the algorithm, expansion and pruning, can be expressed in a concis...
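
    As a minimal illustration of the two central steps named above, the sketch below runs expansion and pruning for a toy two-state HMM in Python; the model numbers and observations are invented for the example. At each observation, every surviving path is expanded along all transitions, and all but the best path into each state is pruned, which is what reduces the exponential search to linear time.

      states = ("A", "B")
      start = {"A": 0.6, "B": 0.4}
      trans = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
      emit = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
      obs = ["x", "y", "y"]

      # expansion: extend every surviving path to each state;
      # pruning: keep only the most probable path into that state.
      paths = {s: (start[s] * emit[s][obs[0]], [s]) for s in states}
      for o in obs[1:]:
          new = {}
          for t in states:
              p, path = max((paths[s][0] * trans[s][t], paths[s][1]) for s in states)
              new[t] = (p * emit[t][o], path + [t])
          paths = new
      prob, best = max(paths.values())
      print(best, prob)   # ['A', 'B', 'B'] with probability ~0.0622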

  3. A quantum algorithm for Viterbi decoding of classical convolutional codes

    Science.gov (United States)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes with, for instance, large constraint length $Q$ and short decode frames $N$. Other applications of the classical Viterbi algorithm where $Q$ is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (number of possible transitions from any given state in the hidden Markov model), which is in general much less than the number of states. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.

  4. High Speed Frame Synchronization and Viterbi Decoding

    DEFF Research Database (Denmark)

    Paaske, Erik; Justesen, Jørn; Larsen, Knud J.

    1996-01-01

    The purpose of Phase 1 of the study is to describe the system structure and algorithms in sufficient detail to allow drawing the high level architecture of units containing frame synchronization and Viterbi decoding. The systems we consider are high data rate space communication systems. Also...... components. Node synchronization performed within a Viterbi decoder is discussed, and algorithms for frame synchronization are described and analyzed. We present a list of system configurations that we find potentially useful. Further, the high level architecture of units that contain frame synchronization...... and various other functions needed in a complete system is presented. Two such units are described, one for placement before the Viterbi decoder and another for placement after the decoder. The high level architectures of three possible implementations of Viterbi decoders are described: The first...

  5. High Speed Frame Synchronization and Viterbi Decoding

    DEFF Research Database (Denmark)

    Paaske, Erik; Justesen, Jørn; Larsen, Knud J.

    1998-01-01

    The study has been divided into two phases. The purpose of Phase 1 of the study was to describe the system structure and algorithms in sufficient detail to allow drawing the high level architecture of units containing frame synchronization and Viterbi decoding. After selection of which specific...... potentially useful. Algorithms for frame synchronization are described and analyzed. Further, the high level architecture of units that contain frame synchronization and various other functions needed in a complete system is presented. Two such units are described, one for placement before the Viterbi decoder...... towards a realization in an FPGA. Node synchronization performed within a Viterbi decoder is discussed, and the high level architectures of three possible implementations of Viterbi decoders are described: The first implementation uses a number of commercially available decoders while the two others......

  6. Convolutional Encoder and Viterbi Decoder Using SOPC For Variable Constraint Length

    DEFF Research Database (Denmark)

    Kulkarni, Anuradha; Dnyaneshwar, Mantri; Prasad, Neeli R.

    2013-01-01

    The convolutional encoder and Viterbi decoder are basic and important blocks in any Code Division Multiple Access (CDMA) system. They are widely used in communication systems due to their error-correcting capability, but the performance degrades with variable constraint length. In this context to have...... detailed analysis, this paper deals with the implementation of a convolutional encoder and Viterbi decoder using a system on a programmable chip (SOPC). It uses variable constraint lengths of 7, 8 and 9 bits for 1/2 and 1/3 code rates. By analyzing the Viterbi algorithm it is seen that our algorithm has a better...

  7. VLSI Architecture for Configurable and Low-Complexity Design of Hard-Decision Viterbi Decoding Algorithm

    Directory of Open Access Journals (Sweden)

    Rachmad Vidya Wicaksana Putra

    2016-06-01

    Convolutional encoding and data decoding are fundamental processes in convolutional error correction. One of the most popular error correction methods in decoding is the Viterbi algorithm, which is extensively implemented in many digital communication applications. Its VLSI design challenges concern area, speed, power, complexity and configurability. In this research, we specifically propose a VLSI architecture for a configurable and low-complexity design of a hard-decision Viterbi decoding algorithm. The configurable and low-complexity design is achieved by designing a generic VLSI architecture, optimizing each processing element (PE) at the logical operation level and designing a conditional adapter. The proposed design can be configured for any predefined number of trace-backs, only by changing the trace-back parameter value. Its computational process needs a latency of only N + 2 clock cycles, where N is the number of trace-backs. Its configurability has been proven for N = 8, N = 16, N = 32 and N = 64. Furthermore, the proposed design was synthesized and evaluated on Xilinx and Altera FPGA target boards for area consumption and speed performance.
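
    For reference, the following is a minimal software model of the hard-decision Viterbi decoding that such an architecture implements in hardware. The rate-1/2, K = 3 code with generators 7 and 5 (octal) is an assumption of the sketch, and the windowed trace-back of depth N is replaced by a single trace-back over the whole block for brevity.

      G = (0b111, 0b101)   # assumed generator polynomials (7, 5 octal), K = 3
      N_STATES = 4         # 2^(K-1) encoder states

      def step(state, bit):
          """Next state and the two output bits for one input bit."""
          reg = (bit << 2) | state
          return reg >> 1, [bin(reg & g).count("1") & 1 for g in G]

      def encode(bits):
          state, out = 0, []
          for b in bits:
              state, sym = step(state, b)
              out += sym
          return out

      def viterbi_decode(rx):
          INF = float("inf")
          metric = [0.0] + [INF] * (N_STATES - 1)   # encoder starts in state 0
          history = []
          for t in range(0, len(rx), 2):
              new, back = [INF] * N_STATES, [None] * N_STATES
              for s in range(N_STATES):
                  if metric[s] == INF:
                      continue
                  for b in (0, 1):
                      ns, sym = step(s, b)
                      m = metric[s] + (sym[0] != rx[t]) + (sym[1] != rx[t + 1])
                      if m < new[ns]:               # add-compare-select
                          new[ns], back[ns] = m, (s, b)
              metric = new
              history.append(back)
          s, bits = 0, []   # tail bits force the encoder back to state 0
          for back in reversed(history):
              s, b = back[s]
              bits.append(b)
          return bits[::-1]

      msg = [1, 0, 1, 1, 0, 0, 1]
      rx = encode(msg + [0, 0])   # two flushing tail bits
      rx[3] ^= 1                  # one channel bit error
      assert viterbi_decode(rx)[:len(msg)] == msg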

  8. Implementation of a Tour Guide Robot System Using RFID Technology and Viterbi Algorithm-Based HMM for Speech Recognition

    Directory of Open Access Journals (Sweden)

    Neng-Sheng Pai

    2014-01-01

    This paper applied speech recognition and RFID technologies to develop an omni-directional mobile robot into a robot with voice control and guide introduction functions. For speech recognition, the speech signals were captured by short-time processing. The speaker first recorded isolated words for the robot to create a speech database of specific speakers. After pre-processing of this speech database, the feature parameters of cepstrum and delta-cepstrum were obtained using linear predictive coding (LPC). Then, a Hidden Markov Model (HMM) was used for model training on the speech database, and the Viterbi algorithm was used to find an optimal state sequence as the reference sample for speech recognition. The trained reference model was put into the industrial computer on the robot platform, and the user entered the isolated words to be tested. After processing the test words in the same way and comparing them with the stored reference models, the path with the maximum total probability among the various models, found using the Viterbi algorithm, was taken as the recognition result. Finally, the speech recognition and RFID systems were implemented on the omni-directional mobile robot and tested in an actual environment to prove their feasibility and stability.
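
    The recognition rule described above can be sketched compactly: score the test observation sequence against each word's HMM with the Viterbi algorithm and pick the model with the highest best-path probability. The two-word vocabulary, the two-state models and the quantised observations below are invented for illustration; the paper uses LPC cepstral features.

      import math

      def ln(p):   # logarithm that tolerates zero probabilities
          return math.log(p) if p > 0 else float("-inf")

      def viterbi_logscore(obs, start, trans, emit):
          n = len(start)
          v = [ln(start[s]) + ln(emit[s][obs[0]]) for s in range(n)]
          for o in obs[1:]:
              v = [max(v[s] + ln(trans[s][t]) for s in range(n)) + ln(emit[t][o])
                   for t in range(n)]
          return max(v)

      models = {   # one left-to-right HMM per vocabulary word (invented numbers)
          "stop": dict(start=[1.0, 0.0], trans=[[0.7, 0.3], [0.0, 1.0]],
                       emit=[[0.9, 0.1], [0.2, 0.8]]),
          "go":   dict(start=[1.0, 0.0], trans=[[0.4, 0.6], [0.0, 1.0]],
                       emit=[[0.3, 0.7], [0.6, 0.4]]),
      }
      obs = [0, 0, 1, 1]   # a quantised feature sequence (toy)
      print(max(models, key=lambda w: viterbi_logscore(obs, **models[w])))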

  9. An area-efficient path memory structure for VLSI Implementation of high speed Viterbi decoders

    DEFF Research Database (Denmark)

    Paaske, Erik; Pedersen, Steen; Sparsø, Jens

    1991-01-01

    Path storage and selection methods for Viterbi decoders are investigated with special emphasis on VLSI implementations. Two well-known algorithms, the register exchange algorithm (REA) and the trace-back algorithm (TBA), are considered. The REA requires the smallest number of storage elements...

  10. A Fully Parallel VLSI-implementation of the Viterbi Decoding Algorithm

    DEFF Research Database (Denmark)

    Sparsø, Jens; Jørgensen, Henrik Nordtorp; Paaske, Erik

    1989-01-01

    In this paper we describe the implementation of a K = 7, R = 1/2 single-chip Viterbi decoder intended to operate at 10-20 Mbit/sec. We propose a general, regular and area efficient floor-plan that is also suitable for implementation of decoders for codes with different generator polynomials...

  11. Minimum decoding trellis length and truncation depth of wrap-around Viterbi algorithm for TBCC in mobile WiMAX

    Directory of Open Access Journals (Sweden)

    Liu Yu-Sun

    2011-01-01

    The performance of the wrap-around Viterbi decoding algorithm with finite truncation depth and fixed decoding trellis length is investigated for tail-biting convolutional codes in the mobile WiMAX standard. Upper bounds on the error probabilities induced by finite truncation depth and the uncertainty of the initial state are derived for the AWGN channel. The truncation depth and the decoding trellis length that yield negligible performance loss are obtained for all transmission rates over the Rayleigh channel using computer simulations. The results show that the circular decoding algorithm with an appropriately chosen truncation depth and a decoding trellis just a fraction longer than the original received code words can achieve almost the same performance as the optimal maximum likelihood decoding algorithm in mobile WiMAX. A rule of thumb for the values of the truncation depth and the trellis tail length is also proposed.
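
    The wrap-around idea itself can be sketched in a few lines: because the tail-biting start state is unknown, the decoder begins with equal metrics for all states and, after each pass over the circular trellis, feeds the final state metrics back in as initial metrics. The branch metrics below are random stand-ins, and trace-back, the truncation-depth window and the tail-biting consistency check are all omitted; this shows only the metric recursion.

      import numpy as np

      rng = np.random.default_rng(0)
      S, T = 4, 12                      # toy trellis: 4 states, 12 steps
      branch = rng.random((T, S, S))    # branch[t, i, j]: metric of edge i -> j

      def viterbi_pass(init):
          m = init.copy()
          for t in range(T):
              m = np.min(m[:, None] + branch[t], axis=0)   # add-compare-select
          return m

      metrics = np.zeros(S)             # unknown start state: all equally likely
      for _ in range(3):                # a few wrap-around (circular) passes
          metrics = viterbi_pass(metrics - metrics.min())  # renormalise and wrap
      print(metrics)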

  12. Comparative investigation into Viterbi based and multiple hypothesis based track stitching

    CSIR Research Space (South Africa)

    Van der Merwe, LJ

    2016-12-01

    A sequential Viterbi data association algorithm is then used to solve the trellis and associate track fragments with each other. A Kalman filter is used to determine the possible associations as well as the probabilities of the associations between...

  13. GSM Channel Equalization Algorithm - Modern DSP Coprocessor Approach

    Directory of Open Access Journals (Sweden)

    M. Drutarovsky

    1999-12-01

    The paper presents the basic equations of an efficient GSM Viterbi equalizer algorithm based on approximation of GMSK modulation by a linear superposition of amplitude modulated pulses. This approximation allows the use of the Ungerboeck form of channel equalizer with significantly reduced arithmetic complexity. The proposed algorithm can be effectively implemented on the Viterbi and Filter coprocessors of the new Motorola DSP56305 digital signal processor. A short overview of the coprocessor features related to the proposed algorithm is included.

  14. A FAST LEXICALLY CONSTRAINED VITERBI ALGORITHM FOR ON-LINE HANDWRITING RECOGNITION

    NARCIS (Netherlands)

    Lifchitz, A.; Maire, F.

    2004-01-01

    Most on-line cursive handwriting recognition systems use a lexical constraint to help improve the recognition performance. Traditionally, the vocabulary lexicon is stored in a trie (an automaton whose underlying graph is a tree). In this paper, we propose a solution based on a more compact data

  15. Un calcul de Viterbi pour un Modèle de Markov Caché Contraint [A Viterbi Computation for a Constrained Hidden Markov Model]

    DEFF Research Database (Denmark)

    Petit, Matthieu; Christiansen, Henning

    2009-01-01

    A hidden Markov model (HMM) is a statistical model in which the system being modeled is assumed to be a Markov process with hidden states. This model has been widely used in speech recognition and biological sequence analysis. The Viterbi algorithm has been proposed to compute the most probable value....... Several constraint techniques are used to reduce the search for the most probable values of the hidden states of a constrained HMM. An implementation based on PRISM, a logic programming language for statistical modeling, is presented.

  16. An algorithm for on-line price discrimination

    NARCIS (Netherlands)

    D.D.B. van Bragt; D.J.A. Somefun (Koye); E. Kutschinski; J.A. La Poutré (Han)

    2002-01-01

    The combination of on-line dynamic pricing with price discrimination can be very beneficial for firms operating on the Internet. We therefore develop an on-line dynamic pricing algorithm that can adjust the price schedule for a good or service on behalf of a firm. This algorithm (a

  17. A novel line segment detection algorithm based on graph search

    Science.gov (United States)

    Zhao, Hong-dan; Liu, Guo-ying; Song, Xu

    2018-02-01

    To overcome the problem of extracting line segments from an image, a method of line segment detection based on a graph search algorithm is proposed. After obtaining the edge detection result of the image, candidate straight line segments are obtained in four directions. For the candidate straight line segments, their adjacency relationships are depicted by a graph model, based on which a depth-first search is employed to determine how many adjacent line segments need to be merged. Finally, the least squares method is used to fit the detected straight lines. Comparative experimental results verify that the proposed algorithm achieves better results than the line segment detector (LSD).
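
    The merge step can be sketched as follows: candidate segments are nodes, adjacency is an edge, a depth-first search groups segments that belong to the same line, and each group is fitted by least squares. The adjacency test (endpoint distance below a threshold) and the toy segments are assumptions of the sketch.

      import numpy as np

      segs = [((0, 0), (1, 1)), ((1.1, 1.1), (2, 2)), ((5, 0), (6, 0))]

      def adjacent(a, b, tol=0.3):
          # hypothetical adjacency test: some pair of endpoints closer than tol
          return min(np.hypot(ax - bx, ay - by)
                     for ax, ay in a for bx, by in b) < tol

      n = len(segs)
      graph = {i: [j for j in range(n) if j != i and adjacent(segs[i], segs[j])]
               for i in range(n)}
      seen, groups = set(), []
      for i in range(n):                # depth-first search over the graph
          if i in seen:
              continue
          stack, comp = [i], []
          while stack:
              u = stack.pop()
              if u not in seen:
                  seen.add(u)
                  comp.append(u)
                  stack += graph[u]
          groups.append(comp)
      for comp in groups:               # least squares fit of each merged group
          pts = np.array([p for k in comp for p in segs[k]], dtype=float)
          slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)
          print(comp, slope, intercept)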

  18. FPGA Realization of Memory 10 Viterbi Decoder

    DEFF Research Database (Denmark)

    Paaske, Erik; Bach, Thomas Bo; Andersen, Jakob Dahl

    1997-01-01

    sequence mode when feedback from the Reed-Solomon decoder is available. The Viterbi decoder is realized using two Altera FLEX 10K50 FPGA's. The overall operating speed is 30 kbit/s, and since up to three iterations are performed for each frame and only one decoder is used, the operating speed...

  19. An area-efficient topology for VLSI implementation of Viterbi decoders and other shuffle-exchange type structures

    DEFF Research Database (Denmark)

    Sparsø, Jens; Jørgensen, Henrik Nordtorp; Paaske, Erik

    1991-01-01

    A topology for single-chip implementation of computing structures based on shuffle-exchange (SE)-type interconnection networks is presented. The topology is suited for structures with a small number of processing elements (i.e. 32-128) whose area cannot be neglected compared to the area required....... The topology has been used in a VLSI implementation of the add-compare-select (ACS) module of a fully parallel K=7, R=1/2 Viterbi decoder. Both the floor-planning issues and some of the important algorithm and circuit-level aspects of this design are discussed. The chip has been designed and fabricated in a 2....... The interconnection network occupies 32% of the area...

  20. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    Science.gov (United States)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well-known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research on trellis structure for block codes, by contrast, was long inactive. There are two major reasons for this inactive period of research. First, most coding theorists at that time believed that block codes did not have a simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and

  1. A Novel Assembly Line Balancing Method Based on PSO Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaomei Hu

    2014-01-01

    The assembly line is widely used in manufacturing systems. The assembly line balancing problem is a crucial question during the design and management of assembly lines, since it directly affects the productivity of the whole manufacturing system. A model of the assembly line balancing problem is put forward and a general optimization method is proposed. The key data on the assembly line balancing problem are confirmed, and the precedence relations diagram is described. A double-objective optimization model based on takt time and smoothness index is built, and a balance optimization scheme based on the PSO algorithm is proposed. Through simulation experiments on examples, the feasibility and validity of the assembly line balancing method based on the PSO algorithm are proved.
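
    The optimization engine referred to above can be illustrated with a generic particle swarm loop; the scalar objective below is a stand-in, whereas the paper's double objective encodes takt time and smoothness index of a task assignment.

      import numpy as np

      rng = np.random.default_rng(1)

      def objective(x):                 # stand-in for the balancing objective
          return np.sum(x**2, axis=1)

      n, dim, w, c1, c2 = 20, 5, 0.7, 1.5, 1.5
      x = rng.uniform(-5, 5, (n, dim))
      v = np.zeros((n, dim))
      pbest, pval = x.copy(), objective(x)
      g = pbest[pval.argmin()]          # global best particle
      for _ in range(100):
          r1, r2 = rng.random((2, n, dim))
          v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)  # inertia + cognitive + social
          x = x + v
          val = objective(x)
          better = val < pval
          pbest[better], pval[better] = x[better], val[better]
          g = pbest[pval.argmin()]
      print(g)                          # approaches the optimum (the origin here)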

  2. A Novel Assembly Line Scheduling Algorithm Based on CE-PSO

    Directory of Open Access Journals (Sweden)

    Xiaomei Hu

    2015-01-01

    With the widespread application of assembly lines in enterprises, assembly line scheduling is an important problem in production, since it directly affects the productivity of the whole manufacturing system. The mathematical model of the assembly line scheduling problem is put forward and key data are confirmed. A double-objective optimization model based on equipment utilization and delivery time loss is built, and an optimization solution strategy is described. Based on this solution strategy, an assembly line scheduling algorithm based on CE-PSO is proposed to overcome the shortcomings of standard PSO. Through simulation experiments on two examples, the validity of the assembly line scheduling algorithm based on CE-PSO is proved.

  3. On-Line Algorithms and Reverse Mathematics

    Science.gov (United States)

    Harris, Seth

    In this thesis, we classify the reverse-mathematical strength of sequential problems. If we are given a problem P of the form ∀X(α(X) → ∃Z β(X,Z)), then the corresponding sequential problem, SeqP, asserts the existence of infinitely many solutions to P: ∀X(∀n α(X_n) → ∃Z ∀n β(X_n, Z_n)). P is typically provable in RCA0 if all objects involved are finite. SeqP, however, is only guaranteed to be provable in ACA0. In this thesis we exactly characterize which sequential problems are equivalent to RCA0, WKL0, or ACA0. We say that a problem P is solvable by an on-line algorithm if P can be solved according to a two-player game, played by Alice and Bob, in which Bob has a winning strategy. Bob wins the game if Alice's sequence of plays ⟨a_0, ..., a_k⟩ and Bob's sequence of responses ⟨b_0, ..., b_k⟩ constitute a solution to P. Formally, an on-line algorithm A is a function that inputs an admissible sequence of plays ⟨a_0, b_0, ..., a_j⟩ and outputs a new play b_j for Bob. (This differs from the typical definition of "algorithm", though quite often a concrete set of instructions can be easily deduced from A.) We show that SeqP is provable in RCA0 precisely when P is solvable by an on-line algorithm. Schmerl proved this result specifically for the graph coloring problem; we generalize Schmerl's result to any problem that is on-line solvable. To prove our separation, we introduce a principle called Predict_k(r) that is equivalent to WKL0 for standard k, r. We show that WKL0 is sufficient to prove SeqP precisely when P has a solvable closed kernel. This means that a solution exists, and each initial segment of this solution is a solution to the corresponding initial segment of the problem. (Certain bounding conditions are necessary as well.) If no such solution exists, then SeqP is equivalent to ACA0 over RCA0 + IΣ^0_2; RCA0 alone suffices if only sequences of standard length are considered. We use different techniques from Schmerl to prove

  4. Space communication system for compressed data with a concatenated Reed-Solomon-Viterbi coding channel

    Science.gov (United States)

    Rice, R. F.; Hilbert, E. E. (Inventor)

    1976-01-01

    A space communication system incorporating a concatenated Reed-Solomon-Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed-Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed-Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.

  5. On-line reconstruction algorithms for the CBM and ALICE experiments

    International Nuclear Information System (INIS)

    Gorbunov, Sergey

    2013-01-01

    This thesis presents various algorithms which have been developed for on-line event reconstruction in the CBM experiment at GSI, Darmstadt, and the ALICE experiment at CERN, Geneva. Despite the fact that the experiments are different - CBM is a fixed-target experiment with forward geometry, while ALICE has a typical collider geometry - they share common aspects where reconstruction is concerned. The thesis describes: - general modifications to the Kalman filter method, which allow one to accelerate, improve, and simplify existing fit algorithms; - algorithms developed for track fits in the CBM and ALICE experiments, including a new method for track extrapolation in a non-homogeneous magnetic field; - algorithms developed for primary and secondary vertex fits in both experiments, in particular a new method for the reconstruction of decayed particles; - a parallel algorithm developed for on-line tracking in the CBM experiment; - a parallel algorithm developed for on-line tracking in the High Level Trigger of the ALICE experiment; - the realisation of the track finders on modern hardware, such as SIMD CPU registers and GPU accelerators. All the presented methods have been developed by or with the direct participation of the author.

  6. Verification test for on-line diagnosis algorithm based on noise analysis

    International Nuclear Information System (INIS)

    Tamaoki, T.; Naito, N.; Tsunoda, T.; Sato, M.; Kameda, A.

    1980-01-01

    An on-line diagnosis algorithm was developed and its verification test performed using a minicomputer. The algorithm identifies the plant state by analyzing various system noise patterns, such as power spectral densities, coherence functions, etc., in three procedural steps. Each obtained noise pattern is examined using the distances from its reference patterns prepared for various plant states. The plant state is then identified by synthesizing each result with an evaluation weight, which is determined automatically from the reference noise patterns prior to on-line diagnosis. The test was performed with 50 MW(th) steam generator noise data recorded under various controller parameter values. The algorithm performance was evaluated based on a newly devised index. The results obtained with one kind of weight showed the algorithm's efficiency under proper selection of noise patterns; results for another kind of weight showed the robustness of the algorithm to this selection. (orig.)

  7. Line-breaking algorithm enhancement in inverse typesetting paradigm

    Directory of Open Access Journals (Sweden)

    Jan Přichystal

    2007-01-01

    High-quality text preparation using desktop publishing systems usually relies on a line-breaking algorithm that cannot make provision for line heights and typeset a paragraph accurately when a change of composition width, a page break, a line index or another object appears. This article deals with enhancing the line-breaking algorithm based on the optimum-fit algorithm. The algorithm is enhanced with a calculation of the immediate typesetting width and thus solves the problem of forced changes. This enhancement expands the possibilities of high-quality typesetting to cases that have not yet been covered by present typesetting systems.

  8. Lining seam elimination algorithm and surface crack detection in concrete tunnel lining

    Science.gov (United States)

    Qu, Zhong; Bai, Ling; An, Shi-Quan; Ju, Fang-Rong; Liu, Ling

    2016-11-01

    Due to the particularity of the surface of concrete tunnel lining and the diversity of detection environments, such as uneven illumination, smudges, localized rock falls, water leakage, and the inherent seams of the lining structure, existing crack detection algorithms cannot detect real cracks accurately. This paper proposes an algorithm that combines lining seam elimination with an improved percolation detection algorithm based on grid cell analysis for surface crack detection in concrete tunnel lining. First, the characteristics of pixels within the overlapping grids are checked to remove background noise and generate the percolation seed map (PSM). Second, cracks are detected based on the PSM by the accelerated percolation algorithm, so that the fracture unit areas can be scanned and connected. Finally, the real surface cracks in the concrete tunnel lining are obtained by removing the lining seam and performing percolation denoising. Experimental results show that the proposed algorithm can accurately, quickly, and effectively detect real surface cracks. Furthermore, it fills a gap in existing concrete tunnel lining surface crack detection by removing the lining seam.

  9. Implementations of PI-line based FBP and BPF algorithms on GPGPU

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Le [Tsinghua Univ., Beijing (China). Dept. of Engineering Physics; Xing, Yuxiang [Tsinghua Univ., Beijing (China). Dept. of Engineering Physics; Ministry of Education, Beijing (China). Key Lab. of Particle and Radiation Imaging

    2011-07-01

    Exact reconstruction is under the spotlight in cone beam CT. Katsevich put forward the first exact inversion formula for helical cone beam CT, which is of FBP type. Pan Xiaochuan's group proposed another PI-line based exact reconstruction algorithm, of BPF type. These two exact reconstruction algorithms and their derivative forms have been widely studied. In this paper, we present a different way of selecting PI-line segments appropriate for both Katsevich's FBP and Pan Xiaochuan's BPF algorithms. As 3D reconstruction involves massive computations and takes a long time, efforts have been made to speed up the algorithms with the help of multi-core CPUs and GPGPUs (General Purpose Graphics Processing Units). In this paper, we also present implementations of these two algorithms on a GPGPU using an innovative way of selecting PI-line segments. Acceleration techniques and implementations are addressed in detail. The methods are tested on the Shepp-Logan phantom. Compared with our CPU implementations, the accelerated algorithms on the GPGPU are tens to hundreds of times faster. (orig.)

  10. Design and Implementation of Viterbi Decoder Using VHDL

    Science.gov (United States)

    Thakur, Akash; Chattopadhyay, Manju K.

    2018-03-01

    A digital design of a Viterbi decoder for a rate-1/2 convolutional encoder with constraint length k = 3 is presented in this paper. The design is coded in VHDL, simulated and synthesized using XILINX ISE 14.7. Synthesis results show that the maximum operating frequency of the design is 100.725 MHz. The memory requirement is lower compared to the conventional method.

  11. An ultrafast line-by-line algorithm for calculating spectral transmittance and radiance

    International Nuclear Information System (INIS)

    Tan, X.

    2013-01-01

    An ultrafast line-by-line algorithm for calculating the spectral transmittance and radiance of gases is presented. The algorithm is based on fast convolution of the Voigt line profile using the Fourier transform and a binning technique. The algorithm breaks a radiative transfer calculation into two steps: a one-time pre-computation step, in which a set of pressure-independent coefficients is computed using the spectral line information; and a normal calculation step, in which the Fourier transform coefficients of the optical depth are calculated using the line-of-sight information and the coefficients pre-computed in the first step; the optical depth is then calculated using an inverse Fourier transform, and the spectral transmittance and radiance are calculated. The algorithm is significantly faster than line-by-line algorithms that do not employ special speedup techniques, by a factor of 10^3-10^6. A case study of the 2.7 μm band of H2O vapor is presented. -- Highlights: • An ultrafast line-by-line model based on FFT and a binning technique is presented. • Computationally expensive calculations are factored out into a pre-computation step. • It is 10^3-10^8 times faster than LBL algorithms that do not employ speedup techniques. • Good agreement with experimental data for the 2.7 μm band of H2O.
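
    The core speed trick can be sketched briefly: line strengths are binned onto the spectral grid as "sticks" and then convolved with a single line-shape kernel via FFT, instead of accumulating one profile per line. A Gaussian kernel stands in for the Voigt profile, and all sizes below are toy values.

      import numpy as np

      ngrid, nlines, width = 4096, 500, 20
      rng = np.random.default_rng(2)
      sticks = np.zeros(ngrid)                    # binned line strengths
      np.add.at(sticks, rng.integers(0, ngrid, nlines), rng.random(nlines))

      x = np.arange(-width, width + 1)
      profile = np.exp(-0.5 * (x / 4.0) ** 2)     # Gaussian stand-in for Voigt
      profile /= profile.sum()

      kernel = np.zeros(ngrid)
      kernel[:2 * width + 1] = profile            # kernel peak sits at index `width`
      tau = np.fft.irfft(np.fft.rfft(sticks) * np.fft.rfft(kernel), n=ngrid)
      tau = np.roll(tau, -width)                  # undo the kernel offset
      transmittance = np.exp(-tau)
      print(transmittance.min())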

  12. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    Science.gov (United States)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via a micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are computed from the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. The approximation networks also compute three-dimensional vision from the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves on the accuracy of traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.

  13. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    Science.gov (United States)

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.

  14. Algorithms for a parallel implementation of Hidden Markov Models with a small state space

    DEFF Research Database (Denmark)

    Nielsen, Jesper; Sand, Andreas

    2011-01-01

    Two of the most important algorithms for Hidden Markov Models are the forward and the Viterbi algorithms. We show how formulating these using linear algebra naturally lends itself to parallelization. Although the obtained algorithms are slow for Hidden Markov Models with large state spaces...
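
    The linear-algebra formulation mentioned above replaces ordinary matrix arithmetic with the max-plus semiring: one Viterbi step becomes a matrix-vector "product" with max in place of the sum and + in place of the product, which vectorises and parallelises like ordinary linear algebra. A toy three-state model with invented parameters:

      import numpy as np

      logA = np.log(np.array([[0.8, 0.1, 0.1],    # transition probabilities
                              [0.2, 0.6, 0.2],
                              [0.1, 0.3, 0.6]]))
      logB = np.log(np.array([[0.9, 0.1],         # P(symbol | state)
                              [0.5, 0.5],
                              [0.1, 0.9]]))
      obs = [0, 0, 1, 1, 1]

      v = np.log(np.full(3, 1/3)) + logB[:, obs[0]]
      for o in obs[1:]:
          # max-plus product: v_j = max_i (v_i + logA_ij) + logB_j(o)
          v = np.max(v[:, None] + logA, axis=0) + logB[:, o]
      print(np.exp(v.max()))   # probability of the single best state path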

  15. An optimal algorithm for preemptive on-line scheduling

    NARCIS (Netherlands)

    Chen, B.; Vliet, van A.; Woeginger, G.J.

    1995-01-01

    We investigate the problem of on-line scheduling jobs on m identical parallel machines where preemption is allowed. The goal is to minimize the makespan. We derive an approximation algorithm with worst-case guarantee m^m/(m^m - (m-1)^m) for every m ≥ 2, which increasingly tends to e/(e-1) ≈ 1.58 as m
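
    The reconstructed guarantee can be checked numerically; it increases from 4/3 at m = 2 towards e/(e-1):

      from math import e

      for m in (2, 3, 10, 100):
          print(m, m**m / (m**m - (m - 1)**m))
      print("limit:", e / (e - 1))   # about 1.582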

  16. Impact of different disassembly line balancing algorithms on the performance of dynamic kanban system for disassembly line

    Science.gov (United States)

    Kizilkaya, Elif A.; Gupta, Surendra M.

    2005-11-01

    In this paper, we compare the impact of different disassembly line balancing (DLB) algorithms on the performance of our recently introduced Dynamic Kanban System for Disassembly Line (DKSDL), which accommodates the vagaries of the uncertainties associated with disassembly and remanufacturing processing. We consider a case study to illustrate the impact of various DLB algorithms on the DKSDL. The approach to the solution, scenario settings, results and a discussion of the results are included.

  17. An on-line modified least-mean-square algorithm for training neurofuzzy controllers.

    Science.gov (United States)

    Tan, Woei Wan

    2007-04-01

    The problem hindering the use of data-driven modelling methods for training controllers on-line is the lack of control over the amount by which the plant is excited. As the operating schedule determines the information available on-line, the knowledge of the process may degrade if the setpoint remains constant for an extended period. This paper proposes an identification algorithm that alleviates "learning interference" by incorporating fuzzy theory into the normalized least-mean-square update rule. The ability of the proposed methodology to achieve faster learning is examined by employing the algorithm to train a neurofuzzy feedforward controller for controlling a liquid level process. Since the proposed identification strategy has similarities with the normalized least-mean-square update rule and the recursive least-square estimator, the on-line learning rates of these algorithms are also compared.
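
    For context, the standard normalised least-mean-square (NLMS) update that the paper builds on looks as follows; the toy plant, step size and dimensions are assumptions of the sketch, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(3)
      true_w = np.array([0.5, -0.3, 0.8])   # unknown plant (toy)
      w = np.zeros(3)                       # adaptive weights
      mu, eps = 0.5, 1e-6
      for _ in range(200):
          x = rng.standard_normal(3)                       # regressor
          d = true_w @ x + 0.01 * rng.standard_normal()    # desired output
          err = d - w @ x
          w += mu * err * x / (eps + x @ x)   # step normalised by input power
      print(w)   # converges towards true_w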

  18. Phase Grouping Line Extraction Algorithm Using Overlapped Partition

    Directory of Open Access Journals (Sweden)

    WANG Jingxue

    2015-07-01

    Aiming at solving the problem of fractures at discontinuities and the challenge of line fitting in each partition, an innovative line extraction algorithm based on phase grouping using overlapped partitions is proposed. The proposed algorithm adopts dual partition steps, which generate eight overlapped partitions. Between the two steps, the middle axis in the first step coincides with the border lines in the other step. Firstly, the connected edge points that share the same phase gradients are merged into line candidates and fitted into line segments. Then, to remedy broken lines at the border areas, the broken segments in the second partition step are refitted. The proposed algorithm is robust and does not need any parameter tuning. Experiments with various datasets have confirmed that the method is not only capable of handling linear features but is also powerful enough to handle curved features.

  19. Document localization algorithms based on feature points and straight lines

    Science.gov (United States)

    Skoryukina, Natalya; Shemiakina, Julia; Arlazarov, Vladimir L.; Faradjev, Igor

    2018-04-01

    An important part of a system for planar rectangular object analysis is localization: the estimation of the projective transform from the template image of an object to its photograph. The system also includes such subsystems as the selection and recognition of text fields, the usage of contexts, etc. In this paper three localization algorithms are described. All algorithms use feature points, and two of them also analyze near-horizontal and near-vertical lines on the photograph. The algorithms and their combinations are tested on a dataset of real document photographs. A method of localization quality estimation is also proposed that allows configuring the localization subsystem independently of the quality of the other subsystems.

  20. An Improved Seeding Algorithm of Magnetic Flux Lines Based on Data in 3D Space

    Directory of Open Access Journals (Sweden)

    Jia Zhong

    2015-05-01

    This paper proposes an approach to increase the accuracy and efficiency of seeding algorithms for magnetic flux lines in magnetic field visualization. To obtain accurate and reliable visualization results, the density of the magnetic flux lines should map the magnetic induction intensity, and seed points determine the density of the magnetic flux lines. However, the traditional seeding algorithm, which is a statistical algorithm based on data, produces errors when computing the magnetic flux through subdivision of the plane. To achieve higher accuracy, more subdivisions have to be made, which reduces efficiency. This paper analyzes the errors made when the traditional seeding algorithm is used and gives an improved algorithm. It then validates the accuracy and efficiency of the improved algorithm by comparing the results of the two algorithms with results from the equivalent magnetic flux algorithm.

  1. Parallel field line and stream line tracing algorithms for space physics applications

    Science.gov (United States)

    Toth, G.; de Zeeuw, D.; Monostori, G.

    2004-05-01

    Field line and stream line tracing is required in various space physics applications, such as the coupling of the global magnetosphere and inner magnetosphere models, the coupling of the solar energetic particle and heliosphere models, or the modeling of comets, where the multispecies chemical equations are solved along stream lines of a steady state solution obtained with a single-fluid MHD model. Tracing a vector field is an inherently serial process, which is difficult to parallelize. This is especially true when the data corresponding to the vector field is distributed over a large number of processors. We designed algorithms for the various applications, which scale well to a large number of processors. In the first algorithm the computational domain is divided into blocks. Each block is on a single processor. The algorithm follows the vector field inside the blocks, and calculates a mapping of the block surfaces. The blocks communicate the values at the coinciding surfaces, and the results are interpolated. Finally all block surfaces are defined and values inside the blocks are obtained. In the second algorithm all processors start integrating along the vector field inside the accessible volume. When the field line leaves the local subdomain, the position and other information is stored in a buffer. Periodically the processors exchange the buffers, and continue integration of the field lines until they reach a boundary. At that point the results are sent back to the originating processor. Efficiency is achieved by a careful phasing of computation and communication. In the third algorithm the results of a steady state simulation are stored on a hard drive. The vector field is contained in blocks. All processors read in all the grid and vector field data and the stream lines are integrated in parallel. If a stream line enters a block, which has already been integrated, the results can be interpolated. By a clever ordering of the blocks the execution speed can be

  2. A Novel Entropy-Based Decoding Algorithm for a Generalized High-Order Discrete Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Jason Chin-Tiong Chan

    2018-01-01

    The optimal state sequence of a generalized high-order hidden Markov model (HHMM) is tracked from a given observational sequence using the classical Viterbi algorithm, which is based on the maximum likelihood criterion. We introduce an entropy-based Viterbi algorithm for tracking the optimal state sequence of an HHMM. The entropy of a state sequence is a useful quantity, providing a measure of the uncertainty of an HHMM. There will be no uncertainty if there is only one possible optimal state sequence for the HHMM. This entropy-based decoding algorithm can be formulated in an extended or a reduction approach. We extend the entropy-based algorithm for computing the optimal state sequence from a first-order model to a generalized HHMM with a single observational sequence. This extended algorithm performs the computation exponentially with respect to the order of the HMM; its computational complexity is due to the growth of the model parameters. We then introduce an efficient entropy-based decoding algorithm that uses the reduction approach, namely the entropy-based order-transformation forward algorithm (EOTFA), to compute the optimal state sequence of any generalized HHMM. The EOTFA algorithm involves a transformation of a generalized high-order HMM into an equivalent first-order HMM, and an entropy-based decoding algorithm is developed based on the equivalent first-order HMM. This algorithm performs the computation based on the observational sequence and requires O(TÑ^2) calculations, where Ñ is the number of states in the equivalent first-order model and T is the length of the observational sequence.

  3. A Framework for the Comparative Assessment of Neuronal Spike Sorting Algorithms towards More Accurate Off-Line and On-Line Microelectrode Arrays Data Analysis.

    Science.gov (United States)

    Regalia, Giulia; Coelli, Stefania; Biffi, Emilia; Ferrigno, Giancarlo; Pedrocchi, Alessandra

    2016-01-01

    Neuronal spike sorting algorithms are designed to retrieve neuronal network activity on a single-cell level from extracellular multiunit recordings with Microelectrode Arrays (MEAs). In typical analysis of MEA data, one spike sorting algorithm is applied indiscriminately to all electrode signals. However, this approach neglects the dependency of an algorithm's performance on the properties of the neuronal signals at each channel, which calls for data-centric methods. Moreover, sorting is commonly performed off-line, which is time- and memory-consuming and prevents researchers from having an immediate glance at ongoing experiments. The aim of this work is to provide a versatile framework to support the evaluation and comparison of different spike classification algorithms suitable for both off-line and on-line analysis. We incorporated different spike sorting "building blocks" into a Matlab-based software, including 4 feature extraction methods, 3 feature clustering methods, and 1 template matching classifier. The framework was validated by applying different algorithms to simulated and real signals from neuronal cultures coupled to MEAs. Moreover, the system has been proven effective in running on-line analysis on a standard desktop computer, after the selection of the most suitable sorting methods. This work provides a useful and versatile instrument for a supported comparison of different options for spike sorting towards more accurate off-line and on-line MEA data analysis.

  4. The selection and implementation of hidden line algorithms

    International Nuclear Information System (INIS)

    Schneider, A.

    1983-06-01

    One of the most challenging problems in the field of computer graphics is the elimination of hidden lines in images of nontransparent bodies. In the real world the nontransparent material prevents light rays coming from hidden regions from reaching the observer. In the computer-based image formation process there is no automatic visibility regulation of this kind, so many lines are created which result in a poor quality of the spatial representation. Therefore a three-dimensional representation on the screen is only meaningful if the hidden lines are eliminated. For this purpose many algorithms have been developed in the past. A common feature of these codes is the large amount of computer time needed. In the first generation of algorithms, which are commonly used today, the bodies are modeled by plane polygons. More recently, however, algorithms have come into use that are able to treat curved surfaces without discretization into plane surfaces. In this paper the first group of algorithms is reviewed, and the most important codes are described. The experience obtained during the implementation of two algorithms is presented. (orig.)

  5. The Treeterbi and Parallel Treeterbi algorithms: efficient, optimal decoding for ordinary, generalized and pair HMMs

    DEFF Research Database (Denmark)

    Keibler, Evan; Arumugam, Manimozhiyan; Brent, Michael R

    2007-01-01

    MOTIVATION: Hidden Markov models (HMMs) and generalized HMMs have been successfully applied to many problems, but the standard Viterbi algorithm for computing the most probable interpretation of an input sequence (known as decoding) requires memory proportional to the length of the sequence, which can...... be prohibitive. Existing approaches to reducing memory usage either sacrifice optimality or trade increased running time for reduced memory. RESULTS: We developed two novel decoding algorithms, Treeterbi and Parallel Treeterbi, and implemented them in the TWINSCAN/N-SCAN gene-prediction system. The worst-case...... asymptotic space and time are the same as for standard Viterbi, but in practice, Treeterbi optimally decodes arbitrarily long sequences with generalized HMMs in bounded memory without increasing running time. Parallel Treeterbi uses the same ideas to split optimal decoding across processors, dividing latency......

  6. On a Hopping-Points SVD and Hough Transform-Based Line Detection Algorithm for Robot Localization and Mapping

    Directory of Open Access Journals (Sweden)

    Abhijeet Ravankar

    2016-05-01

    Line detection is an important problem in computer vision, graphics and autonomous robot navigation. Lines detected using a laser range sensor (LRS) mounted on a robot can be used as features to build a map of the environment, and later to localize the robot in the map, in a process known as Simultaneous Localization and Mapping (SLAM). We propose an efficient algorithm for line detection from LRS data using a novel hopping-points Singular Value Decomposition (SVD) and Hough transform-based algorithm, in which SVD is applied to intermittent LRS points to accelerate the algorithm. A reverse-hop mechanism ensures that the end points of the line segments are accurately extracted. Line segments extracted by the proposed algorithm are used to form a map and, subsequently, LRS data points are matched with the line segments to localize the robot. The proposed algorithm eliminates the drawbacks of point-based matching algorithms like the Iterative Closest Point (ICP) algorithm, whose performance degrades with an increasing number of points. We tested the proposed algorithm for mapping and localization in both simulated and real environments, and found it to detect lines accurately and build maps with good self-localization.
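
    The SVD building block applied to the (hopping) range points can be sketched in a few lines: the right singular vectors of the centred points give the line direction and normal, i.e. a total least squares fit. The points below are invented.

      import numpy as np

      pts = np.array([[0.0, 0.1], [1.0, 1.05], [2.0, 2.0], [3.0, 2.95]])
      centroid = pts.mean(axis=0)
      _, _, vt = np.linalg.svd(pts - centroid)
      direction = vt[0]   # along the fitted line (largest singular value)
      normal = vt[1]      # line normal; the line is normal . (p - centroid) = 0
      print(centroid, direction)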

  7. Ecodriver. D23.2: Simulation and analysis document for on-line vehicle algorithms

    NARCIS (Netherlands)

    Seewald, P.; Orfila, O.; Saintpierre, G.

    2014-01-01

    This deliverable reports on the simulations and analysis of the on-line vehicle algorithms as well as the ecoDriver Android application. The simulation and field test results give an impression of how the algorithms will perform in the real world trials in SP3. Thus, it is possible to apply

  8. Analytical Investigations on Carrier Phase Recovery in Dispersion-Unmanaged n-PSK Coherent Optical Communication Systems

    Directory of Open Access Journals (Sweden)

    Tianhua Xu

    2016-09-01

    Using coherent optical detection and digital signal processing, laser phase noise and equalization-enhanced phase noise can be effectively mitigated using feed-forward and feed-back carrier phase recovery approaches. In this paper, theoretical analyses of feed-back and feed-forward carrier phase recovery methods are carried out for long-haul high-speed n-level phase shift keying (n-PSK) optical fiber communication systems, involving a one-tap normalized least-mean-square (LMS) algorithm, a block-wise average algorithm, and a Viterbi-Viterbi algorithm. Analytical expressions for evaluating the estimated carrier phase and for predicting the bit-error-rate (BER) performance (such as the BER floors) are presented and discussed for n-PSK coherent optical transmission systems, considering both the laser phase noise and the equalization-enhanced phase noise. The results indicate that the Viterbi-Viterbi carrier phase recovery algorithm outperforms the one-tap normalized LMS and block-wise average algorithms for small phase noise variance (or effective phase noise variance), while the one-tap normalized LMS algorithm shows a better performance than the other two algorithms for large phase noise variance (or effective phase noise variance). In addition, the one-tap normalized LMS algorithm is more sensitive to the level of the modulation format.
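
    A minimal sketch of the block-wise Viterbi-Viterbi estimator analysed above, for QPSK (n = 4) with an invented block size, noise level and phase offset: raising each sample to the n-th power strips the PSK modulation, so the angle of the block sum, divided by n, estimates the common carrier phase (up to the usual 2π/n ambiguity).

      import numpy as np

      rng = np.random.default_rng(4)
      n, nsym, block = 4, 4096, 64                # QPSK, 64-symbol blocks (toy)
      syms = np.exp(1j * np.pi / 2 * rng.integers(0, 4, nsym))
      phase = 0.3                                 # unknown carrier phase offset
      noise = 0.05 * (rng.standard_normal(nsym) + 1j * rng.standard_normal(nsym))
      r = syms * np.exp(1j * phase) + noise

      est = []
      for k in range(0, nsym, block):
          blk = r[k:k + block] ** n               # n-th power strips the data
          est.append(np.angle(blk.sum()) / n)     # block-wise average estimate
      print(np.mean(est))                         # close to 0.3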

  9. Reliable Line Matching Algorithm for Stereo Images with Topological Relationship

    Directory of Open Access Journals (Sweden)

    WANG Jingxue

    2017-11-01

    Because of the lack of relationships between the matching line and adjacent lines in individual line matching, and the weak reliability of individual line descriptors on discontinuous texture, this paper presents a reliable line matching algorithm for stereo images that uses topological relationships. The algorithm first generates grouped line pairs from lines extracted from the reference image and the searching image, according to basic topological relationships such as the distance and angle between the lines. It then takes the grouped line pairs as matching primitives and matches them using, in order, an epipolar constraint, a homography constraint, a quadrant constraint and a gray correlation constraint of irregular triangles. Finally, it resolves the corresponding line pairs into pairs of corresponding individual lines, and obtains one-to-one matching results after post-processing of integrating, fitting, and checking. This paper adopts digital aerial images and close-range images with typical texture features for the parameter analysis and line matching, and the experimental results demonstrate that the proposed algorithm can obtain reliable line matching results.

  10. An Algorithm to Compress Line-transition Data for Radiative-transfer Calculations

    Science.gov (United States)

    Cubillos, Patricio E.

    2017-11-01

    Molecular line-transition lists are an essential ingredient for radiative-transfer calculations. With recent databases now surpassing the billion-line mark, handling them has become computationally prohibitive, due to both the required processing power and memory. Here I present a temperature-dependent algorithm to separate strong from weak line transitions, reformatting the large majority of the weaker lines into a cross-section data file, and retaining the detailed line-by-line information of the fewer strong lines. For any given molecule over the 0.3-30 μm range, this algorithm reduces the number of lines to a few million, enabling faster radiative-transfer computations without a significant loss of information. The final compression rate depends on how densely populated the spectrum is. I validate this algorithm by comparing Exomol’s HCN extinction-coefficient spectra between the complete (65 million line transitions) and compressed (7.7 million) line lists. Over the 0.6-33 μm range, the average difference between extinction-coefficient values is less than 1%. A Python/C implementation of this algorithm is open-source and available at https://github.com/pcubillos/repack. So far, this code handles the Exomol and HITRAN line-transition format.

  11. On-line Certification for All: The PINVOX Algorithm

    Directory of Open Access Journals (Sweden)

    E Canessa

    2012-09-01

    A prototype algorithm, PINVOX ("Personal Identification Number by Voice"), for on-line certification is introduced to guarantee that scholars have followed, i.e., listened to and watched, a complete recorded lecture, with the option of earning a certificate or diploma of completion after remotely attending courses. It is based on the injection of unique, randomly selected and pre-recorded integer numbers (or single letters or words) into the audio track of a video stream at places where silence is automatically detected. The certificate of completion or "virtual attendance" is generated on the fly after successful identification of the embedded PINVOX code by the student viewing the video.

  12. Solving radiative transfer with line overlaps using Gauss-Seidel algorithms

    Science.gov (United States)

    Daniel, F.; Cernicharo, J.

    2008-09-01

    Context: The improvement in observational facilities requires refining the modelling of the geometrical structures of astrophysical objects. Nevertheless, for complex problems such as line overlap in molecules showing hyperfine structure, a detailed analysis still requires a large amount of computing time, and thus misinterpretation cannot be dismissed, due to an undersampling of the whole space of parameters. Aims: We extend the discussion of the implementation of the Gauss-Seidel algorithm in spherical geometry and include the case of hyperfine line overlap. Methods: We first review the basics of the short characteristics method that is used to solve the radiative transfer equations. Details are given on the determination of the Lambda operator in spherical geometry. The Gauss-Seidel algorithm is then described and, by analogy to the plane-parallel case, we see how to introduce it in spherical geometry. Doing so requires some approximations in order to keep the algorithm competitive. Finally, line overlap effects are included. Results: The convergence speed of the algorithm is compared to that of the usual Jacobi iterative schemes. The gain in the number of iterations is typically a factor of 2 and 4 for the two implementations of the Gauss-Seidel algorithm. This is obtained despite the introduction of approximations in the algorithm. A comparison of results obtained with and without line overlap for N2H+, HCN, and HNC shows that the J=3-2 line intensities are significantly underestimated in models where line overlap is neglected.

  13. ALFA: an automated line fitting algorithm

    Science.gov (United States)

    Wesson, R.

    2016-03-01

    I present the automated line fitting algorithm, ALFA, a new code which can fit emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. In contrast to traditional emission line fitting methods which require the identification of spectral features suspected to be emission lines, ALFA instead uses a list of lines which are expected to be present to construct a synthetic spectrum. The parameters used to construct the synthetic spectrum are optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. I show that the results are in excellent agreement with those measured manually for a number of spectra. Where discrepancies exist, the manually measured fluxes are found to be less accurate than those returned by ALFA. Together with the code NEAT, ALFA provides a powerful way to rapidly extract physical information from observations, an increasingly vital function in the era of highly multiplexed spectroscopy. The two codes can deliver a reliable and comprehensive analysis of very large data sets in a few hours with little or no user interaction.

  14. Ecodriver. D23.1: Report on test scenarios for validation of on-line vehicle algorithms

    NARCIS (Netherlands)

    Seewald, P.; Ivens, T.W.T.; Spronkmans, S.

    2014-01-01

    This deliverable provides a description of test scenarios that will be used for validation of WP22’s on-line vehicle algorithms. These algorithms consist of the two modules VE³ (Vehicle Energy and Environment Estimator) and RSG (Reference Signal Generator) and will be tested using the

  15. An Asynchronous Low Power and High Performance VLSI Architecture for Viterbi Decoder Implemented with Quasi Delay Insensitive Templates

    Directory of Open Access Journals (Sweden)

    T. Kalavathi Devi

    2015-01-01

    Convolutional codes are comprehensively used as Forward Error Correction (FEC) codes in digital communication systems. For decoding convolutional codes at the receiver end, the Viterbi decoder is often the preferred choice, since it meets the demands of high speed and low power. At present, designing a competent system in Very Large Scale Integration (VLSI) technology requires these parameters to be finely tuned. The proposed asynchronous method focuses on reducing the power consumption of the Viterbi decoder for various constraint lengths using asynchronous modules. The asynchronous designs are based on commonly used Quasi Delay Insensitive (QDI) templates, namely Precharge Half Buffer (PCHB) and Weak Conditioned Half Buffer (WCHB). The functionality of the proposed asynchronous design is simulated and verified using Tanner Spice (TSPICE) in the 0.25 µm, 65 nm, and 180 nm technologies of the Taiwan Semiconductor Manufacturing Company (TSMC). The simulation results illustrate that the asynchronous design techniques achieve a 25.21% power reduction compared to the synchronous design and work at a speed of 475 MHz.

  16. Real-time minimal-bit-error probability decoding of convolutional codes

    Science.gov (United States)

    Lee, L.-N.

    1974-01-01

    A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of error of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.

  17. Real-time minimal bit error probability decoding of convolutional codes

    Science.gov (United States)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of error of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.

  18. The development of a line-scan imaging algorithm for the detection of fecal contamination on leafy greens

    Science.gov (United States)

    Yang, Chun-Chieh; Kim, Moon S.; Chuang, Yung-Kun; Lee, Hoyoung

    2013-05-01

    This paper reports the development of a multispectral algorithm, using a line-scan hyperspectral imaging system, to detect fecal contamination on leafy greens. Fresh bovine feces were applied to the surfaces of washed loose baby spinach leaves. A hyperspectral line-scan imaging system was used to acquire hyperspectral fluorescence images of the contaminated leaves. Hyperspectral image analysis resulted in the selection of the 666 nm and 688 nm wavebands for a multispectral algorithm to rapidly detect feces on leafy greens, by use of the ratio of fluorescence intensities measured at those two wavebands (666 nm over 688 nm). The algorithm successfully distinguished most of the less-diluted fecal spots (0.05 g feces/ml water and 0.025 g feces/ml water) and some of the more-diluted spots (0.0125 g feces/ml water and 0.00625 g feces/ml water) from the clean spinach leaves. The results showed the potential of the multispectral algorithm with a line-scan imaging system for application to automated food processing lines for food safety inspection of leafy green vegetables.
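
    The two-waveband ratio described above is simple enough to sketch directly. In the snippet below, the cube layout, the helper names and the threshold value are assumptions for illustration; the paper derives its detection rule from hyperspectral fluorescence image analysis.

        import numpy as np

        def band_index(wavelengths_nm, target_nm):
            # Nearest band to a target wavelength (hypothetical helper).
            return int(np.argmin(np.abs(wavelengths_nm - target_nm)))

        def fecal_mask(cube, wavelengths_nm, threshold=1.1):
            # Ratio of fluorescence intensities at 666 nm over 688 nm;
            # pixels above the (illustrative) threshold are flagged.
            i666 = cube[:, :, band_index(wavelengths_nm, 666.0)]
            i688 = cube[:, :, band_index(wavelengths_nm, 688.0)]
            ratio = i666 / np.maximum(i688, 1e-9)  # avoid division by zero
            return ratio > threshold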

  19. The Treeterbi and Parallel Treeterbi algorithms: efficient, optimal decoding for ordinary, generalized and pair HMMs.

    Science.gov (United States)

    Keibler, Evan; Arumugam, Manimozhiyan; Brent, Michael R

    2007-03-01

    Hidden Markov models (HMMs) and generalized HMMs have been successfully applied to many problems, but the standard Viterbi algorithm for computing the most probable interpretation of an input sequence (known as decoding) requires memory proportional to the length of the sequence, which can be prohibitive. Existing approaches to reducing memory usage either sacrifice optimality or trade increased running time for reduced memory. We developed two novel decoding algorithms, Treeterbi and Parallel Treeterbi, and implemented them in the TWINSCAN/N-SCAN gene-prediction system. The worst-case asymptotic space and time are the same as for standard Viterbi, but in practice, Treeterbi optimally decodes arbitrarily long sequences with generalized HMMs in bounded memory without increasing running time. Parallel Treeterbi uses the same ideas to split optimal decoding across processors, dividing latency to completion by approximately the number of available processors with constant average overhead per processor. Using these algorithms, we were able to optimally decode all human chromosomes with N-SCAN, which increased its accuracy relative to heuristic solutions. We also implemented Treeterbi for Pairagon, our pair-HMM-based cDNA-to-genome aligner. The TWINSCAN/N-SCAN/PAIRAGON open source software package is available from http://genes.cse.wustl.edu.

  20. Engineering of Algorithms for Hidden Markov models and Tree Distances

    DEFF Research Database (Denmark)

    Sand, Andreas

    Bioinformatics is an interdisciplinary scientific field that combines biology with mathematics, statistics and computer science in an effort to develop computational methods for handling, analyzing and learning from biological data. In the recent decades, the amount of available biological data has...... speed up all the classical algorithms for analyses and training of hidden Markov models. And I show how two particularly important algorithms, the forward algorithm and the Viterbi algorithm, can be accelerated through a reformulation of the algorithms and a somewhat more complicated parallelization...... contribution to the theoretically fastest set of algorithms presently available to compute two closely related measures of tree distance, the triplet distance and the quartet distance. And I further demonstrate that they are also the fastest algorithms in almost all cases when tested in practice....

  1. Eliminating harmonics in line to line voltage using genetic algorithm using multilevel inverter

    Energy Technology Data Exchange (ETDEWEB)

    Gunasekaran, R. [Excel College of Engineering and Technology, Komarapalayam (India). Electrical and Electronics Engineering; Karthikeyan, C. [K.S. Rangasamy College of Engineering, Tamil Nadu (India). Electrical and Electronics Engineering

    2017-04-15

    In this project the minimization of total harmonic distortion (THD) in the output voltage of multilevel inverters is discussed. The approach to reducing the harmonic content of the inverter's output voltage is harmonic elimination: the switching angles are varied with the fundamental frequency so that the output THD is minimized. In three-phase applications, the line voltage harmonics are of main concern from the load point of view. Using a genetic algorithm, a THD minimization process is applied directly to the line-to-line voltage of the inverter. The genetic algorithm (GA) allows the determination of the optimized parameters and consequently an optimal operating point of the circuit, and a wide pass band with unity gain is obtained.
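
    For orientation, the GA's objective can be sketched with the textbook Fourier series of an idealized multilevel staircase waveform; the code below evaluates a THD-style fitness for a set of switching angles. This is an illustrative line-to-neutral formulation with assumed names, whereas the paper applies the minimization directly to the line-to-line voltage.

        import numpy as np

        def harmonic_amplitude(theta, n):
            # n-th (odd) harmonic of an idealized multilevel staircase
            # waveform with switching angles theta, in radians.
            return (4.0 / (n * np.pi)) * np.sum(np.cos(n * theta))

        def thd_fitness(theta, max_harmonic=49):
            # GA fitness: total harmonic distortion up to a harmonic limit.
            fundamental = harmonic_amplitude(theta, 1)
            harmonics = [harmonic_amplitude(theta, n)
                         for n in range(3, max_harmonic + 1, 2)]
            return np.sqrt(np.sum(np.square(harmonics))) / abs(fundamental)

        # Example: three switching angles for a 7-level inverter (made up).
        print(thd_fitness(np.radians([12.0, 28.0, 55.0])))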

  2. Choosing channel quantization levels and Viterbi decoding for space diversity reception over the additive white Gaussian noise channel

    Science.gov (United States)

    Kalson, S.

    1986-01-01

    Previous work in the area of choosing channel quantization levels for an additive white Gaussian noise channel composed of one receiver-demodulator is reviewed, and it is shown how this applies to the Deep Space Network composed of several receiver-demodulators (space diversity reception). Viterbi decoding for the resulting quantized channel is discussed.

  3. The recommendation of line-balancing improvement on MCM product line 1 using genetics algorithm and moodie young at XYZ Company, Co.

    Science.gov (United States)

    Sriwana, I. K.; Marie, I. A.; Mangala, D.

    2017-12-01

    Kencana Gemilang, Co. is an electronics company engaged in manufacturing. The company manufactures and assembles household electronic products, such as rice cookers, fans, irons, and blenders. The company deals with underachievement of the established production target on MCM products line 1. This study aimed to calculate line efficiencies, delay times, and initial line smoothness indexes. The research was carried out by drawing a precedence diagram and gathering time data for each work element, followed by examination and calculation of standard times, as well as line balancing using the Moodie Young method and a genetic algorithm. Based on the results of the calculations, line balancing better than the existing initial conditions was obtained: the line efficiency improved by 18.39%, the balance delay decreased by 28.39%, and the smoothness index decreased by 23.85%.

  4. An improved DPSO with mutation based on similarity algorithm for optimization of transmission lines loading

    International Nuclear Information System (INIS)

    Shayeghi, H.; Mahdavi, M.; Bagheri, A.

    2010-01-01

    The static transmission network expansion planning (STNEP) problem plays a principal role in power system planning and should be evaluated carefully. Up till now, various methods have been presented to solve the STNEP problem, but only one of them considers the lines' adequacy rate at the end of the planning horizon and optimizes the problem by discrete particle swarm optimization (DPSO). DPSO is a new population-based intelligence algorithm and exhibits good performance on large-scale, discrete and non-linear optimization problems like STNEP. However, during the run of the algorithm, the particles become more and more similar and cluster around the best particle in the swarm, which makes the swarm converge prematurely to a local solution. In order to overcome these drawbacks while considering the lines' adequacy rate, in this paper expansion planning is implemented by merging a lines-loading parameter into the STNEP and inserting the investment cost into the fitness function constraints, using an improved DPSO algorithm. The proposed improved DPSO introduces a new concept, collectivity, based on the similarity between each particle and the current global best particle in the swarm, which prevents the premature convergence of DPSO around a local solution. The proposed method has been tested on Garver's network and a real transmission network in Iran, and compared with the DPSO-based method for solution of the TNEP problem. The results show that, by preventing premature convergence, the proposed improved DPSO-based method increases network adequacy considerably at almost the same expansion cost. Also, the convergence curves of both methods show that the precision of the proposed algorithm for the solution of the STNEP problem exceeds that of the DPSO approach.

  5. Optimization of Proton CT Detector System and Image Reconstruction Algorithm for On-Line Proton Therapy.

    Directory of Open Access Journals (Sweden)

    Chae Young Lee

    The purposes of this study were to optimize a proton computed tomography (pCT) system for proton range verification and to confirm the pCT image reconstruction algorithm based on projection images generated with the optimized parameters. For this purpose, we developed a new pCT scanner using the Geometry and Tracking (GEANT 4.9.6) simulation toolkit. GEANT4 simulations were performed to optimize the geometric parameters representing the detector thickness and the distance between the detectors for pCT. The system consisted of four silicon strip detectors for particle tracking and a calorimeter to measure the residual energies of the individual protons. The optimized pCT system design was then adjusted to ensure that the solution to a CS-based convex optimization problem would converge to yield the desired pCT images after a reasonable number of iterative corrections. In particular, we used a total-variation-based formulation that has been useful in exploiting prior knowledge about the minimal variations of proton attenuation characteristics in the human body. Examinations performed using our CS algorithm showed that high-quality pCT images could be reconstructed using sets of 72 projections within 20 iterations, and without any streaks or noise, which can be caused by under-sampling and proton starvation. Moreover, the images yielded by this CS algorithm were found to be of higher quality than those obtained using other reconstruction algorithms. The optimized pCT scanner system demonstrated the potential to perform high-quality pCT during on-line image-guided proton therapy, without increasing the imaging dose, by applying our CS-based proton CT reconstruction algorithm. This makes our optimized detector system and CS-based reconstruction algorithm potentially useful for on-line proton therapy.

  6. Noise propagation in iterative reconstruction algorithms with line searches

    International Nuclear Information System (INIS)

    Qi, Jinyi

    2003-01-01

    In this paper we analyze the propagation of noise in iterative image reconstruction algorithms. We derive theoretical expressions for the general form of preconditioned gradient algorithms with line searches. The results are applicable to a wide range of iterative reconstruction problems, such as emission tomography, transmission tomography, and image restoration. A unique contribution of this paper compared to our previous work [1] is that the line search is explicitly modeled and we do not use the approximation that the gradient of the objective function is zero. As a result, the error in the estimate of noise at early iterations is significantly reduced.

  7. Using a Quadtree Algorithm To Assess Line of Sight

    Science.gov (United States)

    Gonzalez, Joseph; Chamberlain, Robert; Tailor, Eric; Gutt, Gary

    2006-01-01

    A matched pair of computer algorithms determines whether line of sight (LOS) is obstructed by terrain. These algorithms were originally designed for use in conjunction with combat-simulation software in military training exercises, but could also be used for such commercial purposes as evaluating lines of sight for antennas or determining what can be seen from a "room with a view." The quadtree preparation algorithm operates on an array of digital elevation data and only needs to be run once for a terrain region, which can be quite large. Relatively little computation time is needed, as each elevation value is considered only one and one-third times. The LOS assessment algorithm uses that quadtree to answer LOS queries. To determine whether LOS is obstructed, a piecewise-planar (or higher-order) terrain skin is computationally draped over the digital elevation data. Adjustments are made to compensate for curvature of the Earth and for refraction of the LOS by the atmosphere. Average computing time appears to be proportional to the number of queries times the logarithm of the number of elevation data points. Accuracy is as high as is possible for the available elevation data, and symmetric results are assured. In the simulation, the LOS query program runs as a separate process, thereby making more random-access memory available for other computations.
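
    The pruning idea described above can be sketched compactly: each quadtree node stores the maximum elevation of its block, and a sightline that clears that maximum over the whole block skips the subtree. The sketch below uses a flat-cell terrain model and omits the Earth-curvature and refraction corrections mentioned in the record; all names are illustrative.

        import numpy as np

        class QuadNode:
            # Quadtree node storing the maximum elevation of a square block
            # of the DEM (side length assumed to be a power of two).
            def __init__(self, dem, r0, c0, size):
                self.r0, self.c0, self.size = r0, c0, size
                self.max_z = dem[r0:r0 + size, c0:c0 + size].max()
                self.kids = []
                if size > 1:
                    h = size // 2
                    self.kids = [QuadNode(dem, r0 + dr, c0 + dc, h)
                                 for dr in (0, h) for dc in (0, h)]

        def clip(p0, p1, lo, hi, t0, t1):
            # Clip the parameter interval [t0, t1] of p(t) = p0 + t*(p1 - p0)
            # against the slab [lo, hi] (Liang-Barsky style).
            d = p1 - p0
            if abs(d) < 1e-12:
                return (t0, t1) if lo <= p0 <= hi else None
            ta, tb = sorted(((lo - p0) / d, (hi - p0) / d))
            t0, t1 = max(t0, ta), min(t1, tb)
            return (t0, t1) if t0 <= t1 else None

        def los_clear(node, a, b, za, zb):
            # True if the sightline from (a, za) to (b, zb) clears the subtree;
            # a and b are (row, col) endpoints, heights are linear in t.
            span = clip(a[0], b[0], node.r0, node.r0 + node.size, 0.0, 1.0)
            if span is not None:
                span = clip(a[1], b[1], node.c0, node.c0 + node.size, *span)
            if span is None:
                return True   # the ray never passes over this block
            if min(za + t * (zb - za) for t in span) > node.max_z:
                return True   # the ray clears the whole block: prune subtree
            if not node.kids:
                return False  # leaf cell obstructs the sightline
            return all(los_clear(k, a, b, za, zb) for k in node.kids)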

  8. An Approach to a Comprehensive Test Framework for Analysis and Evaluation of Text Line Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Zoran N. Milivojevic

    2011-09-01

    The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key action for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms deal with text databases as reference templates. Because of the mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-line text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency, based on the obtained error type classification, are proposed. The first is based on the segmentation line error description, while the second one incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as supplements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation line error description has some advantages, being characterized by five measures that describe the measurement procedures.

  9. A fast, robust algorithm for power line interference cancellation in neural recording

    Science.gov (United States)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2014-04-01

    Objective. Power line interference may severely corrupt neural recordings at 50/60 Hz and harmonic frequencies. The interference is usually non-stationary and can vary in frequency, amplitude and phase. To retrieve the gamma-band oscillations at the contaminated frequencies, it is desired to remove the interference without compromising the actual neural signals at the interference frequency bands. In this paper, we present a robust and computationally efficient algorithm for removing power line interference from neural recordings. Approach. The algorithm includes four steps. First, an adaptive notch filter is used to estimate the fundamental frequency of the interference. Subsequently, based on the estimated frequency, harmonics are generated by using discrete-time oscillators, and then the amplitude and phase of each harmonic are estimated by using a modified recursive least squares algorithm. Finally, the estimated interference is subtracted from the recorded data. Main results. The algorithm does not require any reference signal, and can track the frequency, phase and amplitude of each harmonic. When benchmarked with other popular approaches, our algorithm performs better in terms of noise immunity, convergence speed and output signal-to-noise ratio (SNR). While minimally affecting the signal bands of interest, the algorithm consistently yields fast convergence and a high output SNR (>30 dB) under different conditions of interference strength (input SNR from -30 to 30 dB), power line frequency (45-65 Hz) and phase and amplitude drift. In addition, the algorithm features a straightforward parameter adjustment, since the parameters are independent of the input SNR, input signal power and the sampling rate. A hardware prototype was fabricated in a 65 nm CMOS process and tested. A software implementation of the algorithm has been made available for open access at https://github.com/mrezak/removePLI. Significance. The proposed algorithm features a highly robust operation, fast adaptation to
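
    A drastically simplified sketch of the cancellation stage is shown below: it assumes the fundamental frequency has already been tracked (the paper uses an adaptive notch filter for that) and substitutes a plain LMS update for the modified recursive least squares estimator, so it illustrates the harmonic amplitude/phase tracking idea rather than the published algorithm.

        import numpy as np

        def cancel_pli(x, f0, fs, n_harmonics=3, mu=0.01):
            # One (cos, sin) coefficient pair per harmonic; the running
            # error signal is the cleaned recording.
            w = np.zeros((n_harmonics, 2))
            y = np.zeros(len(x))
            k = np.arange(1, n_harmonics + 1)
            for i in range(len(x)):
                phase = 2.0 * np.pi * k * f0 * i / fs
                basis = np.column_stack([np.cos(phase), np.sin(phase)])
                e = x[i] - float(np.sum(w * basis))  # subtract the estimate
                w += mu * e * basis                  # LMS coefficient update
                y[i] = e
            return y

        # Example: 60 Hz interference and its 3rd harmonic in white noise.
        fs = 1000.0
        t = np.arange(4000) / fs
        pli = 2.0 * np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 180 * t)
        clean = cancel_pli(pli + 0.1 * np.random.randn(t.size), 60.0, fs)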

  10. Improved Seam-Line Searching Algorithm for UAV Image Mosaic with Optical Flow.

    Science.gov (United States)

    Zhang, Weilong; Guo, Bingxuan; Li, Ming; Liao, Xuan; Li, Wenzhuo

    2018-04-16

    Ghosting and seams are two major challenges in creating unmanned aerial vehicle (UAV) image mosaics. In response to these problems, this paper proposes an improved method for UAV image seam-line searching. First, an image matching algorithm is used to extract and match the features of adjacent images, so that they can be transformed into the same coordinate system. Then, the grayscale difference, the gradient minimum, and the optical flow value of pixels in a neighborhood of the overlapped area of adjacent images are calculated and combined into an energy function for seam-line searching. Based on that, an improved dynamic programming algorithm is proposed to search the optimal seam-lines to complete the UAV image mosaic. This algorithm adopts a more adaptive energy aggregation and traversal strategy, which can find a more ideal splicing path for adjacent UAV images and better avoid ground objects. The experimental results show that the proposed method can effectively solve the problems of ghosting and seams in panoramic UAV images.
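
    The dynamic-programming step can be pictured with a generic minimum-energy seam search over the overlap region, as sketched below; the energy map is taken as given (the paper combines grayscale difference, gradient and optical-flow terms) and the aggregation here is the plain seam-carving recurrence rather than the paper's more adaptive strategy.

        import numpy as np

        def search_seam(energy):
            # Minimum-energy vertical seam: each row extends the cheapest of
            # the three neighbouring paths in the row above.
            h, w = energy.shape
            cost = energy.astype(float).copy()
            back = np.zeros((h, w), dtype=int)
            for r in range(1, h):
                for c in range(w):
                    lo, hi = max(c - 1, 0), min(c + 2, w)
                    best = lo + int(np.argmin(cost[r - 1, lo:hi]))
                    back[r, c] = best
                    cost[r, c] += cost[r - 1, best]
            seam = [int(np.argmin(cost[-1]))]
            for r in range(h - 1, 0, -1):
                seam.append(back[r, seam[-1]])
            return seam[::-1]  # seam column index for each row, top to bottom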

  11. Fast algorithm for spectral processing with application to on-line welding quality assurance

    Science.gov (United States)

    Mirapeix, J.; Cobo, A.; Jaúregui, C.; López-Higuera, J. M.

    2006-10-01

    A new technique is presented in this paper for the analysis of welding process emission spectra to accurately estimate in real-time the plasma electronic temperature. The estimation of the electronic temperature of the plasma, through the analysis of the emission lines from multiple atomic species, may be used to monitor possible perturbations during the welding process. Unlike traditional techniques, which usually involve peak fitting to Voigt functions using the Levenberg-Marquardt recursive method, sub-pixel algorithms are used to more accurately estimate the central wavelength of the peaks. Three different sub-pixel algorithms will be analysed and compared, and it will be shown that the LPO (linear phase operator) sub-pixel algorithm is a better solution within the proposed system. Experimental tests during TIG-welding using a fibre optic to capture the arc light, together with a low cost CCD-based spectrometer, show that some typical defects associated with perturbations in the electron temperature can be easily detected and identified with this technique. A typical processing time for multiple peak analysis is less than 20 ms running on a conventional PC.

  12. Cellular Genetic Algorithm with Communicating Grids for Assembly Line Balancing Problems

    Directory of Open Access Journals (Sweden)

    BRUDARU, O.

    2010-05-01

    This paper presents a new approach with cellular multigrid genetic algorithms for the "I"-shaped and "U"-shaped assembly line balancing problems, including parallel workstations and compatibility constraints. First, a cellular hybrid genetic algorithm that uses a single grid is described. Appropriate operators for mutation, hypermutation, and crossover and two devoration techniques are proposed for creating and maintaining groups based on similarity. This monogrid algorithm is extended for handling many populations placed on different grids. In the multigrid version, the population of each grid is organized in clusters using the positional information of the chromosomes. A similarity preserving communication protocol between the clusters placed on different grids is introduced. The experimental evaluation shows that the multigrid cellular genetic algorithm with communicating grids is better than the hybrid genetic algorithm used for building it, whereas it dominates the monogrid version in all cases. Absolute performance is evaluated using classical benchmarks. The role of certain components of the cellular algorithm is explained and the effect of some parameters is evaluated.

  13. A study of digital holographic filters generation. Phase 2: Digital data communication system, volume 1

    Science.gov (United States)

    Ingels, F. M.; Mo, C. D.

    1978-01-01

    An empirical study of the performance of Viterbi decoders in bursty channels was carried out, and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1), k = 7 code on a computer using 20 channels having various error statistics, ranging from purely random-error to purely bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case, except the 1% random error channel, where the Viterbi decoder had one fewer bit decoding error.

  14. Motion Vector Estimation Using Line-Square Search Block Matching Algorithm for Video Sequences

    Directory of Open Access Journals (Sweden)

    Guo Bao-long

    2004-09-01

    Motion estimation and compensation techniques are widely used for video coding applications, but real-time motion estimation is not easily achieved due to its enormous computations. In this paper, a new fast motion estimation algorithm based on line search is presented, in which computational complexity is greatly reduced by using the line search strategy and a parallel search pattern. Moreover, accurate search is achieved because a small square search pattern is used. It has a best-case scenario of only 9 search points, which is 4 search points fewer than the diamond search algorithm. Simulation results show that, compared with previous techniques, the LSPS algorithm significantly reduces the computational requirements for finding motion vectors, while producing comparable performance in terms of motion compensation errors.
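
    A generic two-stage search in this spirit is sketched below: a coarse horizontal line search followed by a small square refinement around the best candidate, with SAD as the matching cost. The exact LSPS pattern differs; parameter names are assumptions, and the sketch assumes the block and search span stay inside the frame.

        import numpy as np

        def sad(a, b):
            return int(np.abs(a.astype(int) - b.astype(int)).sum())

        def line_square_search(ref, cur, r, c, bs=16, span=7):
            block = cur[r:r + bs, c:c + bs]
            best, best_cost = (0, 0), sad(block, ref[r:r + bs, c:c + bs])
            for dc in range(-span, span + 1):        # stage 1: line search
                cost = sad(block, ref[r:r + bs, c + dc:c + dc + bs])
                if cost < best_cost:
                    best, best_cost = (0, dc), cost
            base = best
            for dr in (-1, 0, 1):                    # stage 2: square pattern
                for dc in (-1, 0, 1):
                    mv = (base[0] + dr, base[1] + dc)
                    cost = sad(block, ref[r + mv[0]:r + mv[0] + bs,
                                          c + mv[1]:c + mv[1] + bs])
                    if cost < best_cost:
                        best, best_cost = mv, cost
            return best  # estimated motion vector (dr, dc)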

  15. Algorithms for the on-line travelling salesman

    NARCIS (Netherlands)

    Ausiello, G.; Feuerstein, E.; Leonardi, S.; Stougie, L.; Talamo, M.

    1999-01-01

    In this paper the problem of efficiently serving a sequence of requests, presented in an on-line fashion and located at points of a metric space, is considered. We call this problem the On-Line Travelling Salesman Problem (OLTSP). It has a variety of relevant applications in logistics and robotics. We

  16. A new algorithm for optimum voltage and reactive power control for minimizing transmission lines losses

    International Nuclear Information System (INIS)

    Ghoudjehbaklou, H.; Danai, B.

    2001-01-01

    Reactive power dispatch for voltage profile modification has been of interest to power utilities. Usually local bus voltages can be altered by changing generator voltages, reactive shunts, ULTC transformers and SVCs. Determination of optimum values for control parameters, however, is not simple for modern power system networks. Heuristic and rather intelligent algorithms have to be sought. In this paper a new algorithm is proposed that is based on a variant of a genetic algorithm combined with simulated annealing updates. In this algorithm a fuzzy multi-objective approach is used for the fitness function of the genetic algorithm. This fuzzy multi-objective function can efficiently modify the voltage profile in order to minimize transmission line losses, thus reducing the operating costs. The reason for such a combination is to utilize the best characteristics of each method and overcome their deficiencies. The proposed algorithm is much faster than the classical genetic algorithm and can be easily integrated into existing power utilities' software. The proposed algorithm is tested on an actual system model of 1284 buses, 799 lines, 1175 fixed and ULTC transformers, 86 generators, 181 controllable shunts and 425 loads.

  17. A study on low-cost, high-accuracy, and real-time stereo vision algorithms for UAV power line inspection

    Science.gov (United States)

    Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue

    2018-04-01

    Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or from poor accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that strikes an excellent balance between cost, matching accuracy and real-time performance for power line inspection using UAVs. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have a lower level of resource usage and also a higher level of matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms was implemented using the Spartan 6 FPGA. In comparative experiments, it was shown that the system using the improved algorithms outperformed the system based on the unimproved algorithms, in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.

  18. Determination of edge plasma parameters by a genetic algorithm analysis of spectral line shapes

    Energy Technology Data Exchange (ETDEWEB)

    Marandet, Y.; Genesio, P.; Godbert-Mouret, L.; Koubiti, M.; Stamm, R. [Universite de Provence (PIIM), Centre de Saint-Jerome, 13 - Marseille (France); Capes, H.; Guirlet, R. [Association Euratom-CEA Cadarache, 13 - Saint-Paul-lez-Durance (France). Dept. de Recherches sur la Fusion Controlee

    2003-07-01

    Comparing an experimental and a theoretical line shape can be achieved by a genetic algorithm (GA) based on an analogy to the mechanisms of natural selection. Such an algorithm is able to deal with complex non-linear models, and can avoid local minima. We have used this optimization tool in the context of edge plasma spectroscopy, for a determination of the temperatures and fractions of the various populations of neutral deuterium emitting the Dα line in 2 configurations of Tore-Supra: ergodic divertor and toroidal pumped limiter. Using the GA fit, the neutral emitters are separated into up to 4 populations which can be identified as resulting from molecular dissociation reactions, charge exchange, or reflection. In all the edge plasmas studied, a significant fraction of neutrals emit in the line wings, leading to neutrals with a temperature up to a few hundred eV if a Gaussian line shape is assumed. This conclusion could be modified if the line wing exhibits a non-Gaussian behavior.

  19. Determination of edge plasma parameters by a genetic algorithm analysis of spectral line shapes

    International Nuclear Information System (INIS)

    Marandet, Y.; Genesio, P.; Godbert-Mouret, L.; Koubiti, M.; Stamm, R.; Capes, H.; Guirlet, R.

    2003-01-01

    Comparing an experimental and a theoretical line shape can be achieved by a genetic algorithm (GA) based on an analogy to the mechanisms of natural selection. Such an algorithm is able to deal with complex non-linear models, and can avoid local minima. We have used this optimization tool in the context of edge plasma spectroscopy, for a determination of the temperatures and fractions of the various populations of neutral deuterium emitting the Dα line in 2 configurations of Tore-Supra: ergodic divertor and toroidal pumped limiter. Using the GA fit, the neutral emitters are separated into up to 4 populations which can be identified as resulting from molecular dissociation reactions, charge exchange, or reflection. In all the edge plasmas studied, a significant fraction of neutrals emit in the line wings, leading to neutrals with a temperature up to a few hundred eV if a Gaussian line shape is assumed. This conclusion could be modified if the line wing exhibits a non-Gaussian behavior.

  20. High-speed architecture for the decoding of trellis-coded modulation

    Science.gov (United States)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher-level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture, or by simplifying the algorithm itself. Designs employing new architectural techniques now exist; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.

  1. Hybrid phase retrieval algorithm for solving the twin image problem in in-line digital holography

    Science.gov (United States)

    Zhao, Jie; Wang, Dayong; Zhang, Fucai; Wang, Yunxin

    2010-10-01

    For the reconstruction in in-line digital holography, there are three terms overlapping with each other on the image plane, named the zero-order term, the real image and the twin image, respectively. The unwanted twin image seriously degrades the real image. A hybrid phase retrieval algorithm is presented to address this problem, which combines the advantages of two popular phase retrieval algorithms. One is an improved version of the universal iterative algorithm (UIA), called the phase flipping-based UIA (PFB-UIA). The key point of this algorithm is to flip the phase of the object iteratively. It is proved that the PFB-UIA is able to find the support of a complicated object. The other is the Fienup algorithm, which is a well-developed algorithm that uses the support of the object as the constraint throughout the iteration procedure. Thus, by following the Fienup algorithm immediately after the PFB-UIA, it is possible to produce the amplitude and the phase distributions of the object with high fidelity. Preliminary simulation results showed that the proposed algorithm is powerful for solving the twin image problem in in-line digital holography.
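
    For orientation, the support-constrained stage can be pictured with a generic Fienup-style error-reduction loop, sketched below; it alternates the measured Fourier amplitude with the object support and is purely illustrative (the PFB-UIA stage that finds the support, and the holographic propagation specifics, are not shown).

        import numpy as np

        def error_reduction(measured_amplitude, support, n_iter=200, seed=0):
            # Start from the measured amplitude with a random phase guess.
            rng = np.random.default_rng(seed)
            phase = rng.uniform(0.0, 2.0 * np.pi, measured_amplitude.shape)
            field = measured_amplitude * np.exp(1j * phase)
            for _ in range(n_iter):
                obj = np.fft.ifft2(field)
                obj = np.where(support, obj, 0.0)       # support constraint
                estimate = np.fft.fft2(obj)
                field = measured_amplitude * np.exp(1j * np.angle(estimate))
            return np.fft.ifft2(field)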

  2. Factorization of J-unitary matrix polynomials on the line and a Schur algorithm for generalized Nevanlinna functions

    NARCIS (Netherlands)

    Alpay, D.; Dijksma, A.; Langer, H.

    2004-01-01

    We prove that a 2 × 2 matrix polynomial which is J-unitary on the real line can be written as a product of normalized elementary J-unitary factors and a J-unitary constant. In the second part we give an algorithm for this factorization using an analog of the Schur transformation.

  3. Design and Implementation of Convolutional Encoder and Viterbi Decoder Using FPGA.

    Directory of Open Access Journals (Sweden)

    Riham Ali Zbaid

    2018-01-01

    Preserving the fidelity of data is the most significant concern in communication. Many factors affect the accuracy of data transmitted over a communication channel, such as noise; channel coding is used to overcome these effects. In this paper one type of channel coding, convolutional codes, is used. Convolutional encoding is a Forward Error Correction (FEC) method used in continuous one-way and real-time communication links. It can offer a great improvement in bit error rates, enabling small, low-energy, and cheap transmitters when used in applications such as satellites. This paper highlights the design, simulation and implementation of a convolutional encoder and Viterbi decoder using MATLAB (2011). The SIMULINK HDL coder is used to convert the MATLAB-SIMULINK models to VHDL for the Altera Cyclone II DE2-70 board. The simulation and implementation results coincide with the designed results.
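
    Since the abstract does not specify the code parameters, here is a self-contained sketch of the encoder/decoder pair for the textbook rate-1/2, constraint-length-3 code with octal generators (7, 5), using hard-decision Viterbi decoding; the paper's actual code may differ.

        G = [0b111, 0b101]  # generator polynomials, octal (7, 5)

        def conv_encode(bits):
            state, out = 0, []
            for b in bits:
                state = ((state << 1) | b) & 0b111    # 3-bit shift register
                out += [bin(state & g).count("1") % 2 for g in G]
            return out

        def viterbi_decode(received, n_bits):
            # Hard decisions: the path metric is the Hamming distance
            # between expected and received code bits.
            INF = 10 ** 9
            metric = [0, INF, INF, INF]               # 4 states (2-bit memory)
            paths = [[] for _ in range(4)]
            for i in range(n_bits):
                r = received[2 * i:2 * i + 2]
                new_metric, new_paths = [INF] * 4, [None] * 4
                for s in range(4):
                    for b in (0, 1):
                        full = ((s << 1) | b) & 0b111
                        nxt = full & 0b11
                        expect = [bin(full & g).count("1") % 2 for g in G]
                        m = metric[s] + sum(x != y for x, y in zip(expect, r))
                        if m < new_metric[nxt]:
                            new_metric[nxt], new_paths[nxt] = m, paths[s] + [b]
                metric, paths = new_metric, new_paths
            return paths[min(range(4), key=lambda s: metric[s])]

        msg = [1, 0, 1, 1, 0, 0, 1]
        coded = conv_encode(msg)
        coded[3] ^= 1                                  # inject one channel error
        print(viterbi_decode(coded, len(msg)) == msg)  # True: error corrected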

  4. Implementation of on-line data reduction algorithms in the CMS Endcap Preshower Data Concentrator Cards

    CERN Document Server

    Barney, D; Kokkas, P; Manthos, N; Sidiropoulos, G; Reynaud, S; Vichoudis, P

    2007-01-01

    The CMS Endcap Preshower (ES) sub-detector comprises 4288 silicon sensors, each containing 32 strips. The data are transferred from the detector to the counting room via 1208 optical fibres running at 800Mbps. Each fibre carries data from two, three or four sensors. For the readout of the Preshower, a VME-based system, the Endcap Preshower Data Concentrator Card (ES-DCC), is currently under development. The main objective of each readout board is to acquire on-detector data from up to 36 optical links, perform on-line data reduction via zero suppression and pass the concentrated data to the CMS event builder. This document presents the conceptual design of the Reduction Algorithms as well as their implementation in the ES-DCC FPGAs. These algorithms, as implemented in the ES-DCC, result in a data-reduction factor of 20.

  5. Implementation of On-Line Data Reduction Algorithms in the CMS Endcap Preshower Data Concentrator Card

    CERN Document Server

    Barney, David; Kokkas, Panagiotis; Manthos, Nikolaos; Reynaud, Serge; Sidiropoulos, Georgios; Vichoudis, Paschalis

    2006-01-01

    The CMS Endcap Preshower (ES) sub-detector comprises 4288 silicon sensors, each containing 32 strips. The data are transferred from the detector to the counting room via 1208 optical fibres running at 800Mbps. Each fibre carries data from 2, 3 or 4 sensors. For the readout of the Preshower, a VME-based system - the Endcap Preshower Data Concentrator Card (ES-DCC) is currently under development. The main objective of each readout board is to acquire on-detector data from up to 36 optical links, perform on-line data reduction (zero suppression) and pass the concentrated data to the CMS event builder. This document presents the conceptual design of the Reduction Algorithms as well as their implementation into the ES-DCC FPGAs. The algorithms implemented into the ES-DCC resulted in a reduction factor of ~20.

  6. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    Science.gov (United States)

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate lower-order polynomial interferences, a new quantitative calibration algorithm, "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into the PLS weight selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirements of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. The BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with a RMSECV of 2.04%, w/w (range 33-68%).
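
    As a rough stand-in for the idea, the sketch below subtracts a fitted low-order polynomial baseline and then runs ordinary PLS; the actual BCC-PLS instead embeds the baseline-correction constraint inside the PLS weight selection, so this two-step version only illustrates the ingredients. The data here are placeholders.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def subtract_baseline(spectra, order=2):
            # Fit and remove a low-order polynomial from each spectrum.
            x = np.arange(spectra.shape[1])
            out = np.empty_like(spectra, dtype=float)
            for i, s in enumerate(spectra):
                out[i] = s - np.polyval(np.polyfit(x, s, order), x)
            return out

        # Placeholder calibration set: rows are spectra, y is moisture (%).
        spectra = np.random.rand(40, 600)
        y = np.random.uniform(7.0, 19.0, size=40)
        pls = PLSRegression(n_components=5)
        pls.fit(subtract_baseline(spectra), y)
        y_hat = pls.predict(subtract_baseline(spectra))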

  7. A dynamic programming algorithm for the buffer allocation problem in homogeneous asymptotically reliable serial production lines

    Directory of Open Access Journals (Sweden)

    Diamantidis A. C.

    2004-01-01

    In this study, the buffer allocation problem (BAP) in homogeneous, asymptotically reliable serial production lines is considered. A known aggregation method, given by Lim, Meerkov, and Top (1990), for the performance evaluation (i.e., estimation of throughput) of this type of production line when the buffer allocation is known, is used as an evaluative method in conjunction with a newly developed dynamic programming (DP) algorithm for the BAP. The proposed algorithm is applied to production lines where the number of machines varies from four up to a hundred. The proposed algorithm is fast because it reduces the volume of computations by rejecting allocations that do not lead to maximization of the line's throughput. Numerical results are also given for large production lines.
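
    For contrast with the DP scheme, a brute-force baseline is easy to sketch: enumerate all ways to distribute the buffers and score each with a black-box throughput evaluator (the role played by the Lim-Meerkov-Top aggregation above; the paper's DP gains its speed by rejecting allocations that cannot improve throughput). The evaluator below is a toy placeholder.

        def allocations(n_buffers, n_slots):
            # All ways to distribute n_buffers among n_slots (stars and bars).
            if n_slots == 1:
                yield (n_buffers,)
                return
            for first in range(n_buffers + 1):
                for rest in allocations(n_buffers - first, n_slots - 1):
                    yield (first,) + rest

        def best_allocation(n_buffers, n_slots, throughput):
            return max(allocations(n_buffers, n_slots), key=throughput)

        def toy_throughput(alloc):
            # Purely illustrative: favours balanced, symmetric allocations.
            k = len(alloc)
            return -sum((alloc[i] - alloc[k - 1 - i]) ** 2
                        for i in range(k)) - max(alloc)

        print(best_allocation(6, 4, toy_throughput))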

  8. Enhanced Map-Matching Algorithm with a Hidden Markov Model for Mobile Phone Positioning

    Directory of Open Access Journals (Sweden)

    An Luo

    2017-10-01

    Numerous map-matching techniques have been developed to improve positioning, using Global Positioning System (GPS) data and other sensors. However, most existing map-matching algorithms process GPS data with high sampling rates to achieve a higher correct rate and strong universality. This paper introduces a novel map-matching algorithm based on a hidden Markov model (HMM) for GPS positioning and mobile phone positioning with a low sampling rate. The HMM is a statistical model well known for providing solutions to temporal recognition applications such as text and speech recognition. In this work, a hidden Markov chain model was built to establish the map-matching process, using geometric data, the topology matrix of road links in the road network, and a refined quadtree data structure. The HMM-based map-matching exploits the Viterbi algorithm to find the optimal road-link sequence, whose links constitute the hidden states of the HMM. The HMM-based map-matching algorithm is validated on a vehicle trajectory using GPS and mobile phone data. The results show a significant improvement for mobile phone positioning and for GPS data at both high and low sampling rates.
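
    The Viterbi step of such a map-matcher can be sketched as below: candidate road links per GPS fix are the hidden states, an emission score captures the fix-to-link distance, and a transition score captures route plausibility. The function names are assumptions; the paper builds these scores from geometric data, the link topology matrix and a quadtree index.

        import numpy as np

        def map_match(candidates, emission, transition):
            # candidates[t]: iterable of candidate road links for fix t;
            # emission(t, link) and transition(prev, link) return
            # probabilities (> 0), so we can work in log space.
            prob = {l: np.log(emission(0, l)) for l in candidates[0]}
            back = [{}]
            for t in range(1, len(candidates)):
                new_prob, back_t = {}, {}
                for link in candidates[t]:
                    scores = {p: lp + np.log(transition(p, link))
                              for p, lp in prob.items()}
                    best_prev = max(scores, key=scores.get)
                    new_prob[link] = scores[best_prev] + np.log(emission(t, link))
                    back_t[link] = best_prev
                prob = new_prob
                back.append(back_t)
            # Backtrack the most probable road-link sequence.
            path = [max(prob, key=prob.get)]
            for t in range(len(candidates) - 1, 0, -1):
                path.append(back[t][path[-1]])
            return path[::-1]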

  9. Fast intersection detection algorithm for PC-based robot off-line programming

    Science.gov (United States)

    Fedrowitz, Christian H.

    1994-11-01

    This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model which is managed by a simplified constructive solid geometry (CSG) model. The collision detection problem is divided into two steps. In the first step the complexity of the problem is reduced in linear time. In the second step the remaining solids are tested for intersection. For this, the Simplex algorithm, known from linear optimization, is used. It computes a point common to two convex polyhedra; the polyhedra intersect if such a point exists. With the simplified geometrical model of Ropsus, this algorithm also runs in linear time. In conjunction with the first step, the resulting collision detection algorithm requires linear time overall. Moreover, it computes the resultant intersection polyhedron using the dual transformation.

  10. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    Science.gov (United States)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial conditions, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. it is the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Being aware that this last issue is complex and influences all the analysis, a Multi Response Permutation Procedure (MRPP; Mielke et al., 1981) was inserted. It tests the model with K+1 states (where K is the number of states of the best model) if its likelihood is close to that of the K-state model. Finally, an evaluation of GAMM's performance, applied as a break detection method in the field of climate time series homogenization, is shown. References: 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.
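
    The GA wrapper around Baum-Welch can be sketched generically, as below: candidate initial-parameter vectors evolve by elitism, one-point crossover and mutation, with fitness given by a black box that runs Baum-Welch from each candidate and returns the complete log-likelihood. The encoding and operator details are assumptions; the paper's crossover follows its own restrictive rules.

        import numpy as np

        def ga_initialize(loglik_of, n_params, pop=20, gens=30, elite=2, seed=0):
            rng = np.random.default_rng(seed)
            population = rng.uniform(0.0, 1.0, size=(pop, n_params))
            for _ in range(gens):
                fitness = np.array([loglik_of(th) for th in population])
                order = np.argsort(fitness)[::-1]      # best first
                parents = population[order[: pop // 2]]
                children = []
                while len(children) < pop - elite:
                    a, b = parents[rng.integers(len(parents), size=2)]
                    cut = rng.integers(1, n_params)    # one-point crossover
                    child = np.concatenate([a[:cut], b[cut:]])
                    child += rng.normal(0.0, 0.05, n_params)  # mutation
                    children.append(np.clip(child, 0.0, 1.0))
                population = np.vstack([population[order[:elite]], children])
            fitness = np.array([loglik_of(th) for th in population])
            return population[int(np.argmax(fitness))]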

  11. Discrete PSO algorithm based optimization of transmission lines loading in TNEP problem

    International Nuclear Information System (INIS)

    Shayeghi, H.; Mahdavi, M.; Bagheri, A.

    2010-01-01

    Transmission network expansion planning (TNEP) is a basic part of power system planning that determines where, when and how many new transmission lines should be added to the network. Up till now, various methods have been presented to solve the static transmission network expansion planning (STNEP) problem, but none of them considers the lines' adequacy rate at the end of the planning horizon, i.e., the expanded network loses adequacy after some time and needs to be expanded again. In this paper, expansion planning is implemented by merging a lines-loading parameter into the STNEP and inserting the investment cost into the fitness function constraints, using a discrete particle swarm optimization (DPSO) algorithm. The expanded network then possesses maximum adequacy to supply the load demand, and its transmission lines become overloaded later. The proposed idea has been tested on Garver's network and an actual transmission network of the Azerbaijan regional electric company, Iran, and the results are compared with the decimal codification genetic algorithm (DCGA) technique. The evaluation of the results shows that the network possesses maximum economic efficiency. Also, it is shown that the precision and convergence speed of the proposed DPSO-based method for the solution of the STNEP problem are superior to those of the DCGA approach.

  12. Development of an On-Line Self-Tuning FPGA-PID-PWM Control Algorithm Design for DC-DC Buck Converter in Mobile Applications

    Directory of Open Access Journals (Sweden)

    Ahmed Sabah Al-Araji

    2017-08-01

    This paper presents a new development of an on-line hybrid self-tuning control algorithm for the Field Programmable Gate Array - Proportional Integral Derivative - Pulse Width Modulation (FPGA-PID-PWM) controller for the DC-DC buck converter, which is used in battery operation of mobile applications. The main goal of this work is to propose a structure for the hybrid Bees-PSO tuning control algorithm, which is capable of quickly and precisely searching the global regions in order to obtain optimal gain parameters for the proposed controller, so as to generate the best voltage control action and achieve the desired performance of the buck converter output. Matlab simulation results and experimental work with the Xilinx Integrated Software Environment (ISE) development tool show the robustness and effectiveness of the proposed on-line hybrid Bees-PSO tuning control algorithm, in terms of obtaining a smooth and unsaturated state voltage control action and minimizing the tracking voltage error of the buck converter output. Moreover, the number of fitness evaluations is reduced.

  13. A Fast Inspection of Tool Electrode and Drilling Depth in EDM Drilling by Detection Line Algorithm.

    Science.gov (United States)

    Huang, Kuo-Yi

    2008-08-21

    The purpose of this study was to develop a novel measurement method using a machine vision system. Besides using image processing techniques, the proposed system employs a detection line algorithm that detects the tool electrode length and drilling depth of a workpiece accurately and effectively. Different boundaries of areas on the tool electrode are defined: a baseline between the base and normal areas, an ND-line between the normal and drilling areas (the accumulated-carbon area), and a DD-line between the drilling area and the dielectric fluid droplet on the electrode tip. Accordingly, image processing techniques are employed to extract a tool electrode image, and the centroid, eigenvector, and principal axis of the tool electrode are determined. The developed detection line algorithm (DLA) is then used to detect the baseline, ND-line, and DD-line along the direction of the principal axis. Finally, the tool electrode length and drilling depth of the workpiece are estimated via the detected baseline, ND-line, and DD-line. Experimental results show good accuracy and efficiency in estimation of the tool electrode length and drilling depth under different conditions. Hence, this research may provide a reference for industrial application of EDM drilling measurement.

  14. Fast Simulation of 3-D Surface Flanging and Prediction of the Flanging Lines Based On One-Step Inverse Forming Algorithm

    International Nuclear Information System (INIS)

    Bao Yidong; Hu Sibo; Lang Zhikui; Hu Ping

    2005-01-01

    A fast simulation scheme for 3D curved-binder flanging and blank shape prediction of sheet metal based on a one-step inverse finite element method is proposed, in which total plasticity theory and the proportional loading assumption are used. The scheme can be used to simulate 3D flanging with a complex curved binder shape, and it is suitable for simulating any type of flanging model by numerically determining the flanging height and flanging lines. Compared with other methods, such as the analytic algorithm and the blank sheet-cut return method, the prominent advantage of the present scheme is that it can directly predict the location of the 3D flanging lines when simulating the flanging process. Therefore, the prediction time for flanging lines is markedly decreased. Two typical 3D curved-binder flanging cases, including stretch and shrink characteristics, are simulated with both the present scheme and an incremental FE non-inverse algorithm based on incremental plasticity theory, which shows the validity and high efficiency of the present scheme.

  15. A Line-Based Adaptive-Weight Matching Algorithm Using Loopy Belief Propagation

    Directory of Open Access Journals (Sweden)

    Hui Li

    2015-01-01

    In traditional adaptive-weight stereo matching, the rectangular-shaped support region requires excessive memory consumption and time. We propose a novel line-based stereo matching algorithm for obtaining a more accurate disparity map with low computational complexity. This algorithm can be divided into two steps: disparity map initialization and disparity map refinement. In the initialization step, a new adaptive-weight model based on a linear support region is put forward for cost aggregation. In this model, a neural network is used to evaluate the spatial proximity, and the mean-shift segmentation method is used to improve the accuracy of the color similarity; the Birchfield pixel dissimilarity function and the census transform are adopted to establish the dissimilarity measurement function. Then the initial disparity map is obtained by loopy belief propagation. In the refinement step, the disparity map is optimized by an iterative left-right consistency checking method and a segmentation voting method. The parameter values involved in this algorithm are determined through many simulation experiments to further improve the matching effect. Simulation results indicate that this new matching method performs well on standard stereo benchmarks, and the running time of our algorithm is remarkably lower than that of algorithms with a rectangle-shaped support region.

  16. Combinatorial Optimization Algorithms for Dynamic Multiple Fault Diagnosis in Automotive and Aerospace Applications

    Science.gov (United States)

    Kodali, Anuradha

    In this thesis, we develop dynamic multiple fault diagnosis (DMFD) algorithms to diagnose faults that are sporadic and coupled. First, we formulate a coupled factorial hidden Markov model-based (CFHMM) framework to diagnose dependent faults occurring over time (the dynamic case). Here, we implement a mixed-memory Markov coupling model to determine the most likely sequence of (dependent) fault states, the one that best explains the observed test outcomes over time. An iterative Gauss-Seidel coordinate ascent optimization method is proposed for solving the problem. A soft Viterbi algorithm is also implemented within the framework for decoding dependent fault states over time. We demonstrate the algorithm on simulated and real-world systems with coupled faults; the results show that this approach improves the correct isolation rate as compared to the formulation where independent fault states are assumed. Second, we formulate a generalization of set-covering, termed dynamic set-covering (DSC), which involves a series of coupled set-covering problems over time. The objective of the DSC problem is to infer the most probable time sequence of a parsimonious set of failure sources that explains the observed test outcomes over time. The DSC problem is NP-hard and intractable due to the fault-test dependency matrix that couples the failed tests and faults via the constraint matrix, and due to the temporal dependence of failure sources over time. Here, the DSC problem is motivated from the viewpoint of a dynamic multiple fault diagnosis problem, but it has wide applications in operations research, e.g., the facility location problem. Thus, we also formulated the DSC problem in the context of a dynamically evolving facility location problem. Here, a facility can be opened, closed, or temporarily unavailable at any time for a given requirement of demand points. These activities are associated with costs or penalties, viz., phase-in or phase-out for the opening or closing of a

  17. A Line Search Multilevel Truncated Newton Algorithm for Computing the Optical Flow

    Directory of Open Access Journals (Sweden)

    Lluís Garrido

    2015-06-01

    Full Text Available We describe the implementation details and give the experimental results of three optimization algorithms for dense optical flow computation. In particular, using a line search strategy, we evaluate the performance of the unilevel truncated Newton method (LSTN, a multiresolution truncated Newton (MR/LSTN and a full multigrid truncated Newton (FMG/LSTN. We use three image sequences and four models of optical flow for performance evaluation. The FMG/LSTN algorithm is shown to lead to better optical flow estimation with less computational work than both the LSTN and MR/LSTN algorithms.

  18. Image registration algorithm for high-voltage electric power live line working robot based on binocular vision

    Science.gov (United States)

    Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping

    2017-12-01

    In the process of dismounting and assembling the drop switch with the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, the gripper and the bolts used to fix the drop switch. To solve it, we study the theory of the robot's binocular vision system and the characteristics of dismounting and assembling the drop switch. We propose a coarse-to-fine image registration algorithm based on image correlation, which significantly improves the positioning precision of the manipulators and bolts. The algorithm performs three steps: first, the target points are marked in the right and left views, and the system judges whether the target point in the right view satisfies the lowest registration accuracy by using the similarity of the target points' backgrounds in the two views; this is a typical coarse-to-fine strategy. Second, the system calculates the epipolar line and generates a sequence of candidate regions containing matching points from the neighborhood of the epipolar line; the optimal matching image is confirmed by calculating the correlation between the template image in the left view and each region in the sequence. Finally, the precise coordinates of the target points in the right and left views are calculated from the optimal matching image. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels and the positioning accuracy in the world coordinate system is within 3 mm, which satisfies the requirements of dismounting and assembling the drop switch.
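
    The fine-matching stage can be pictured with a normalized cross-correlation search. The sketch below assumes rectified images so that the epipolar line is a pixel row, which simplifies the paper's epipolar-neighborhood search; all names and window sizes are illustrative.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized grayscale patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_along_row(template, right_img, row, x_min, x_max):
    """Slide the left-view template along one (rectified) epipolar row."""
    th, tw = template.shape
    best_x, best_score = -1, -1.0
    for x in range(x_min, min(x_max, right_img.shape[1] - tw)):
        score = ncc(template, right_img[row:row + th, x:x + tw])
        if score > best_score:
            best_x, best_score = x, score
    return best_x, best_score
```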

  19. Evaluation of HIV testing algorithms in Ethiopia: the role of the tie-breaker algorithm and weakly reacting test lines in contributing to a high rate of false positive HIV diagnoses.

    Science.gov (United States)

    Shanks, Leslie; Siddiqui, M Ruby; Kliescikova, Jarmila; Pearce, Neil; Ariti, Cono; Muluneh, Libsework; Pirou, Erwan; Ritmeijer, Koert; Masiga, Johnson; Abebe, Almaz

    2015-02-03

    In Ethiopia a tiebreaker algorithm using 3 rapid diagnostic tests (RDTs) in series is used to diagnose HIV. Discordant results between the first 2 RDTs are resolved by a third 'tiebreaker' RDT. Médecins Sans Frontières uses an alternative serial algorithm of 2 RDTs followed by a confirmation test for all double-positive RDT results. The primary objective was to compare the performance of the tiebreaker algorithm with a serial algorithm, and to evaluate the addition of a confirmation test to both algorithms. A secondary objective examined the positive predictive value (PPV) of weakly reactive test lines. The study was conducted in two HIV testing sites in Ethiopia. Study participants were recruited sequentially until 200 positive samples were reached. Each sample was re-tested in the laboratory on the 3 RDTs and on a simple-to-use confirmation test, the Orgenics Immunocomb Combfirm® (OIC). The gold standard test was the Western blot, with indeterminate results resolved by PCR testing. 2620 subjects were included, with an HIV prevalence of 7.7%. Each of the 3 RDTs had an individual specificity of at least 99%. The serial algorithm with 2 RDTs had a single false positive result (1 out of 204), giving a PPV of 99.5% (95% CI 97.3%-100%). The tiebreaker algorithm resulted in 16 false positive results (PPV 92.7%, 95% CI: 88.4%-95.8%). Adding the OIC confirmation test to either algorithm eliminated the false positives. All the false positives had at least one weakly reactive test line in the algorithm. The PPV of weakly reactive RDTs was significantly lower than that of strongly positive test lines. The risk of false positive HIV diagnosis in a tiebreaker algorithm is significant. We recommend abandoning the tie-breaker algorithm in favour of WHO-recommended serial or parallel algorithms, interpreting weakly reactive test lines as indeterminate results requiring further testing except in the setting of blood transfusion, and most importantly, adding a confirmation test
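
    Read from the abstract, the two decision rules branch as in the sketch below; treating a serially discordant result as "indeterminate, retest" is our assumption about standard practice rather than a detail stated in the study.

```python
def tiebreaker(rdt1: bool, rdt2: bool, rdt3: bool) -> str:
    """Tiebreaker rule: a third RDT resolves discordance between the first two."""
    if rdt1 == rdt2:
        return "positive" if rdt1 else "negative"
    return "positive" if rdt3 else "negative"

def serial_with_confirmation(rdt1: bool, rdt2: bool, confirm: bool) -> str:
    """Serial rule: only double-positive results go on to a confirmation test."""
    if not rdt1:
        return "negative"
    if not rdt2:
        return "indeterminate, retest"   # assumed handling of discordance
    return "positive" if confirm else "indeterminate, retest"
```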

  20. Wideband Impulse Modulation and Receiver Algorithms for Multiuser Power Line Communications

    Directory of Open Access Journals (Sweden)

    Andrea M. Tonello

    2007-01-01

    Full Text Available We consider a bit-interleaved coded wideband impulse-modulated system for power line communications. Impulse modulation is combined with direct-sequence code-division multiple access (DS-CDMA) to obtain a form of orthogonal modulation and to multiplex the users. We focus on the receiver signal processing algorithms and derive a maximum likelihood frequency-domain detector that takes into account the presence of impulse noise as well as the intercode interference (ICI) and the multiple-access interference (MAI) that are generated by the frequency-selective power line channel. To reduce complexity, we propose several simplified frequency-domain receiver algorithms with different complexity and performance. We address the problem of the practical estimation of the channel frequency response as well as the estimation of the correlation of the ICI-MAI-plus-noise that is needed in the detection metric. To improve the estimators' performance, simple hard feedback from the channel decoder is also used. Simulation results show that the scheme provides robust performance as a result of spreading the symbol energy both in frequency (through the wideband pulse) and in time (through the spreading code and the bit-interleaved convolutional code).

  1. On-line adaptive line frequency noise cancellation from a nuclear power measuring channel

    Directory of Open Access Journals (Sweden)

    Qadir Javed

    2011-01-01

    Full Text Available On-line software for adaptively canceling 50 Hz line frequency noise has been designed and tested at Pakistan Research Reactor 1. Line frequency noise causes serious problems in weak-signal acquisition. Sometimes this noise is so dominant that the original signal is totally corrupted. Although a notch filter can be used to eliminate this noise, if the signal of interest lies in the close vicinity of 50 Hz, the original signal is also attenuated and the overall performance is degraded. Adaptive noise removal is a technique that can remove the line frequency without degrading the desired signal. In this paper, line frequency noise is eliminated on-line from a nuclear power measuring channel. The adaptive LMS algorithm is used to cancel the 50 Hz noise. The algorithm has been implemented in LabVIEW with an NI 6024 data acquisition card. The quality of the acquired signal is improved considerably, as shown in the experimental results.
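
    A minimal numpy sketch of such an LMS canceller, using a quadrature pair of 50 Hz references and a synthetic corrupted signal; the sampling rate, step size, and amplitudes below are illustrative choices, not the values used on the reactor channel.

```python
import numpy as np

fs, f0, mu = 1000.0, 50.0, 0.01        # sample rate, line frequency, step size
t = np.arange(5000) / fs

signal = 0.5 * np.sin(2 * np.pi * 0.5 * t)                # slow 'true' signal
noisy = signal + 1.5 * np.sin(2 * np.pi * f0 * t + 0.7)   # 50 Hz pickup added

# Reference inputs: a sine/cosine pair locked to the line frequency.
ref = np.stack([np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)])

w = np.zeros(2)
cleaned = np.empty_like(noisy)
for k in range(t.size):
    x = ref[:, k]
    y = w @ x                  # current estimate of the interference
    e = noisy[k] - y           # error signal doubles as the cleaned output
    w += 2 * mu * e * x        # LMS weight update
    cleaned[k] = e
```

    Because the reference spans only the 50 Hz component, the adaptive filter behaves like a very narrow notch and leaves nearby signal content intact, which is exactly the advantage over a fixed notch filter noted in the abstract.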

  2. Loop algorithms for quantum simulations of fermion models on lattices

    International Nuclear Information System (INIS)

    Kawashima, N.; Gubernatis, J.E.; Evertz, H.G.

    1994-01-01

    Two cluster algorithms, based on constructing and flipping loops, are presented for world-line quantum Monte Carlo simulations of fermions and are tested on the one-dimensional repulsive Hubbard model. We call these algorithms the loop-flip and loop-exchange algorithms. For these two algorithms and the standard world-line algorithm, we calculated the autocorrelation times for various physical quantities and found that the ordinary world-line algorithm, which uses only local moves, suffers from very long correlation times that make it difficult to estimate not only the errors but also the average values themselves. These difficulties are especially severe in the low-temperature, large-U regime. In contrast, we find that the new algorithms, when used alone or in combination with themselves and the standard algorithm, can have significantly smaller autocorrelation times, in some cases smaller by three orders of magnitude. The new algorithms, which use nonlocal moves, are discussed from the point of view of a general prescription for developing cluster algorithms. The loop-flip algorithm is also shown to be ergodic and to belong to the grand canonical ensemble. Extensions to other models and higher dimensions are briefly discussed

  3. Line Balancing Using Largest Candidate Rule Algorithm In A Garment Industry: A Case Study

    Directory of Open Access Journals (Sweden)

    V. P.Jaganathan

    2014-12-01

    Full Text Available The emergence of fast changes in fashion has given rise to the need to shorten production cycle times in the garment industry. As effective usage of resources has a significant effect on the productivity and efficiency of production operations, garment manufacturers are urged to utilize their resources effectively in order to meet dynamic customer demand. This paper focuses specifically on line balancing and layout modification. The aim of assembly line balancing in sewing lines is to assign tasks to the workstations, so that the machines of each workstation can perform the assigned tasks with a balanced loading. The Largest Candidate Rule (LCR) algorithm is deployed in this paper, as sketched below.
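
    A minimal sketch of the LCR assignment loop, under the usual assumptions (known task times, precedence relations, and a target cycle time); identifiers are illustrative.

```python
def largest_candidate_rule(task_times, precedence, cycle_time):
    """task_times: dict task -> time; precedence: dict task -> set of
    predecessor tasks; returns a list of stations (lists of tasks)."""
    order = sorted(task_times, key=task_times.get, reverse=True)
    assigned, stations = set(), []
    while len(assigned) < len(task_times):
        station, load = [], 0.0
        progress = True
        while progress:
            progress = False
            for task in order:
                if (task not in assigned
                        and precedence.get(task, set()) <= assigned
                        and load + task_times[task] <= cycle_time):
                    station.append(task)
                    assigned.add(task)
                    load += task_times[task]
                    progress = True
                    break   # rescan from the largest remaining candidate
        if not station:
            raise ValueError("some task exceeds the cycle time")
        stations.append(station)
    return stations

# Toy example: 5 sewing tasks, cycle time 10.
print(largest_candidate_rule(
    {"a": 6, "b": 4, "c": 5, "d": 2, "e": 3},
    {"c": {"a"}, "d": {"b"}, "e": {"c", "d"}},
    10))
```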

  4. CUDAMPF: a multi-tiered parallel framework for accelerating protein sequence search in HMMER on CUDA-enabled GPU.

    Science.gov (United States)

    Jiang, Hanyu; Ganesan, Narayan

    2016-02-27

    The HMMER software suite is widely used for analysis of homologous protein and nucleotide sequences with high sensitivity. The latest version of hmmsearch in HMMER 3.x utilizes a heuristic pipeline consisting of the MSV/SSV (Multiple/Single ungapped Segment Viterbi) stage, the P7Viterbi stage and the Forward scoring stage to accelerate homology detection. Since the latest version is highly optimized for performance on modern multi-core CPUs with SSE capabilities, only a few acceleration attempts report speedup. However, the most compute-intensive tasks within the pipeline (viz., the MSV/SSV and P7Viterbi stages) still stand to benefit from the computational capabilities of massively parallel processors. The Multi-Tiered Parallel Framework (CUDAMPF) implemented on CUDA-enabled GPUs presented here offers finer-grained parallelism for the MSV/SSV and Viterbi algorithms. We couple the SIMT (Single Instruction Multiple Threads) mechanism with SIMD (Single Instruction Multiple Data) video instructions and warp-synchronism to achieve high-throughput processing and eliminate thread idling. We also propose a hardware-aware optimal allocation scheme for scarce resources like on-chip memory and caches in order to boost the performance and scalability of CUDAMPF. In addition, runtime compilation via NVRTC, available with CUDA 7.0, is incorporated into the presented framework; it not only helps unroll the innermost loop to yield up to a 2- to 3-fold speedup over static compilation but also enables dynamic loading and switching of kernels depending on the query model size, in order to achieve optimal performance. CUDAMPF is designed as a hardware-aware parallel framework for accelerating computational hotspots within the hmmsearch pipeline as well as other sequence alignment applications. It achieves significant speedup by exploiting hierarchical parallelism on a single GPU and takes full advantage of limited resources based on their performance features. In addition to exceeding performance of other

  5. R and D study on on-line criticality surveillance system (V)

    International Nuclear Information System (INIS)

    Yamada, Sumasu

    2001-02-01

    In view of the necessity and importance of criticality surveillance systems for ensuring the safety of nuclear fuel manufacturing and reprocessing plants, 5-year basic studies and 4-year R and D studies on an on-line criticality surveillance system have been carried out since 1991. This report is a summary of these series of studies. Noting that the signal from a neutron detector is random in principle, these studies aimed to accumulate the knowledge needed for developing an inexpensive criticality surveillance system with quick response based on the Auto-Regressive Moving Average (ARMA) model identification algorithm. During the five-year basic studies on the criticality surveillance system starting in 1991, we obtained the knowledge required for developing a criticality surveillance system based on the ARMA model identification algorithm through 1) studies on recursive ARMA model identification algorithms most appropriate for estimating subcriticality from time series data under a steady state condition, 2) studies on pre-processing of signals from neutron detectors, 3) developing a new recursive ARMA model identification algorithm with small time delay to estimate time-dependent subcriticality, 4) proposing a basic concept for the elements required for an on-line criticality surveillance system, and 5) numerical analysis of data from the DCA experiments. During the following four-year R and D studies on a criticality surveillance system starting in 1996, we 1) proposed the modules required for an on-line criticality surveillance system, 2) revealed the effectiveness of an adaptive digital filter (ADF) algorithm as an important redundancy to the recursive ARMA model identification algorithm to be used in the signal processing module through numerical analysis of real data, 3) proposed a module for the Feynman-α method over γ-ray signals and a fast signal processing module for γ-ray signals, 4) developed a line-noise removal filter (notch filter) and revealed its effectiveness for the DCA data corrupted with power-line

  6. Enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding for four-level holographic data storage systems

    Science.gov (United States)

    Kong, Gyuyeol; Choi, Sooyong

    2017-09-01

    An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While previous four-ary modulation codes focus on preventing the worst two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gain for better bit error rate performance. To achieve significant coding gains from the four-ary modulation codes, we design a new 2/3 four-ary modulation code that enlarges the free distance on the trellis, found through extensive simulation. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. The simulation results show that the proposed four-ary modulation code achieves more than 1 dB of gain compared with the conventional four-ary modulation code.

  7. A rank-based algorithm of differential expression analysis for small cell line data with statistical control.

    Science.gov (United States)

    Li, Xiangyu; Cai, Hao; Wang, Xianlong; Ao, Lu; Guo, You; He, Jun; Gu, Yunyan; Qi, Lishuang; Guan, Qingzhou; Lin, Xu; Guo, Zheng

    2017-10-13

    To detect differentially expressed genes (DEGs) in small-scale cell line experiments, usually with only two or three technical replicates for each state, the commonly used statistical methods such as significance analysis of microarrays (SAM), limma and RankProd (RP) lack statistical power, while the fold-change method lacks any statistical control. In this study, we demonstrated that the within-sample relative expression orderings (REOs) of gene pairs were highly stable among technical replicates of a cell line but often widely disrupted after certain treatments such as gene knockdown, gene transfection and drug treatment. Based on this finding, we customized the RankComp algorithm, previously designed for individualized differential expression analysis through REO comparison, to identify DEGs with certain statistical control for small-scale cell line data. In both simulated and real data, the new algorithm, named CellComp, exhibited high precision with much higher sensitivity than the original RankComp, SAM, limma and RP methods. Therefore, CellComp provides an efficient tool for analyzing small-scale cell line data.
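
    To make the REO idea concrete, here is a small numpy sketch that counts how many within-sample gene-pair orderings flip between two samples; the function and the strict-reversal criterion are our illustrative choices, not the CellComp statistic itself, and the O(n²) pairwise comparison is only sensible for modest gene counts.

```python
import numpy as np

def disrupted_pair_fraction(expr_a, expr_b):
    """Fraction of gene pairs (i, j) whose ordering g_i vs g_j reverses
    between two expression profiles of the same genes."""
    sa = np.sign(expr_a[:, None] - expr_a[None, :])   # pairwise order, sample A
    sb = np.sign(expr_b[:, None] - expr_b[None, :])   # pairwise order, sample B
    iu = np.triu_indices(expr_a.size, k=1)            # each unordered pair once
    return float(np.mean(sa[iu] * sb[iu] < 0))        # strict reversals only

rng = np.random.default_rng(1)
control = rng.normal(size=200)
treated = control.copy()
treated[:20] += 3.0                                   # 'knockdown-like' shift
print(disrupted_pair_fraction(control, treated))
```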

  8. Improving Stability and Convergence for Adaptive Radial Basis Function Neural Networks Algorithm (On-Line Harmonics Estimation Application)

    Directory of Open Access Journals (Sweden)

    Eyad K Almaita

    2017-03-01

    Keywords: energy efficiency, power quality, radial basis function, neural networks, adaptive, harmonic. How to cite this article: Almaita, E.K. and Shawawreh, J.Al (2017) Improving Stability and Convergence for Adaptive Radial Basis Function Neural Networks Algorithm (On-Line Harmonics Estimation Application). International Journal of Renewable Energy Development, 6(1), 9-17. http://dx.doi.org/10.14710/ijred.6.1.9-17

  9. On-line experimental validation of a model-based diagnostic algorithm dedicated to a solid oxide fuel cell system

    Science.gov (United States)

    Polverino, Pierpaolo; Esposito, Angelo; Pianese, Cesare; Ludwig, Bastian; Iwanschitz, Boris; Mai, Andreas

    2016-02-01

    In the current energy scenario, Solid Oxide Fuel Cells (SOFCs) exhibit appealing features which make them suitable for environmentally friendly power production, especially for stationary applications. An example is represented by micro-combined heat and power (μ-CHP) generation units based on SOFC stacks, which are able to produce electric and thermal power with high efficiency and low pollutant and greenhouse gas emissions. However, the main limitations to their diffusion into the mass market are high maintenance and production costs and short lifetime. To improve these aspects, current research focuses on the development of robust and generalizable diagnostic techniques aimed at detecting and isolating faults within the entire system (i.e. SOFC stack and balance of plant). Coupled with appropriate recovery strategies, diagnosis can prevent undesired system shutdowns during faulty conditions, with a consequent lifetime increase and maintenance cost reduction. This paper deals with the on-line experimental validation of a model-based diagnostic algorithm applied to a pre-commercial SOFC system. The proposed algorithm exploits a Fault Signature Matrix based on a Fault Tree Analysis and improved through fault simulations. The algorithm is characterized on the considered system and validated by means of experimental induction of faulty states in controlled conditions.

  10. Quantitative comparison of direct phase retrieval algorithms in in-line phase tomography

    International Nuclear Information System (INIS)

    Langer, Max; Cloetens, Peter; Guigay, Jean-Pierre; Peyrin, Francoise

    2008-01-01

    A well-known problem in x-ray microcomputed tomography is low sensitivity. Phase contrast imaging offers an increase in sensitivity of up to a factor of 10^3 in the hard x-ray region, which makes it possible to image soft tissue and small density variations. If a sufficiently coherent x-ray beam, such as that obtained from a third-generation synchrotron, is used, phase contrast can be obtained by simply moving the detector downstream of the imaged object. This setup is known as in-line or propagation-based phase contrast imaging. A quantitative relationship exists between the phase shift induced by the object and the recorded intensity, and inversion of this relationship is called phase retrieval. Since the phase shift is proportional to projections through the three-dimensional refractive index distribution in the object, once the phase is retrieved, the refractive index can be reconstructed by using the phase as input to a tomographic reconstruction algorithm. A comparison between four phase retrieval algorithms is presented. The algorithms are based on the transport of intensity equation (TIE), the transport of intensity equation for weak absorption, the contrast transfer function (CTF), and a mixed approach between the CTF and TIE, respectively. The compared methods all rely on linearization of the relationship between phase shift and recorded intensity to yield fast phase retrieval algorithms. The phase retrieval algorithms are compared using both simulated and experimental data, acquired at the European Synchrotron Radiation Facility third-generation synchrotron light source. The algorithms are evaluated in terms of two different reconstruction error metrics. While being slightly less computationally effective, the mixed approach shows the best performance in terms of the chosen criteria.

  11. Streaming Algorithms for Line Simplification

    DEFF Research Database (Denmark)

    Abam, Mohammad; de Berg, Mark; Hachenberger, Peter

    2010-01-01

    this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our simplification to the error of the optimal simplification with k points. We obtain algorithms with O(1) competitive ratio for three cases: convex paths, where the error is measured using the Hausdorff distance (or Fréchet distance); xy-monotone paths, where the error is measured using the Hausdorff distance (or Fréchet distance); and general paths, where the error is measured using the Fréchet distance. In the first case the algorithm needs O(k) additional storage, and in the latter two cases the algorithm needs O(k²) additional storage.

  12. Improving the Power Quality in Tehran Metro Line-Two Using the Ant Colony Algorithm

    Directory of Open Access Journals (Sweden)

    H. Ehteshami

    2017-12-01

    Full Text Available This research aims to study the improvement of power quality in Tehran metro line 2 using the ant colony algorithm and to investigate the factors affecting the achievement of this goal. In order to put Tehran on the road to sustainable development, finding a solution for dealing with air pollution is essential. The use of public transportation, especially the metro, is one of the ways to achieve this goal. Since the highest share of pollutants in Tehran belongs to cars and mobile sources, the relevant statistical indicators are estimated by assuming the effect of metro line development, and the resulting reduction in traffic, on the power quality index.

  13. Control and monitoring of On-line Trigger Algorithms using gaucho

    CERN Document Server

    Van Herwijnen, Eric

    2005-01-01

    In the LHCb experiment, the trigger decisions are computed by Gaudi (the LHCb software framework) algorithms running on an event filter farm of around 2000 PCs. The control and monitoring of these algorithms has to be integrated in the overall experiment control system (ECS). To enable and facilitate this integration, Gaucho, the GAUdi Component Helping Online, was developed. Gaucho consists of three parts: a C++ package integrated with Gaudi, the communications package DIM, and a set of PVSS panels and libraries. PVSS is a commercial SCADA system chosen as the toolkit and framework for the LHCb controls system. The C++ package implements the monitor service interface (IMonitorSvc) following the Gaudi specifications, with methods to declare variables and histograms for monitoring. Algorithm writers use them to indicate which quantities should be monitored. Since the interface resides in the GaudiKernel, the code does not need changing if the monitoring services are not present. The Gaudi main job implements a state ma...

  14. Multi-objective optimization algorithms for mixed model assembly line balancing problem with parallel workstations

    Directory of Open Access Journals (Sweden)

    Masoud Rabbani

    2016-12-01

    Full Text Available This paper deals with the mixed model assembly line (MMAL) balancing problem of type-I. In MMALs several products are made on an assembly line, and the similarity of these products is very high. As a result, it is possible to assemble several types of products simultaneously without any additional setup times. The problem has some particular features such as parallel workstations and precedence constraints in dynamic periods, in which each period also affects the next period. The research intends to reduce the number of workstations and maximize the workload smoothness between workstations. Dynamic periods are used to determine all variables in different periods to achieve efficient solutions. A non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO) are used to solve the problem. The proposed model is validated with GAMS software for small-size problems, and the performance of the foregoing algorithms is compared based on several comparison metrics. NSGA-II outperforms MOPSO with respect to some of the comparison metrics used in this paper, but in other metrics MOPSO is better than NSGA-II. Finally, conclusions and future research directions are provided.

  15. Efficient algorithm for generating spectra using line-by-line methods

    International Nuclear Information System (INIS)

    Sonnad, V.; Iglesias, C.A.

    2011-01-01

    A method is presented for efficient generation of spectra using line-by-line approaches. The only approximation is replacing the line shape function with an interpolation procedure, which makes the method independent of the functional form of the line profile. The resulting computational savings for a large number of lines is proportional to the number of frequency points in the spectral range. Therefore, for large-scale problems the method can provide speedups of two orders of magnitude or more. The first step is to replace the explicit calculation of the profile by the Newton divided-differences interpolating polynomial. The second step is to accumulate the lines, effectively reducing their number to the number of frequency points. The final step is recognizing the resulting expression as a convolution amenable to FFT methods. The reduction in computational effort for a configuration-to-configuration transition array with a large number of lines is proportional to the number of frequency points. The method involves no approximations except for replacing the explicit profile evaluation by interpolation. Specifically, the line accumulation and convolution are exact given the interpolation procedure. Furthermore, the interpolation makes the method independent of the functional form of the line profile, contrary to other schemes that use FFT methods to generate line-by-line spectra but rely on the analytic form of the profile's Fourier transform. Finally, the method relies on a uniform frequency mesh. For non-uniform frequency meshes, however, the method can be applied by using a suitable temporary uniform mesh and interpolating the results onto the final mesh with little additional cost.
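
    Under the stated assumptions (uniform frequency mesh, one shared profile shape), the accumulate-then-convolve structure looks roughly like the numpy sketch below; for brevity it deposits each line in its nearest bin, whereas the paper distributes lines with a Newton divided-differences interpolation.

```python
import numpy as np

def spectrum_fft(freq, centers, strengths, profile):
    """freq: uniform grid (N,); centers/strengths: line list; profile:
    line shape sampled on the same grid, centered mid-array."""
    df = freq[1] - freq[0]
    acc = np.zeros_like(freq)
    idx = np.round((centers - freq[0]) / df).astype(int)
    ok = (idx >= 0) & (idx < freq.size)
    np.add.at(acc, idx[ok], strengths[ok])       # accumulate line strengths
    # Convolution with the profile via FFT (circular; pad in practice).
    return np.fft.irfft(np.fft.rfft(acc)
                        * np.fft.rfft(np.fft.ifftshift(profile)), n=freq.size)

freq = np.linspace(0.0, 100.0, 4096)
profile = np.exp(-0.5 * ((freq - 50.0) / 0.2) ** 2)   # Gaussian line shape
rng = np.random.default_rng(0)
spec = spectrum_fft(freq, rng.uniform(10, 90, 10000),
                    rng.exponential(1.0, 10000), profile)
```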

  16. An automatic optimum number of well-distributed ground control lines selection procedure based on genetic algorithm

    Science.gov (United States)

    Yavari, Somayeh; Valadan Zoej, Mohammad Javad; Salehi, Bahram

    2018-05-01

    The procedure of selecting an optimum number and best distribution of ground control information is important in order to achieve accurate and robust registration results. This paper proposes a new general procedure based on the Genetic Algorithm (GA) which is applicable to all kinds of features (point, line, and areal features); however, linear features, due to their unique characteristics, are of interest in this investigation. The method is called the Optimum number of Well-Distributed ground control Information Selection (OWDIS) procedure. Using this method, a population of binary chromosomes is randomly initialized. Ones indicate the presence of a pair of conjugate lines as a ground control line (GCL), and zeros specify their absence. The chromosome length is set equal to the number of all conjugate lines. For each chromosome, the unknown parameters of a proper mathematical model can be calculated using the selected GCLs (the ones in each chromosome). Then, a limited number of Check Points (CPs) are used to evaluate the Root Mean Square Error (RMSE) of each chromosome as its fitness value. The procedure continues until a stopping criterion is reached. The number and positions of ones in the best chromosome indicate the GCLs selected from among all conjugate lines. To evaluate the proposed method, a GeoEye and an IKONOS image over different areas of Iran are used. Comparing the results obtained by the proposed method in a traditional RFM with conventional methods that use all conjugate lines as GCLs shows a five-fold accuracy improvement (pixel-level accuracy) as well as the strength of the proposed method. To prevent an over-parametrization error in a traditional RFM due to the selection of a high number of improper correlated terms, an optimized line-based RFM is also proposed. The results show the superiority of the combination of the proposed OWDIS method with an optimized line-based RFM in terms of increasing the accuracy to better than 0.7 pixel, reliability, and reducing systematic
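
    A toy sketch of the GA machinery on a synthetic subset-selection problem — binary chromosomes, one-point crossover, bit-flip mutation, and RMSE on held-out check points as the fitness — where everything from the linear stand-in model to the GA parameters is illustrative rather than the actual OWDIS configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in problem: choose columns (candidate GCLs) that best fit check data.
n_cand, n_obs = 20, 60
X = rng.normal(size=(n_obs, n_cand))
w_true = np.zeros(n_cand); w_true[[2, 5, 11]] = [1.5, -2.0, 0.8]
y = X @ w_true + 0.05 * rng.normal(size=n_obs)
train, check = np.arange(40), np.arange(40, 60)       # CPs held out

def fitness(chrom):
    idx = np.flatnonzero(chrom)
    if idx.size == 0:
        return np.inf
    w, *_ = np.linalg.lstsq(X[np.ix_(train, idx)], y[train], rcond=None)
    r = X[np.ix_(check, idx)] @ w - y[check]
    return np.sqrt(np.mean(r ** 2))                   # RMSE on check points

pop = rng.integers(0, 2, size=(30, n_cand))           # binary chromosomes
for _ in range(50):
    pop = pop[np.argsort([fitness(c) for c in pop])]  # rank by fitness
    kids = []
    for _ in range(len(pop) // 2):
        a, b = pop[rng.integers(0, 10, size=2)]       # parents from the elite
        cut = rng.integers(1, n_cand)                 # one-point crossover
        kid = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_cand) < 0.02              # bit-flip mutation
        kids.append(np.where(flip, 1 - kid, kid))
    pop = np.vstack([pop[: len(pop) - len(kids)], kids])

best = min(pop, key=fitness)
print(np.flatnonzero(best), fitness(best))
```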

  17. Application of Viterbi’s Algorithm for Predicting Rainfall Occurrence and Simulating Wet/Dry Spells – Comparison with Common Methods

    Directory of Open Access Journals (Sweden)

    M. Ghamghami

    2015-06-01

    Full Text Available Today, there are various statistical models for the discrete simulation of rainfall occurrence/non-occurrence, with emphasis on long-term climatic statistics. Nevertheless, the accuracy of such models or predictions should be improved on short timescales. In the present paper, it is assumed that the rainfall occurrence/non-occurrence sequences follow a two-layer Hidden Markov Model (HMM) consisting of a hidden layer (the discrete time series of rainfall occurrence and non-occurrence) and an observable layer (weather variables); Khoramabad station during the period 1961-2005 is considered as a case study. The Viterbi decoding algorithm has been used for simulation of wet/dry sequences. The performance of five weather variables as the observable variable, namely air pressure, vapor pressure, diurnal air temperature, relative humidity and dew point temperature, was evaluated using several error measures in order to choose the best observable variable. Results showed that diurnal air temperature is the best observable variable for the decoding of wet/dry sequences, reflecting the strong physical relationship between these variables. The Viterbi output was also compared with the ClimGen and LARS-WG weather generators in terms of two accuracy measures: similarity of climatic statistics and forecasting skill. Finally, it is concluded that the HMM has more skill than the other two weather generators in simulating wet and dry spells. We therefore recommend the use of the HMM instead of the two other approaches for the generation of wet and dry sequences.
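
    For reference, a minimal log-domain Viterbi decoder; the two hidden states stand for dry/wet and the three observation symbols for a discretized weather variable, but all of the toy probabilities at the bottom are invented for illustration, not the Khoramabad parameters.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """obs: observation indices (T,); pi: initial probs (S,);
    A: transition probs (S, S); B: emission probs (S, M).
    Returns the most likely hidden state sequence."""
    T, S = len(obs), len(pi)
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]   # best log-prob ending in each state
    psi = np.zeros((T, S), dtype=int)      # back-pointers
    for t in range(1, T):
        scores = delta[:, None] + logA     # scores[i, j]: best path i -> j
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):          # trace back-pointers
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

pi = np.array([0.7, 0.3])                          # dry, wet
A = np.array([[0.8, 0.2], [0.4, 0.6]])             # state persistence
B = np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])   # cool/mild/warm emissions
print(viterbi([0, 1, 2, 2, 0], pi, A, B))          # -> [0, 0, 1, 1, 0]
```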

  18. Detection of boiling by Piety's on-line PSD-pattern recognition algorithm applied to neutron noise signals in the SAPHIR reactor

    International Nuclear Information System (INIS)

    Spiekerman, G.

    1988-09-01

    A partial blockage of the cooling channels of a fuel element in a swimming pool reactor could lead to vapour generation and to burn-out. To detect such anomalies, a pattern recognition algorithm based on power spectral density (PSD) proposed by Piety was further developed and implemented on a PDP 11/23 for on-line applications. This algorithm identifies anomalies by measuring the PSD of the process signal and comparing it with a standard baseline formed previously. Up to 8 decision discriminants help to recognize spectral changes due to anomalies. In our application, to detect boiling as quickly as possible with sufficient sensitivity, Piety's algorithm was modified using overlapped Fast-Fourier-Transform processing and averaging of the PSDs over a large sample of preceding instantaneous PSDs. This processing allows high sensitivity in detecting weak disturbances without reducing response time. The algorithm was tested with simulation-of-boiling experiments in which nitrogen was injected into a cooling channel of a mock-up of a fuel element. Void fractions higher than 30% in the channel can be detected. In the case of boiling, it is believed that this limit is lower because collapsing bubbles could give rise to stronger fluctuations. The algorithm was also tested with a boiling experiment where the reactor coolant flow was actually reduced. The results showed that the discriminant D5 of Piety's algorithm, based on neutron noise obtained from the existing neutron chambers of the reactor control system, could sensitively recognize boiling. The detection time amounts to 7-30 s depending on the strength of the disturbances. Other events which arise during a normal reactor run, such as scrams, removal of isotope elements without scramming, or control rod movements, and which could lead to false alarms, can be distinguished from boiling. 49 refs., 104 figs., 5 tabs

  19. Verification of fluid-structure-interaction algorithms through the method of manufactured solutions for actuator-line applications

    Science.gov (United States)

    Vijayakumar, Ganesh; Sprague, Michael

    2017-11-01

    Demonstrating expected convergence rates with spatial- and temporal-grid refinement is the ``gold standard'' of code and algorithm verification. However, the lack of analytical solutions and the difficulty of generating manufactured solutions present challenges for verifying codes for complex systems. The application of the method of manufactured solutions (MMS) for verification of coupled multi-physics phenomena like fluid-structure interaction (FSI) has only recently been investigated. While many FSI algorithms for aeroelastic phenomena have focused on boundary-resolved CFD simulations, the actuator-line representation of the structure is widely used for FSI simulations in wind-energy research. In this work, we demonstrate the verification of an FSI algorithm using MMS for actuator-line CFD simulations with a simplified structural model. We use a manufactured solution for the fluid velocity field and the displacement of the SMD system. We demonstrate the convergence of both the fluid and structural solvers to second-order accuracy with grid and time-step refinement. This work was funded by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Wind Energy Technologies Office, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.

  20. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
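
    The core Hough voting step that the pipeline feeds into can be written compactly; below is a minimal accumulator-based sketch (not the survey pipeline itself), taking a binary edge mask and returning the strongest (rho, theta) line candidates.

```python
import numpy as np

def hough_lines(edges, n_theta=180, top_k=5):
    """edges: 2-D boolean mask. Votes in (rho, theta) space; returns the
    top_k (rho, theta) pairs with rho = x*cos(theta) + y*sin(theta)."""
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*edges.shape)))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    for j, th in enumerate(thetas):
        rho = np.round(xs * np.cos(th) + ys * np.sin(th)).astype(int)
        np.add.at(acc[:, j], rho + diag, 1)    # offset so rho can be negative
    flat = np.argsort(acc, axis=None)[::-1][:top_k]
    return [(int(i // n_theta) - diag, float(thetas[i % n_theta]))
            for i in flat]

# Toy image with one bright diagonal line.
img = np.zeros((100, 100), dtype=bool)
for i in range(100):
    img[i, i] = True
print(hough_lines(img)[0])   # rho near 0, theta near 3*pi/4
```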

  1. On-line monitoring of extraction process of Flos Lonicerae Japonicae using near infrared spectroscopy combined with synergy interval PLS and genetic algorithm

    Science.gov (United States)

    Yang, Yue; Wang, Lei; Wu, Yongjiang; Liu, Xuesong; Bi, Yuan; Xiao, Wei; Chen, Yong

    2017-07-01

    There is a growing need for effective on-line process monitoring during the manufacture of traditional Chinese medicine (TCM) to ensure quality consistency. In this study, the potential of the near infrared (NIR) spectroscopy technique to monitor the extraction process of Flos Lonicerae Japonicae was investigated. A new algorithm combining synergy interval PLS with a genetic algorithm (Si-GA-PLS) was proposed for modeling. Four different PLS models, namely Full-PLS, Si-PLS, GA-PLS, and Si-GA-PLS, were established, and their performance in predicting two quality parameters (viz. total acid and soluble solids contents) was compared. In conclusion, the Si-GA-PLS model gave the best results due to the combination of the strengths of Si-PLS and GA. For Si-GA-PLS, the determination coefficient (Rp^2) and root-mean-square error of prediction (RMSEP) were 0.9561 and 147.6544 μg/ml for total acid, and 0.9062 and 0.1078% for soluble solids contents, respectively. The overall results demonstrated that the NIR spectroscopy technique combined with Si-GA-PLS calibration is a reliable and non-destructive alternative method for on-line monitoring of the extraction process of TCM on the production scale.

  2. Diagnostic performance of line-immunoassay based algorithms for incident HIV-1 infection

    Directory of Open Access Journals (Sweden)

    Schüpbach Jörg

    2012-04-01

    Full Text Available Abstract Background Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have previously demonstrated that a patient's antibody reaction pattern in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score) provides information on the duration of infection, which is unaffected by clinical, immunological and viral variables. In this report we set out to determine the diagnostic performance of Inno-Lia algorithms for identifying incident infections in patients with known duration of infection, and evaluated the algorithms in annual cohorts of HIV notifications. Methods Diagnostic sensitivity was determined in 527 treatment-naive patients infected for up to 12 months. Specificity was determined in 740 patients infected for longer than 12 months. Plasma was tested by Inno-Lia and classified as either incident (infected for up to 12 months) or older infection. Results The 10 best algorithms had a mean raw sensitivity of 59.4% and a mean specificity of 95.1%. Adjustment for overrepresentation of patients in the first quarter year of infection further reduced the sensitivity. In the preferred model, the mean adjusted sensitivity was 37.4%. Application of the 10 best algorithms to four annual cohorts of HIV-1 notifications totalling 2595 patients yielded a mean IIR of 0.35 in 2005/6 (baseline) and of 0.45, 0.42 and 0.35 in 2008, 2009 and 2010, respectively. The increase between baseline and 2008 and the ensuing decreases were highly significant. Other adjustment models yielded different absolute IIRs, although the relative changes between the cohorts were identical for all models. Conclusions The method can be used for comparing IIRs in annual cohorts of HIV notifications. The use of several different algorithms in combination, each with its own sensitivity and specificity to detect incident infection, is advisable as this reduces the impact of individual imperfections stemming primarily from relatively low sensitivities and

  3. Keyword Query Expansion Paradigm Based on Recommendation and Interpretation in Relational Databases

    Directory of Open Access Journals (Sweden)

    Yingqi Wang

    2017-01-01

    Full Text Available Due to the ambiguity and imprecision of keyword queries in relational databases, research on keyword query expansion has attracted wide attention. Existing query expansion methods expose users' query intentions to a certain extent, but most of them cannot balance precision and recall. To address this problem, a novel two-step query expansion approach is proposed based on query recommendation and query interpretation. First, a probabilistic recommendation algorithm is put forward by constructing a term similarity matrix and a Viterbi model. Second, by using a translation algorithm over triples and a construction algorithm for query subgraphs, query keywords are translated into query subgraphs with structural and semantic information. Finally, experimental results on a real-world dataset demonstrate the effectiveness and rationality of the proposed method.

  4. Optimal Scheduling of Material Handling Devices in a PCB Production Line: Problem Formulation and a Polynomial Algorithm

    Directory of Open Access Journals (Sweden)

    Ada Che

    2008-01-01

    Full Text Available Modern automated production lines usually use one or multiple computer-controlled robots or hoists for material handling between workstations. A typical application of such lines is an automated electroplating line for processing printed circuit boards (PCBs. In these systems, cyclic production policy is widely used due to large lot size and simplicity of implementation. This paper addresses cyclic scheduling of a multihoist electroplating line with constant processing times. The objective is to minimize the cycle time, or equivalently to maximize the production throughput, for a given number of hoists. We propose a mathematical model and a polynomial algorithm for this scheduling problem. Computational results on randomly generated instances are reported.

  5. High data rate coding for the space station telemetry links.

    Science.gov (United States)

    Lumb, D. R.; Viterbi, A. J.

    1971-01-01

    Coding systems for high data rates were examined from the standpoint of potential application in space-station telemetry links. Approaches considered included convolutional codes with sequential, Viterbi, and cascaded-Viterbi decoding. It was concluded that a high-speed (40 Mbps) sequential decoding system best satisfies the requirements for the assumed growth potential and specified constraints. Trade-off studies leading to this conclusion are reviewed, and some sequential (Fano) algorithm improvements are discussed, together with real-time simulation results.

  6. Initialization and Restart in Stochastic Local Search: Computing a Most Probable Explanation in Bayesian Networks

    Science.gov (United States)

    Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan

    2010-01-01

    For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.

  7. A Weighing Algorithm for Checking Missing Components in a Pharmaceutical Line

    Directory of Open Access Journals (Sweden)

    Alessandro Silvestri

    2014-11-01

    The goal of the present work is the development of an algorithm able to optimize the production line of a pharmaceutical firm. In particular, the proposed weighing procedure allows both checking for missing components in packages and minimizing false rejects of packages by dynamic scales. The main problem is the simultaneous presence, in the same package, of different components with different and variable weights. The consequence is uncertainty in recognizing the absence of one or more components.

  8. Adaptive calibration method with on-line growing complexity

    Directory of Open Access Journals (Sweden)

    Šika Z.

    2011-12-01

    Full Text Available This paper describes a modified variant of a kinematic calibration algorithm. First, a brief review of the calibration algorithm and a simple modification of it are given. As the described calibration modification uses some ideas from the Lolimot algorithm, that algorithm is also described and explained. The main topic of this paper is the synthesis of the Lolimot-based calibration, which leads to an adaptive algorithm with an on-line growing complexity. The paper contains a comparison of results on simple examples and a discussion. A note on future research topics is also included.

  9. A Nonmonotone Line Search Filter Algorithm for the System of Nonlinear Equations

    Directory of Open Access Journals (Sweden)

    Zhong Jin

    2012-01-01

    Full Text Available We present a new iterative method, based on the line search filter method with a nonmonotone strategy, to solve systems of nonlinear equations. The equations are divided into two groups; some equations are treated as constraints and the others act as the objective function, and the two groups are updated only at the iterations where this is actually needed. We apply the nonmonotone idea to the sufficient reduction conditions and the filter technique, which leads to flexibility and an acceptance behavior comparable to monotone methods. The new algorithm is shown to be globally convergent, and numerical experiments demonstrate its effectiveness.

  10. Optimization of line configuration and balancing for flexible machining lines

    Science.gov (United States)

    Liu, Xuemei; Li, Aiping; Chen, Zurui

    2016-05-01

    Line configuration and balancing is to select the type of line and allot a given set of operations, as well as machines, to a sequence of workstations to realize high-efficiency production. Most current research on machining line configuration and balancing problems concerns dedicated transfer lines with dedicated machine workstations. With growing trends towards great product variety and fluctuations in market demand, dedicated transfer lines are being replaced with flexible machining lines composed of identical CNC machines. This paper deals with the line configuration and balancing problem for flexible machining lines. The objective is to assign operations to workstations and find the sequence of execution, and to specify the number of machines in each workstation, while minimizing the line cycle time and the total number of machines. The problem is subject to precedence, clustering, accessibility and capacity constraints among the features, operations, setups and workstations. A mathematical model and a heuristic algorithm based on a feature-group strategy and polychromatic sets theory are presented to find an optimal solution. The feature-group strategy and polychromatic sets theory are used to establish the constraint model. A heuristic operation sequencing and assignment algorithm is given. An industrial case study is carried out, and multiple optimal solutions in different line configurations are obtained. The case study results show that the solutions with shorter cycle time and higher line balancing rate demonstrate the feasibility and effectiveness of the proposed algorithm. This research proposes a heuristic line configuration and balancing algorithm based on a feature-group strategy and polychromatic sets theory which is able to provide better solutions while achieving an improvement in computing time.

  11. Deciding the On-line Chromatic Number of a Graph with Pre-coloring is PSPACE-complete

    DEFF Research Database (Denmark)

    Kudahl, Christian

    2015-01-01

    In an on-line coloring, the vertices of a graph are revealed one by one. An algorithm assigns a color to each vertex after it is revealed. When a vertex is revealed, it is also revealed which of the previous vertices it is adjacent to. The on-line chromatic number of a graph, G, is the smallest number of colors an algorithm will need when on-line-coloring G. The algorithm may know G, but not the order in which the vertices are revealed. The problem of determining if the on-line chromatic number of a graph is less than or equal to k, given a pre-coloring, is shown to be PSPACE-complete.

  12. Combined mixed approach algorithm for in-line phase-contrast x-ray imaging

    International Nuclear Information System (INIS)

    De Caro, Liberato; Scattarella, Francesco; Giannini, Cinzia; Tangaro, Sabina; Rigon, Luigi; Longo, Renata; Bellotti, Roberto

    2010-01-01

    Purpose: In the past decade, phase-contrast imaging (PCI) has been applied to study different kinds of tissues and human body parts, with improved image quality with respect to simple absorption radiography. A technique closely related to PCI is phase-retrieval imaging (PRI). Indeed, PCI is an imaging modality designed to enhance the total contrast of the images through the phase shift introduced by the object (human body part); PRI is a mathematical technique to extract the quantitative phase-shift map from PCI. A new phase-retrieval algorithm for in-line phase-contrast x-ray imaging is proposed here. Methods: The proposed algorithm is based on a mixed transfer-function and transport-of-intensity approach (MA), and it requires, at most, an initial approximate estimate of the average phase shift introduced by the object as prior knowledge. The accuracy of the initial estimate determines the convergence speed of the algorithm. The proposed algorithm retrieves both the object phase and its complex conjugate in a combined MA (CMA). Results: Although the CMA is slightly less computationally efficient than other mixed-approach algorithms, since two phases have to be retrieved, the results obtained on simulated data show that the reconstructed phase maps are characterized by particularly low normalized mean square errors. The authors also tested the CMA on noisy experimental phase-contrast data obtained from a suitable weakly absorbing sample consisting of a grid of submillimetric nylon fibers, as well as on a strongly absorbing object made of a 0.03 mm thick lead x-ray resolution star pattern. The CMA showed good efficiency in recovering phase information, also in the presence of noisy data characterized by peak-to-peak signal-to-noise ratios down to a few dB, showing the possibility of enhancing with phase radiography the signal-to-noise ratio for features on the submillimetric scale with respect to the attenuation

  13. On-line transient stability assessment of large-scale power systems by using ball vector machines

    International Nuclear Information System (INIS)

    Mohammadi, M.; Gharehpetian, G.B.

    2010-01-01

    In this paper a ball vector machine (BVM) has been used for on-line transient stability assessment of large-scale power systems. To classify the system's transient security status, a BVM has been trained for all contingencies. The proposed BVM-based security assessment algorithm requires very little training time and space in comparison with artificial neural networks (ANN), support vector machines (SVM) and other machine-learning-based algorithms. In addition, the proposed algorithm has fewer support vectors (SV) and is therefore faster than existing algorithms for on-line applications. A key step in applying any machine learning method is feature selection. In this paper, a new Decision Tree (DT) based feature selection technique is presented. The proposed BVM-based algorithm has been applied to the New England 39-bus power system. The simulation results show the effectiveness and stability of the proposed method for on-line transient stability assessment of large-scale power systems. The proposed feature selection algorithm has been compared with different feature selection algorithms, and the simulation results demonstrate its effectiveness.

  14. R and D study on on-line criticality surveillance system (III)

    International Nuclear Information System (INIS)

    Yamada, Sumasu

    1999-02-01

    The Criticality Surveillance System should have high reliability and high expandability to enable the application of new approaches. Under this basic concept, a basic design of the Criticality Surveillance System was proposed in 1997. We propose here several new ideas for modifying the original design of the Criticality Surveillance System, and also report some results of numerical analysis of the DCA experiments of 1996. In this report, we first propose a modification of the Criticality Surveillance System that adds a series of modules for the use of the Feynman-α method on the γ-ray signal, because recent studies revealed that statistical analysis of the γ-ray signal in the time domain can provide very good information on the reactor decay constant even in the case of low count rates. This modification could increase the reliability of the Criticality Surveillance System over a wide range of amounts of reprocessed fuel in the tank. Last year we developed an executable ADF program based on Least Mean Squares, as a redundancy to the recursive ARMA model identification algorithm. However, this algorithm has the property that the convergence rate of the parameter estimation becomes very slow when the input data are corrupted with colored noise. To cope with this problem, we introduce a new ADF algorithm called the Block ADF algorithm, which is developed to improve the convergence rate with less computational load. We show the basic theory in Chapter 2 and a fast adaptive filter algorithm using eigenvalue reciprocals as step sizes in Chapter 3. This new algorithm provides stable parameter estimates with less computation, as fast as the Recursive Least Squares method. Third, we introduce a notch filter for power-line noise and its software implementation. This filter has to be developed for each specific sampling frequency and power-line frequency; however, it can remove not only the line spectrum of the power-line frequency but also
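
    A minimal SciPy sketch of such a notch filter; the 1 kHz sampling rate, quality factor, and synthetic count signal below are our own assumptions for illustration, not the DCA acquisition settings.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 1000.0   # sampling frequency (Hz), assumed
f0 = 50.0     # power-line frequency to suppress
Q = 30.0      # quality factor: larger Q gives a narrower notch

b, a = iirnotch(f0, Q, fs)

# Hypothetical detector signal: Poisson counts plus strong 50 Hz pickup.
t = np.arange(0, 5, 1 / fs)
x = np.random.poisson(100, t.size).astype(float) \
    + 20 * np.sin(2 * np.pi * f0 * t)
y = filtfilt(b, a, x)   # zero-phase filtering avoids phase distortion
```

    As the abstract notes, the filter coefficients depend on both the sampling frequency and the local power-line frequency, so a filter designed for one configuration does not transfer to another.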

  15. Optimal design of the rotor geometry of line-start permanent magnet synchronous motor using the bat algorithm

    Science.gov (United States)

    Knypiński, Łukasz

    2017-12-01

    In this paper an algorithm for the optimization of the excitation system of line-start permanent magnet synchronous motors is presented. As the basis of this algorithm, software was developed in the Borland Delphi environment. The software consists of two independent modules: an optimization solver and a module containing the mathematical model of a synchronous motor with self-start ability. The optimization module contains the bat algorithm procedure. The mathematical model of the motor was developed in the Ansys Maxwell environment. In order to determine the functional parameters of the motor, additional scripts in the Visual Basic language were developed. Selected results of the optimization calculations are presented and compared with results for the particle swarm optimization algorithm.

  16. Algorithms and analytical solutions for rapidly approximating long-term dispersion from line and area sources

    Science.gov (United States)

    Barrett, Steven R. H.; Britter, Rex E.

    Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for the assessment of a single site. While this may be acceptable for the assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point-source run of an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean

  17. Multispectral fluorescence image algorithms for detection of frass on mature tomatoes

    Science.gov (United States)

    A multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet LED excitation was developed for the detection of frass contamination on mature tomatoes. The algorithm utilized the fluorescence intensities at five wavebands, 515 nm, 640 nm, 664 nm, 690 nm, and 724 nm...

  18. Practical algorithms for simulation and reconstruction of digital in-line holograms.

    Science.gov (United States)

    Latychevskaia, Tatiana; Fink, Hans-Werner

    2015-03-20

    Here we present practical methods for simulation and reconstruction of in-line digital holograms recorded with plane and spherical waves. The algorithms described here are applicable to holographic imaging of an object exhibiting absorption as well as phase-shifting properties. Optimal parameters, related to distances, sampling rate, and other factors for successful simulation and reconstruction of holograms are evaluated and criteria for the achievable resolution are worked out. Moreover, we show that the numerical procedures for the reconstruction of holograms recorded with plane and spherical waves are identical under certain conditions. Experimental examples of holograms and their reconstructions are also discussed.

  19. A Constraint Model for Constrained Hidden Markov Models

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2009-01-01

    A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we extend HMMs with constraints and show how the familiar Viterbi algorithm can be generalized, based on constraint solving ...
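    To make the construction concrete, the following is our minimal sketch, not the authors' formulation, of a Viterbi recursion in which a user-supplied predicate allowed(t, s) vetoes state s at position t before it enters the dynamic-programming table; all names here are illustrative:

        import math

        def constrained_viterbi(obs, states, log_init, log_trans, log_emit, allowed):
            # All probabilities in log space; assumes at least one admissible
            # path exists. Disallowed (t, s) pairs are pruned to -infinity.
            V = [{s: (log_init[s] + log_emit[s][obs[0]]) if allowed(0, s)
                     else -math.inf for s in states}]
            back = [{}]
            for t in range(1, len(obs)):
                V.append({})
                back.append({})
                for s in states:
                    if not allowed(t, s):
                        V[t][s] = -math.inf
                        continue
                    prev, score = max(((p, V[t - 1][p] + log_trans[p][s])
                                       for p in states), key=lambda x: x[1])
                    V[t][s] = score + log_emit[s][obs[t]]
                    back[t][s] = prev
            last = max(states, key=lambda s: V[-1][s])
            path = [last]
            for t in range(len(obs) - 1, 0, -1):
                last = back[t][last]
                path.append(last)
            return path[::-1]

    A per-position veto like this is the simplest case; global constraints (e.g. a state occurring at most k times) need extra bookkeeping, which is where the constraint-solving generalization discussed in the paper comes in.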

  20. Electromagnetic interference produced by power or electrified railway lines on metallic pipe networks

    International Nuclear Information System (INIS)

    Lucca, G.

    1999-01-01

    The paper presents an algorithm for calculating, in the frequency domain, the induced voltages and currents on a generic metallic pipe network exposed to electromagnetic interference from a power line or an electrified railway line. Assuming the voltages and currents on the inducing line are known, the algorithm can be subdivided into the following main steps: a) determination of the ideal electromotive force and current generators to be applied to the induced structure in order to represent the electromagnetic influence of the inducing line; b) modelling of the pipe network by means of a suitable equivalent electric network; c) calculation of the voltages and currents on the induced network.

  1. A parallel line sieve for the GNFS Algorithm

    OpenAIRE

    Sameh Daoud; Ibrahim Gad

    2014-01-01

    RSA is one of the most important public key cryptosystems for information security. The security of RSA depends on the integer factorization problem: it relies on the difficulty of factoring large integers. Much research has gone into the problem of factoring large numbers. Due to advances in factoring algorithms and in computing hardware, the size of the numbers that can be factored increases exponentially year by year. The General Number Field Sieve algorithm (GNFS) is currently the best ...

  2. The on-line asymmetric traveling salesman problem

    NARCIS (Netherlands)

    Ausiello, G.; Bonifaci, V.; Laura, L.

    2008-01-01

    We consider two on-line versions of the asymmetric traveling salesman problem with triangle inequality. For the homing version, in which the salesman is required to return to the city where it started, we give a -competitive algorithm and prove that this is best possible. For the nomadic

  3. State Estimation-based Transmission line parameter identification

    Directory of Open Access Journals (Sweden)

    Fredy Andrés Olarte Dussán

    2010-01-01

    Full Text Available This article presents two state-estimation-based algorithms for identifying transmission line parameters. The identification technique used simultaneous state-parameter estimation on an artificial power system composed of several copies of the same transmission line, using measurements at different points in time. The first algorithm used active and reactive power measurements at both ends of the line. The second method used synchronised phasor voltage and current measurements at both ends. The algorithms were tested in simulated conditions on the 30-node IEEE test system. All line parameters for this system were estimated with errors below 1%.
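    For intuition, with synchronized phasors the two-port (ABCD) relations of a π-equivalent line are linear in the unknowns, so several measurement snapshots admit a direct least-squares fit. The sketch below is our textbook illustration of that idea, not the article's state-estimation formulation:

        # V1, V2: complex voltage phasors at both ends; I2: receiving-end
        # current phasors. One array entry per snapshot; at least two
        # linearly independent snapshots are needed.
        import numpy as np

        def estimate_pi_line(V1, V2, I2):
            # Sending-end relation of the pi-model: V1 = A*V2 + B*I2,
            # with A = 1 + Z*Y/2 and B = Z.
            M = np.column_stack([V2, I2])
            A, B = np.linalg.lstsq(M, V1, rcond=None)[0]
            Z = B                        # series impedance
            Y = 2.0 * (A - 1.0) / B      # total shunt admittance
            return Z, Y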

  4. On-line monitoring the extract process of Fu-fang Shuanghua oral solution using near infrared spectroscopy and different PLS algorithms

    Science.gov (United States)

    Kang, Qian; Ru, Qingguo; Liu, Yan; Xu, Lingyan; Liu, Jia; Wang, Yifei; Zhang, Yewen; Li, Hui; Zhang, Qing; Wu, Qing

    2016-01-01

    An on-line near infrared (NIR) spectroscopy monitoring method with an appropriate multivariate calibration method was developed for the extraction process of Fu-fang Shuanghua oral solution (FSOS). On-line NIR spectra were collected through two fiber optic probes, which were designed to transmit NIR radiation through a 2 mm flange. Partial least squares (PLS), interval PLS (iPLS) and synergy interval PLS (siPLS) algorithms were compared for building the calibration regression models. During the extraction process, NIR spectroscopy was used to determine the chlorogenic acid (CA) content, total phenolic acids content (TPC), total flavonoids content (TFC) and soluble solids content (SSC). High performance liquid chromatography (HPLC), ultraviolet spectrophotometry (UV) and loss-on-drying methods were employed as reference methods. Experimental results showed that the siPLS model performed best compared with PLS and iPLS. The calibration models for CA, TPC, TFC and SSC had high determination coefficients (R2) (0.9948, 0.9992, 0.9950 and 0.9832) and low root mean square errors of cross validation (RMSECV) (0.0113, 0.0341, 0.1787 and 1.2158), which indicate a good correlation between reference values and NIR-predicted values. The overall results show that the on-line detection method is feasible in real applications and would be of great value for monitoring the mixed decoction process of FSOS and other Chinese patent medicines.

  5. Single-phased Fault Location on Transmission Lines Using Unsynchronized Voltages

    Directory of Open Access Journals (Sweden)

    ISTRATE, M.

    2009-10-01

    Full Text Available Increased accuracy in fault detection and location eases maintenance, which motivates the development of new methods for precise estimation of the fault location. The literature presents many fault location methods using voltage and current measurements at one or both terminals of power grid lines. The double-end synchronized data algorithms are very precise, but the current transformers can limit the accuracy of these estimations. The paper presents an algorithm to estimate the location of single-phase faults which uses only voltage measurements at both terminals of the transmission line, eliminating the error due to current transformers and without imposing the restriction of perfect data synchronization. Under such conditions, the algorithm can be used with the existing equipment of most power grids; the installation of phasor measurement units with GPS-synchronized timers is not compulsory. Only the positive sequence of line parameters and sources is used, thus eliminating the uncertainty in zero-sequence parameter estimation. The algorithm is tested using the results of EMTP-ATP simulations, after validating the ATP models against results recorded in a real power grid.

  6. DiversePathsJ: diverse shortest paths for bioimage analysis.

    Science.gov (United States)

    Uhlmann, Virginie; Haubold, Carsten; Hamprecht, Fred A; Unser, Michael

    2018-02-01

    We introduce a formulation for the general task of finding diverse shortest paths between two end-points. Our approach is not linked to a specific biological problem and can be applied to a large variety of images thanks to its generic implementation as a user-friendly ImageJ/Fiji plugin. It relies on the introduction of additional layers in a Viterbi path graph, which requires slight modifications to the standard Viterbi algorithm rules. This layered graph construction allows for the specification of various constraints imposing diversity between solutions. The software allows obtaining a collection of diverse shortest paths under some user-defined constraints through a convenient and user-friendly interface. It can be used alone or be integrated into larger image analysis pipelines. http://bigwww.epfl.ch/algorithms/diversepathsj. michael.unser@epfl.ch or fred.hamprecht@iwr.uni-heidelberg.de. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  7. The design of the public transport lines with the use of the fast genetic algorithm

    Directory of Open Access Journals (Sweden)

    Aleksander Król

    2015-09-01

    Full Text Available Background: The growing role of public transport and the pressure of economic criteria require new optimization tools for the public transport planning process. These problems are computationally very complex, so it is preferable to use approximate methods that lead to a good solution within an acceptable time. Methods: One such method is the genetic algorithm, which mimics the processes of evolution and natural selection in nature. In this paper, different variants of the public transport line layout are subjected to artificial selection. The essence of the proposed approach is a simplified method of calculating the fitness function for a single individual, which keeps computation time relatively short even for large problems. Results: It was shown that despite the introduced simplifications the quality of the results is not worsened. Using data obtained from KZK GOP (Communications Municipal Association of the Upper Silesian Industrial Region), the described algorithm was used to optimize the layout of the network of bus lines within the borders of Katowice. Conclusion: The proposed algorithm was applied to a real, very complex public transport network, and the possibility of a significant improvement of its efficiency was indicated. The obtained results give hope that the presented model, after some improvements, can become the basis of a scientific method and, with further development, find practical application.

  8. Cluster algorithms with emphasis on quantum spin systems

    International Nuclear Information System (INIS)

    Gubernatis, J.E.; Kawashima, Naoki

    1995-01-01

    The purpose of this lecture is to discuss in detail the generalized approach of Kawashima and Gubernatis for the construction of cluster algorithms. We first present a brief refresher on the Monte Carlo method, describe the Swendsen-Wang algorithm, show how this algorithm follows from the Fortuin-Kasteleyn transformation, and re-interpret this transformation in a form which is the basis of the generalized approach. We then derive the essential equations of the generalized approach. This derivation is remarkably simple if done from the viewpoint of probability theory, and the essential assumptions will be clearly stated. These assumptions are implicit in all useful cluster algorithms of which we are aware. They lead to a quite different perspective on cluster algorithms than found in the seminal works and in Ising model applications. Next, we illustrate how the generalized approach leads to a cluster algorithm for world-line quantum Monte Carlo simulations of Heisenberg models with S = 1/2. More briefly, we also discuss the generalization of the Fortuin-Kasteleyn transformation to higher-spin models and illustrate the essential steps for an S = 1 Heisenberg model. Finally, we summarize how to go beyond S = 1 to a general-spin XYZ model.

  9. An inertia-free filter line-search algorithm for large-scale nonlinear programming

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Nai-Yuan; Zavala, Victor M.

    2016-02-15

    We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent and we implement the approach within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.

  10. Adaptive decoding of convolutional codes

    Science.gov (United States)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi decoder. On the one hand, the Viterbi decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand, the computational complexity of the algorithm depends only on the code used, not on the number of transmission errors. To reduce the complexity of the decoding process under good transmission conditions, an alternative syndrome-based decoder is presented. The reduction of complexity is realized by two different approaches: syndrome zero-sequence deactivation and path metric equalization. The two approaches enable an easy adaptation of the decoding complexity to different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
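    For contrast with the syndrome approach, here is a compact hard-decision Viterbi decoder for a rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 in octal); the code choice is ours, for illustration:

        G = [0b111, 0b101]   # generator polynomials (octal 7, 5)
        N_STATES = 4         # 2**(K-1) states for constraint length K = 3

        def parity(x):
            return bin(x).count("1") & 1

        def encode(bits):
            state, out = 0, []
            for b in bits:
                reg = (b << 2) | state            # [input, prev, prev-prev]
                out += [parity(reg & g) for g in G]
                state = reg >> 1
            return out

        def viterbi_decode(received, n_bits):
            INF = float("inf")
            metric = [0.0] + [INF] * (N_STATES - 1)   # start in state 0
            paths = [[] for _ in range(N_STATES)]
            for t in range(n_bits):
                r = received[2 * t: 2 * t + 2]
                new_metric = [INF] * N_STATES
                new_paths = [None] * N_STATES
                for s in range(N_STATES):
                    for b in (0, 1):
                        reg = (b << 2) | s
                        nxt = reg >> 1
                        d = sum(parity(reg & g) != x for g, x in zip(G, r))
                        if metric[s] + d < new_metric[nxt]:
                            new_metric[nxt] = metric[s] + d   # Hamming metric
                            new_paths[nxt] = paths[s] + [b]
                metric, paths = new_metric, new_paths
            best = min(range(N_STATES), key=lambda s: metric[s])
            return paths[best]

    Note how the work per decoded bit is fixed (all state transitions are examined every step) regardless of how many channel errors occurred, which is exactly the complexity property the syndrome-based alternative is designed to avoid when the channel is good.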

  11. 5th Computer Science On-line Conference

    CERN Document Server

    Senkerik, Roman; Oplatkova, Zuzana; Silhavy, Petr; Prokopova, Zdenka

    2016-01-01

    This volume is based on the research papers presented at the 5th Computer Science On-line Conference. The volume Artificial Intelligence Perspectives in Intelligent Systems presents modern trends and methods applied to real-world problems, and in particular exploratory research that describes novel approaches in the field of artificial intelligence. New algorithms in a variety of fields are also presented. The Computer Science On-line Conference (CSOC 2016) is intended to provide an international forum for discussion of the latest research results in all areas related to computer science. The addressed topics are the theoretical aspects and applications of computer science, artificial intelligence, cybernetics, automation control theory and software engineering.

  12. Research on UAV Intelligent Obstacle Avoidance Technology During Inspection of Transmission Line

    Science.gov (United States)

    Wei, Chuanhu; Zhang, Fei; Yin, Chaoyuan; Liu, Yue; Liu, Liang; Li, Zongyu; Wang, Wanguo

    Autonomous obstacle avoidance of unmanned aerial vehicles (UAVs) during electric power line inspection is of major importance for the operational safety and economy of UAV-based intelligent inspection systems for transmission lines. This paper introduces the principles of obstacle avoidance for UAV transmission line inspection. After reviewing common obstacle avoidance technologies, an obstacle avoidance technique based on the particle swarm global optimization algorithm is proposed. A simulation comparison is performed against the traditional obstacle avoidance technique based on the artificial potential field method. Results show that the particle swarm optimization strategy adopted in this paper clearly outperforms the artificial potential field strategy in obstacle avoidance effectiveness and in the ability to return to the preset inspection track after passing the obstacle. This provides an effective method for UAV obstacle avoidance in transmission line inspection.
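    As background for the optimizer used above, a generic particle swarm optimization loop is short; this sketch is illustrative only (in the paper's setting the decision variables would encode waypoints and the cost would penalize path length and obstacle proximity):

        import numpy as np

        def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                lo=-10.0, hi=10.0, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(lo, hi, (n_particles, dim))   # positions
            v = np.zeros_like(x)                          # velocities
            pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
            g = pbest[np.argmin(pbest_val)].copy()        # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([cost(p) for p in x])
                better = vals < pbest_val
                pbest[better], pbest_val[better] = x[better], vals[better]
                g = pbest[np.argmin(pbest_val)].copy()
            return g, pbest_val.min()

        # Toy usage; a real path-planning cost replaces the quadratic.
        best, val = pso(lambda p: float(np.sum(p ** 2)), dim=3)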

  13. A Review of Related Work on Machine Learning in Semiconductor Manufacturing and Assembly Lines

    OpenAIRE

    Stanisavljevic, Darko; Spitzer, Michael

    2017-01-01

    This paper deals with applications of machine learning algorithms in manufacturing. Machine learning can be defined as a field of computer science that gives computers the ability to learn without explicitly developing the needed algorithms. Manufacturing is the production of merchandise by manual labour, machines and tools. The focus of this paper is on automatic production lines. The areas of interest of this paper are semiconductor manufacturing and production on assembly lines. The purpos...

  14. A Novel Algorithm for Power Flow Transferring Identification Based on WAMS

    Directory of Open Access Journals (Sweden)

    Xu Yan

    2015-01-01

    Full Text Available After a faulted transmission line is removed, the power flow on it is transferred to other lines in the network. If those lines are heavily loaded beforehand, the transferred flow may cause non-fault overloads and the incorrect operation of remote backup relays, which are considered key factors leading to cascading trips. In this paper, a novel algorithm for power flow transfer identification based on the wide area measurement system (WAMS) is proposed, through which possible incorrect tripping of backup relays can be blocked in time. A new concept, the Transferred Flow Characteristic Ratio (TFCR), is presented and applied to the identification criteria. The mathematical derivation of TFCR is carried out in detail using power system short-circuit fault modeling. The feasibility and effectiveness of the proposed algorithm in preventing the malfunction of backup relays are demonstrated by a large number of simulations.

  15. Statistics-based optimization of the polarimetric radar hydrometeor classification algorithm and its application for a squall line in South China

    Science.gov (United States)

    Wu, Chong; Liu, Liping; Wei, Ming; Xi, Baozhu; Yu, Minghui

    2018-03-01

    A modified hydrometeor classification algorithm (HCA) is developed in this study for Chinese polarimetric radars. This algorithm is based on the U.S. operational HCA. In addition, a methodology for statistics-based optimization is proposed, including calibration checking, dataset selection, membership function modification, computation threshold modification, and effect verification. Zhuhai radar, the first operational polarimetric radar in South China, applies these procedures. The systematic calibration bias is corrected; the reliability of radar measurements deteriorates when the signal-to-noise ratio is low; and the correlation coefficient within the melting layer is usually lower than that of the U.S. WSR-88D radar. Through modification based on statistical analysis of polarimetric variables, an HCA localized for Zhuhai is obtained, and it performs well over a one-month test through comparison with sounding and surface observations. The algorithm is then utilized for the analysis of a squall line on 11 May 2014 and is found to provide reasonable details with respect to horizontal and vertical structures, and the HCA results, especially in the mixed rain-hail region, can reflect the life cycle of the squall line. In addition, the kinematic and microphysical processes of cloud evolution and the differences between radar-detected hail and surface observations are also analyzed. The results of this study provide evidence for the improvement of this HCA developed specifically for China.

  16. Truncation correction for oblique filtering lines

    International Nuclear Information System (INIS)

    Hoppe, Stefan; Hornegger, Joachim; Lauritsch, Guenter; Dennerlein, Frank; Noo, Frederic

    2008-01-01

    State-of-the-art filtered backprojection (FBP) algorithms often define the filtering operation to be performed along oblique filtering lines in the detector. A limited scan field of view leads to the truncation of those filtering lines, which causes artifacts in the final reconstructed volume. In contrast to the case where filtering is performed solely along the detector rows, no methods are available for the case of oblique filtering lines. In this work, the authors present two novel truncation correction methods which effectively handle data truncation in this case. Method 1 (basic approach) handles data truncation in two successive preprocessing steps by applying a hybrid data extrapolation method, which is a combination of a water cylinder extrapolation and a Gaussian extrapolation. It is independent of any specific reconstruction algorithm. Method 2 (kink approach) uses similar concepts for data extrapolation as the basic approach but needs to be integrated into the reconstruction algorithm. Experiments are presented from simulated data of the FORBILD head phantom, acquired along a partial-circle-plus-arc trajectory. The theoretically exact M-line algorithm is used for reconstruction. Although the discussion is focused on theoretically exact algorithms, the proposed truncation correction methods can be applied to any FBP algorithm that exposes oblique filtering lines.

  17. Lower Bounds and Semi On-line Multiprocessor Scheduling

    Directory of Open Access Journals (Sweden)

    T.C. Edwin Cheng

    2003-10-01

    Full Text Available We are given a set of identical machines and a sequence of jobs for which we know the sum of the job weights in advance. The jobs have to be assigned on-line to one of the machines, and the objective is to minimize the makespan. An algorithm with performance ratio 1.6 and a lower bound of 1.5 are presented. This improves recent results by Azar and Regev, who published an algorithm with performance ratio 1.625 for the less general problem in which the optimal makespan is known in advance.
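    For orientation, the classical fully on-line baseline is greedy list scheduling (assign each job to the currently least-loaded machine), which is (2 - 1/m)-competitive; the sketch below shows that baseline, not the paper's 1.6-competitive algorithm, which additionally exploits the known total job weight:

        import heapq

        def greedy_schedule(jobs, m):
            loads = [(0.0, i) for i in range(m)]   # (load, machine id)
            heapq.heapify(loads)
            assignment = []
            for w in jobs:                         # jobs arrive on-line
                load, i = heapq.heappop(loads)     # least-loaded machine
                assignment.append(i)
                heapq.heappush(loads, (load + w, i))
            makespan = max(load for load, _ in loads)
            return assignment, makespan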

  18. On-line supercapacitor dynamic models for energy conversion and management

    International Nuclear Information System (INIS)

    Wu, C.H.; Hung, Y.H.; Hong, C.W.

    2012-01-01

    Highlights: ► On-line supercapacitor dynamic models are derived in the time and frequency domains. ► Equivalent circuits with an ANN identifier are derived for nonlinear effects. ► Nonlinear effects include environmental temperature and operating voltage. ► The supercapacitor models achieve both system fidelity and computational efficiency. - Abstract: This paper develops on-line nonlinear dynamic models of electrochemical supercapacitors for energy conversion and management. Based on the theory of electrochemical impedance spectroscopy, extensive alternating current impedance tests have been conducted to investigate the frequency-domain dynamics of these supercapacitors. A Nyquist diagram is plotted to help establish an equivalent electric circuit, which is regarded as the first-phase linear model. Two performance-influencing factors, environmental temperature and operating voltage, are considered as nonlinear effects. The nonlinear relationships among the capacitances and resistances of the first-phase model are established by a multi-layer artificial neural network. The neural parameters are trained using a back-propagation algorithm fed from the experimental data bank. Combining the first-phase model and the on-line neural “parameter identifier”, the algorithm produces an on-line nonlinear dynamic model. Simulation results have shown that the proposed model achieves both system fidelity and computational efficiency.

  19. Time-Varying FOPDT Modeling and On-line Parameter Identification

    DEFF Research Database (Denmark)

    Yang, Zhenyu; Sun, Zhen

    2013-01-01

    on the Mixed-Integer-Nonlinear Programming, Least-Mean-Square and sliding window techniques. The proposed approaches can simultaneously estimate the time-dependent system parameters, as well as the unknown disturbance input if it is the case, in an on-line manner. The proposed concepts and algorithms...

  20. One Terminal Digital Algorithm for Adaptive Single Pole Auto-Reclosing Based on Zero Sequence Voltage

    Directory of Open Access Journals (Sweden)

    S. Jamali

    2008-10-01

    Full Text Available This paper presents an algorithm for adaptive determination of the dead time during transient arcing faults and blocking of automatic reclosing during permanent faults on overhead transmission lines. The discrimination between transient and permanent faults is made by the zero sequence voltage measured at the relay point. If the fault is recognised as an arcing one, then the third harmonic of the zero sequence voltage is used to evaluate the extinction time of the secondary arc and to initiate the reclosing signal. The significant advantage of this algorithm is that it uses an adaptive threshold level, and therefore its performance is independent of fault location, line parameters and the system operating conditions. The proposed algorithm has been successfully tested under a variety of fault locations and load angles on a 400 kV overhead line using the Electro-Magnetic Transient Program (EMTP). The test results validate the algorithm's ability to determine the secondary arc extinction time during transient faults as well as to block unsuccessful automatic reclosing during permanent faults.

  1. Optical character recognition of handwritten Arabic using hidden Markov models

    Science.gov (United States)

    Aulama, Mohannad M.; Natsheh, Asem M.; Abandah, Gheith A.; Olama, Mohammed M.

    2011-04-01

    The problem of optical character recognition (OCR) of handwritten Arabic has not yet received a satisfactory solution. In this paper, an Arabic OCR algorithm is developed based on Hidden Markov Models (HMMs) combined with the Viterbi algorithm, which results in improved and more robust recognition of characters at the sub-word level. Integrating HMMs follows the overall OCR trends currently being researched in the literature. The proposed approach exploits the structure of characters in the Arabic language, in addition to their extracted features, to achieve improved recognition rates. Useful statistical information about the Arabic language is first extracted and then used to estimate the probabilistic parameters of the mathematical HMM. A new custom implementation of the HMM is developed in this study, where the transition matrix is built from a large collected corpus and the emission matrix is built from the extracted character features. The recognition process is triggered using the Viterbi algorithm, which finds the most probable sequence of sub-words. The model was implemented to recognize the sub-word unit of Arabic text, tying the recognition rate to the overall structure of the Arabic language rather than to the worst recognition rate of any individual character. Numerical results show a potentially large recognition improvement from using the proposed algorithms.
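    The corpus-driven part of the construction described above can be pictured as bigram counting; the sketch below is our illustration of that estimation step (the names and the smoothing choice are ours):

        import numpy as np

        def transition_matrix(corpus_sequences, n_classes, alpha=1.0):
            # Bigram counts over class-id sequences, with add-alpha smoothing.
            counts = np.full((n_classes, n_classes), alpha)
            for seq in corpus_sequences:
                for a, b in zip(seq[:-1], seq[1:]):
                    counts[a, b] += 1
            return counts / counts.sum(axis=1, keepdims=True)  # row-stochastic

    Decoding then runs the standard Viterbi recursion over the resulting transition and emission matrices, as in the constrained-HMM sketch earlier (with the constraint predicate always true).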

  2. Almagest, a new trackless ring finding algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lamanna, G., E-mail: gianluca.lamanna@cern.ch

    2014-12-01

    A fast ring-finding algorithm is crucial to allow the use of a RICH detector in on-line trigger selection. The present algorithms are either too slow (with respect to the incoming data rate) or need information from a tracking system. Digital image techniques that assume limited computing power (for example, the Hough transform) are not perfectly robust with respect to noise. We present a novel technique based on Ptolemy's theorem for multi-ring pattern recognition. Starting from purely geometrical considerations, this algorithm (also known as “Almagest”) allows fast and trackless ring reconstruction, with spatial resolution comparable to other off-line techniques. Almagest is particularly suitable for parallel implementation on multi-core machines. Preliminary tests on GPUs (multi-core video card processors) show that, thanks to an execution time smaller than 10 μs per event, this algorithm could be employed for on-line selection in trigger systems. The use case of the NA62 RICH trigger, based on GPUs, will be discussed. - Highlights: • A new algorithm for fast multiple-ring searching in RICH detectors is presented. • The Almagest algorithm exploits the computing power of graphics processors (GPUs). • A preliminary implementation for on-line triggering in the NA62 experiment shows encouraging results.
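    The geometrical core is Ptolemy's theorem: four points A, B, C, D lying in order on a common circle satisfy |AC|·|BD| = |AB|·|CD| + |AD|·|BC|, with strict inequality otherwise. A minimal numerical form of the criterion (our illustration, not the NA62 implementation) is:

        import math

        def dist(p, q):
            return math.hypot(p[0] - q[0], p[1] - q[1])

        def ptolemy_cyclic(a, b, c, d, tol=1e-6):
            # For an in-order cyclic quadrilateral ABCD the product of the
            # diagonals equals the sum of products of opposite sides.
            lhs = dist(a, c) * dist(b, d)
            rhs = dist(a, b) * dist(c, d) + dist(a, d) * dist(b, c)
            return abs(lhs - rhs) <= tol * max(lhs, rhs)

    Hits sampled from the same ring pass the test, while a hit belonging to a different ring breaks the equality, which is what allows multi-ring separation without tracking information.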

  3. Power of automated algorithms for combining time-line follow-back and urine drug screening test results in stimulant-abuse clinical trials.

    Science.gov (United States)

    Oden, Neal L; VanVeldhuisen, Paul C; Wakim, Paul G; Trivedi, Madhukar H; Somoza, Eugene; Lewis, Daniel

    2011-09-01

    In clinical trials of treatment for stimulant abuse, researchers commonly record both Time-Line Follow-Back (TLFB) self-reports and urine drug screen (UDS) results. To compare the power of self-report, qualitative (use vs. no use) UDS assessment, and various algorithms to generate self-report-UDS composite measures to detect treatment differences via t-test in simulated clinical trial data. We performed Monte Carlo simulations patterned in part on real data to model self-report reliability, UDS errors, dropout, informatively missing UDS reports, incomplete adherence to a urine donation schedule, temporal correlation of drug use, number of days in the study period, number of patients per arm, and distribution of drug-use probabilities. Investigated algorithms include maximum likelihood and Bayesian estimates, self-report alone, UDS alone, and several simple modifications of self-report (referred to here as ELCON algorithms) which eliminate perceived contradictions between it and UDS. Among the algorithms investigated, simple ELCON algorithms gave rise to the most powerful t-tests to detect mean group differences in stimulant drug use. Further investigation is needed to determine if simple, naïve procedures such as the ELCON algorithms are optimal for comparing clinical study treatment arms. But researchers who currently require an automated algorithm in scenarios similar to those simulated for combining TLFB and UDS to test group differences in stimulant use should consider one of the ELCON algorithms. This analysis continues a line of inquiry which could determine how best to measure outpatient stimulant use in clinical trials (NIDA. NIDA Monograph-57: Self-Report Methods of Estimating Drug Abuse: Meeting Current Challenges to Validity. NTIS PB 88248083. Bethesda, MD: National Institutes of Health, 1985; NIDA. NIDA Research Monograph 73: Urine Testing for Drugs of Abuse. NTIS PB 89151971. Bethesda, MD: National Institutes of Health, 1987; NIDA. NIDA Research

  4. Adaptive decoding of convolutional codes

    Directory of Open Access Journals (Sweden)

    K. Hueske

    2007-06-01

    Full Text Available Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi decoder. On the one hand, the Viterbi decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand, the computational complexity of the algorithm depends only on the code used, not on the number of transmission errors. To reduce the complexity of the decoding process under good transmission conditions, an alternative syndrome-based decoder is presented. The reduction of complexity is realized by two different approaches: syndrome zero-sequence deactivation and path metric equalization. The two approaches enable an easy adaptation of the decoding complexity to different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.

  5. THE APPROACHING TRAIN DETECTION ALGORITHM

    OpenAIRE

    S. V. Bibikov

    2015-01-01

    The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The need for a train detection algorithm that copes with increased rail noise, as when railway lines are close to roads or road intersections, is justified. The algorithm is based on a method for detecting weak signals in a noisy environment. The final expression of the information statistic is adjusted. We present the results of algorithm research and t...

  6. An Approach of Diagnosis Based On The Hidden Markov Chains Model

    Directory of Open Access Journals (Sweden)

    Karim Bouamrane

    2008-07-01

    Full Text Available Diagnosis is a key element in the performance of industrial system maintenance processes. A diagnosis tool is proposed that allows maintenance operators to capitalize on the knowledge of their trade and structure it for better performance and intervention effectiveness within the maintenance service. The tool is based on the Markov chain model, and more precisely on Hidden Markov Chains (HMC), which have the advantage of determining system failures while taking into account causal relations, modeling the stochastic context of their dynamics, and providing relevant diagnosis help through their ability to use uncertain information. Since the FMEA method is well suited to this artificial intelligence setting, the Markov chain modeling is carried out with its assistance. A dynamic programming recursive algorithm, the 'Viterbi algorithm', is used in the Hidden Markov Chain field. This algorithm takes as input a set of observed system effects and generates as output the various causes having led to the loss of one or several system functions.

  7. Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm

    Science.gov (United States)

    Povitsky, A.

    1998-01-01

    In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward-step computations immediately after the completion of the forward-step computations for the first portion of lines. This makes data available for other computational tasks while processors would otherwise be idle in the Thomas algorithm. The proposed 3-D directionally split solver is based on static scheduling of processors, where local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty by about a factor of two relative to the basic algorithm over the considered ranges of the number of processors (subdomains) and the number of grid nodes per subdomain.
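    For reference, the serial building block that the pipelined reformulation reorganizes is the Thomas algorithm, a forward elimination sweep followed by backward substitution on a tridiagonal system; a plain (non-pipelined) sketch:

        def thomas(a, b, c, d):
            # Solves a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i];
            # a[0] and c[-1] are unused.
            n = len(d)
            cp, dp = [0.0] * n, [0.0] * n
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                  # forward elimination
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = [0.0] * n
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):         # backward substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

    The data dependency between the two sweeps is what idles processors in the pipelined version, and it is exactly the window that the proposed algorithm fills with other scheduled work.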

  8. Conformal Interpolating Algorithm Based on Cubic NURBS in Aspheric Ultra-Precision Machining

    International Nuclear Information System (INIS)

    Li, C G; Zhang, Q R; Cao, C G; Zhao, S L

    2006-01-01

    Numerical control machining and on-line compensation for aspheric surfaces are key techniques in ultra-precision machining. In this paper, a conformal cubic NURBS interpolating curve is applied to fit the characteristic curve of an aspheric surface. The algorithm and its procedure are also presented and simulated in Matlab 7.0. To evaluate the performance of the conformal cubic NURBS interpolation, we compare it with linear interpolation. The results verify that this method ensures the smoothness of the interpolating spline curve and preserves the original shape characteristics. The surface quality obtained with cubic NURBS interpolation is higher than with linear interpolation. The algorithm helps increase the surface form precision of workpieces in ultra-precision machining.

  9. A new and accurate fault location algorithm for combined transmission lines using Adaptive Network-Based Fuzzy Inference System

    Energy Technology Data Exchange (ETDEWEB)

    Sadeh, Javad; Afradi, Hamid [Electrical Engineering Department, Faculty of Engineering, Ferdowsi University of Mashhad, P.O. Box: 91775-1111, Mashhad (Iran)

    2009-11-15

    This paper presents a new and accurate algorithm for locating faults on a combined overhead transmission line with an underground power cable using an Adaptive Network-Based Fuzzy Inference System (ANFIS). The proposed method uses 10 ANFIS networks and consists of 3 stages: fault type classification, faulty section detection and exact fault location. In the first part, an ANFIS is used to determine the fault type, applying four inputs, i.e., the fundamental components of the three phase currents and the zero sequence current. Another ANFIS network is used to detect the faulty section, i.e., whether the fault is on the overhead line or on the underground cable. Eight other ANFIS networks are utilized to pinpoint the faults (two for each fault type). Four inputs, i.e., the dc component of the current, the fundamental-frequency voltage and current and the angle between them, are used to train the neuro-fuzzy inference systems in order to accurately locate the faults on each part of the combined line. The proposed method is evaluated under different fault conditions such as different fault locations, different fault inception angles and different fault resistances. Simulation results confirm that the proposed method can be used as an efficient means for accurate fault location on combined transmission lines. (author)

  10. Controlling maximum evaluation duration in on-line and on-board evolutionary robotics

    NARCIS (Netherlands)

    Atta-ul-Qayyum, A.; Nedev, D.G.; Haasdijk, E.W.

    2014-01-01

    On-line evolution of robot controllers allows robots to adapt while they perform their proper tasks. In our investigations, robots contain their own self-sufficient evolutionary algorithm (known as the encapsulated approach) where individual solutions are evaluated by means of a time sharing scheme:

  11. Good control practices underlined by an on-line fuzzy control database

    Directory of Open Access Journals (Sweden)

    Alonso, M. V.

    1994-04-01

    Full Text Available In the olive oil trade, control systems that automate extraction processes, cutting production costs and increasing processing capacity without losing quality, are always desirable. The database structure of an on-line fuzzy control of centrifugation systems and the algorithms used to attain the best control conditions are analysed. Good control practices are suggested to obtain virgin olive oil of prime quality.

  12. On-line compression of symmetrical multidimensional γ-ray spectra using adaptive orthogonal transforms

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    2008-01-01

    An efficient algorithm to compress multidimensional symmetrical γ-ray events is presented. The reduction of data volume can be achieved thanks to both the symmetry of the γ-ray spectra and the compression capabilities of the employed adaptive orthogonal transform. Illustrative examples demonstrate the merits of the proposed compression algorithm. The algorithm was implemented for on-line compression of events. The acquired compressed data can later be processed in an interactive way.

  13. Script-independent text line segmentation in freestyle handwritten documents.

    Science.gov (United States)

    Li, Yi; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2008-08-01

    Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike connected component based methods ( [1], [2] for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [1]-[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise.

  14. An AK-LDMeans algorithm based on image clustering

    Science.gov (United States)

    Chen, Huimin; Li, Xingwei; Zhang, Yongbin; Chen, Nan

    2018-03-01

    Clustering is an effective analytical technique for handling unlabeled data for value mining. Its ultimate goal is to label unclassified data quickly and correctly. We use the road map from current image processing as the experimental background. In this paper, we propose an AK-LDMeans algorithm that automatically locks the value of K by constructing the K-cost polyline, and then uses a long-distance, high-density method to select the cluster centers, replacing the traditional initial cluster center selection and thereby improving the efficiency and accuracy of the traditional K-Means algorithm. The experimental results are compared with those of current clustering algorithms. The algorithm can provide an effective reference in the fields of image processing, machine vision and data mining.

  15. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm

  16. GARLIC — A general purpose atmospheric radiative transfer line-by-line infrared-microwave code: Implementation and evaluation

    International Nuclear Information System (INIS)

    Schreier, Franz; Gimeno García, Sebastián; Hedelt, Pascal; Hess, Michael; Mendrok, Jana; Vasquez, Mayte; Xu, Jian

    2014-01-01

    A suite of programs for high resolution infrared-microwave atmospheric radiative transfer modeling has been developed with emphasis on efficient and reliable numerical algorithms and a modular approach appropriate for simulation and/or retrieval in a variety of applications. The Generic Atmospheric Radiation Line-by-line Infrared Code — GARLIC — is suitable for arbitrary observation geometry, instrumental field-of-view, and line shape. The core of GARLIC's subroutines constitutes the basis of forward models used to implement inversion codes to retrieve atmospheric state parameters from limb and nadir sounding instruments. This paper briefly introduces the physical and mathematical basics of GARLIC and its descendants and continues with an in-depth presentation of various implementation aspects: An optimized Voigt function algorithm combined with a two-grid approach is used to accelerate the line-by-line modeling of molecular cross sections; various quadrature methods are implemented to evaluate the Schwarzschild and Beer integrals; and Jacobians, i.e. derivatives with respect to the unknowns of the atmospheric inverse problem, are implemented by means of automatic differentiation. For an assessment of GARLIC's performance, a comparison of the quadrature methods for solution of the path integral is provided. Verification and validation are demonstrated using intercomparisons with other line-by-line codes and comparisons of synthetic spectra with spectra observed on Earth and from Venus. - Highlights: • High resolution infrared-microwave radiative transfer model. • Discussion of algorithmic and computational aspects. • Jacobians by automatic/algorithmic differentiation. • Performance evaluation by intercomparisons, verification, validation

  17. A real-time MTFC algorithm of space remote-sensing camera based on FPGA

    Science.gov (United States)

    Zhao, Liting; Huang, Gang; Lin, Zhe

    2018-01-01

    A real-time MTFC (modulation transfer function compensation) algorithm for a space remote-sensing camera, based on an FPGA, was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is running on-orbit. The image restoration algorithm adopts a modular design. The on-orbit MTF measurement module calculates the edge spread function, the line spread function, the ESF difference, the normalized MTF and the MTFC parameters. The MTFC filtering module performs the image filtering and effectively suppresses noise. System Generator was used to design the image processing algorithms, simplifying the system design structure and the redesign process. Image gray gradient, point sharpness, edge contrast and mid-to-high frequencies were enhanced. The image SNR after restoration is reduced by less than 1 dB compared to the original image. The image restoration system can be widely used in various fields.

  18. On-line and off-line data analysis for the EUSO-TA experiment

    International Nuclear Information System (INIS)

    Piotrowski, Lech Wiktor; Casolino, Marco; Conti, Livio; Ebisuzaki, Toshikazu; Fornaro, Claudio; Kawasaki, Yoshiya; Hachisu, Yusuke; Ohmori, Hitoshi; De Santis, Cristian; Shinozaki, Kenji; Takizawa, Yoshiyuki; Uehara, Yoshihiro

    2015-01-01

    We show the principles of the communication protocol, on-line calibration, off-line data format as well as basic visualisation and data analysis software implemented for the EUSO-TA on-ground experiment, being the first step towards implementation in a future space based mission. EUSO-TA is an on-ground detector for measuring UV (290–430 nm) light from extensive air showers induced by cosmic rays. It is a prototype experiment for the JEM-EUSO space-borne mission, built according to the same constraints of low mass, low power consumption and thus low computing power. Nevertheless, it needs to process a huge amount of data in short time, taking 2.5μs exposures for 2304 channels. The low processing power and high time resolution require an efficient communication protocol and simple yet powerful algorithms for on-line analysis. The off-line data format is designed for storing a huge amount of data, at the same time allowing easy access, analysis and sharing. Its structure is scalable and adjustable to different experimental designs. It is independent of the data origin, whether it is hardware or a Monte-Carlo simulator. Use of object-oriented techniques and the ROOT framework allows rapid development of dedicated analysis software, such as a Python based quick-view program described herein. Basic capabilities of the software, such as display of the focal surface, light curves and calibration data are shown in this paper

  19. On-line and off-line data analysis for the EUSO-TA experiment

    Energy Technology Data Exchange (ETDEWEB)

    Piotrowski, Lech Wiktor, E-mail: lech.piotrowski@riken.jp [RIKEN, Wako (Japan); Casolino, Marco [RIKEN, Wako (Japan); INFN and Univ. Rome Tor Vergata, Rome (Italy); Conti, Livio [International Telematic University UNINETTUNO, Rome (Italy); Ebisuzaki, Toshikazu [RIKEN, Wako (Japan); Fornaro, Claudio [International Telematic University UNINETTUNO, Rome (Italy); Kawasaki, Yoshiya; Hachisu, Yusuke; Ohmori, Hitoshi [RIKEN, Wako (Japan); De Santis, Cristian [INFN and Univ. Rome Tor Vergata, Rome (Italy); Shinozaki, Kenji [Institute for Astronomy and Astrophysics, Kepler Center, University of Tübingen, Sand 6, D-72076 Tübingen (Germany); RIKEN, Wako (Japan); Takizawa, Yoshiyuki; Uehara, Yoshihiro [RIKEN, Wako (Japan)

    2015-02-11

    We show the principles of the communication protocol, on-line calibration, off-line data format as well as basic visualisation and data analysis software implemented for the EUSO-TA on-ground experiment, being the first step towards implementation in a future space based mission. EUSO-TA is an on-ground detector for measuring UV (290–430 nm) light from extensive air showers induced by cosmic rays. It is a prototype experiment for the JEM-EUSO space-borne mission, built according to the same constraints of low mass, low power consumption and thus low computing power. Nevertheless, it needs to process a huge amount of data in short time, taking 2.5μs exposures for 2304 channels. The low processing power and high time resolution require an efficient communication protocol and simple yet powerful algorithms for on-line analysis. The off-line data format is designed for storing a huge amount of data, at the same time allowing easy access, analysis and sharing. Its structure is scalable and adjustable to different experimental designs. It is independent of the data origin, whether it is hardware or a Monte-Carlo simulator. Use of object-oriented techniques and the ROOT framework allows rapid development of dedicated analysis software, such as a Python based quick-view program described herein. Basic capabilities of the software, such as display of the focal surface, light curves and calibration data are shown in this paper.

  20. Multiparametric amplitude analysis with on-line compression using adaptive orthogonal transform

    Energy Technology Data Exchange (ETDEWEB)

    Morhac, M; Matousek, V; Turzo, I

    1996-12-31

    A new method of multiparameter amplitude analysis with on-line compression is developed. The proposed method decreases the memory needed to store multidimensional histograms. Examples of employing the algorithms for three-dimensional spectra are presented. 5 refs.

  1. A Minimum Path Algorithm Among 3D-Polyhedral Objects

    Science.gov (United States)

    Yeltekin, Aysin

    1989-03-01

    In this work we introduce a minimum path theorem for the 3D case. We also develop an algorithm based on the theorem we prove. The algorithm is implemented in a software package we developed in the C language. The theorem we introduce states that: "Given an initial point I, a final point F, and a finite set S of static obstacles, an optimal path P from I to F such that P ∩ S = ∅ is composed of straight line segments which are perpendicular to the edge segments of the objects." We prove the theorem and develop the following algorithm, based on the theorem, to find the minimum path among 3D polyhedral objects. The algorithm generates the point Qi on edge ei such that at Qi one can find the line which is perpendicular to both the edge and the line IF. The algorithm iteratively provides a new set of initial points from Qi and explores all possible paths. Then the algorithm chooses the minimum path among the possible ones. The flowchart of the program as well as an examination of its numerical properties are included.

  2. Implementation of intensity ratio change and line-of-sight rate change algorithms for imaging infrared trackers

    Science.gov (United States)

    Viau, C. R.

    2012-06-01

    The use of the intensity change and line-of-sight (LOS) change concepts has previously been documented in the open literature as techniques used by non-imaging infrared (IR) seekers to reject expendable IR countermeasures (IRCM). The purpose of this project was to implement IR counter-countermeasure (IRCCM) algorithms based on target intensity and kinematic behavior for a generic imaging IR (IIR) seeker model, with the underlying goal of obtaining a better understanding of how expendable IRCM can be used to defeat the latest generation of seekers. The report describes the Intensity Ratio Change (IRC) and LOS Rate Change (LRC) discrimination techniques. The algorithms and the seeker model are implemented in a physics-based simulation product called Tactical Engagement Simulation Software (TESS™). TESS is developed in the MATLAB®/Simulink® environment and is a suite of RF/IR missile software simulators used to evaluate and analyze the effectiveness of countermeasures against various classes of guided threats. The investigation evaluates the algorithms and tests their robustness by presenting the results of batch simulation runs of surface-to-air (SAM) and air-to-air (AAM) IIR missiles engaging a non-maneuvering target platform equipped with expendable IRCM as self-protection. The report discusses how varying critical parameters such as track memory time, ratio thresholds and hold time can influence the outcome of an engagement.

  3. Application of point-to-point matching algorithms for background correction in on-line liquid chromatography-Fourier transform infrared spectrometry (LC-FTIR).

    Science.gov (United States)

    Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M

    2010-03-15

    A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, which considerably facilitates the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols, running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
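    The selection step can be pictured as a nearest-spectrum search restricted to the chosen spectral window; the sketch below is our reading of the idea, not the published implementation:

        import numpy as np

        def correct_background(sample, references, window):
            # sample: (n_points,) spectrum; references: (n_refs, n_points)
            # eluent background spectra; window: index slice compared
            # point-to-point.
            dists = np.sum((references[:, window] - sample[window]) ** 2, axis=1)
            best = int(np.argmin(dists))
            return sample - references[best], best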

  4. Optimal Rotor Design of Line Start Permanent Magnet Synchronous Motor by Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Bui Minh Dinh

    2017-07-01

    Full Text Available The line start permanent magnet synchronous motor (LSPMSM) is one of the highest-efficiency motors, due to the absence of rotor copper loss at synchronous speed and its self-starting capability. The LSPMSM has torque characteristics of both the induction motor (IM) and the permanent magnet synchronous motor (PMSM). Using a genetic algorithm (GA) to balance magnet cost against copper loss minimization, the magnet sizes and the geometry parameters of the stator and rotor are found and manufactured for industrial evaluation. This article also takes into account practical manufacturing factors to minimize mass-production cost. In order to maximize efficiency, an optimal design method for the cage bars and magnet shape has to be considered. The geometry parameters of the stator and rotor can be obtained by an analytical model method and validated by FEM simulation. This paper presents the optimal rotor design of a three-phase line-start permanent magnet motor (LSPM) considering the starting torque and efficiency. To consider nonlinear characteristics, the design process combines the FEM and the analytical method. During this study, the permanent magnets and cage bars were designed using the magnetic equivalent circuit method, the barriers that control the magnetic flux were designed using the FEM, and the tradeoff between starting torque and efficiency is controlled by a weight function in a Taguchi-method simulation. Finally, some practical results have been obtained and analyzed based on an LSPMSM test bench.

  5. On-line statistical processing of radiation detector pulse trains with time-varying count rates

    International Nuclear Information System (INIS)

    Apostolopoulos, G.

    2008-01-01

    Statistical analysis is of primary importance for the correct interpretation of nuclear measurements, due to the inherent random nature of radioactive decay processes. This paper discusses the application of statistical signal processing techniques to the random pulse trains generated by radiation detectors. The aims of the presented algorithms are: (i) continuous, on-line estimation of the underlying time-varying count rate θ(t) and its first-order derivative dθ/dt; (ii) detection of abrupt changes in both of these quantities and estimation of their new value after the change point. Maximum-likelihood techniques, based on the Poisson probability distribution, are employed for the on-line estimation of θ and dθ/dt. Detection of abrupt changes is achieved on the basis of the generalized likelihood ratio statistical test. The properties of the proposed algorithms are evaluated by extensive simulations and possible applications for on-line radiation monitoring are discussed.
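
    The two estimation tasks can be sketched as follows, assuming the detector pulse train has been binned into intervals of equal width. The maximum-likelihood rate estimate is the mean count per unit time, and an abrupt change is declared when the generalized likelihood ratio, maximized over all candidate change points in the window, exceeds a threshold; bin width and threshold remain user choices.

    ```python
    import numpy as np

    def rate_mle(counts, dt):
        """Maximum-likelihood estimate of a constant rate theta from
        Poisson bin counts (bins of width dt)."""
        counts = np.asarray(counts, dtype=float)
        return counts.sum() / (counts.size * dt)

    def glr_change_statistic(counts):
        """GLR statistic for one abrupt rate change inside the window,
        maximized over the unknown change point; a change is flagged when
        the statistic exceeds a chosen threshold."""
        counts = np.asarray(counts, dtype=float)
        lam0 = counts.mean()
        if lam0 == 0.0:
            return 0.0
        best = 0.0
        for j in range(1, counts.size):          # candidate change points
            lam1, lam2 = counts[:j].mean(), counts[j:].mean()
            ll = 0.0
            if lam1 > 0:
                ll += counts[:j].sum() * np.log(lam1 / lam0)
            if lam2 > 0:
                ll += counts[j:].sum() * np.log(lam2 / lam0)
            best = max(best, 2.0 * ll)           # 2 x log likelihood ratio
        return best
    ```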

  6. An image overall complexity evaluation method based on LSD line detection

    Science.gov (United States)

    Li, Jianan; Duan, Jin; Yang, Xu; Xiao, Bo

    2017-04-01

    In the artificial world, the city's traffic roads and engineered buildings alike contain many linear features. Therefore, research on the image complexity of linear information has become an important direction in the digital image processing field. By detecting the straight-line information in the image and using the straight line as the parameter index, this paper establishes a quantitative and accurate mathematical relationship for image complexity. We use the LSD line detection algorithm, which has a good straight-line detection effect, to detect the straight lines, and classify the detected lines using an expert-consultation strategy. Then we use a neural network to train the weights and obtain the weight coefficient of each index. The image complexity is calculated by the complexity calculation model. The experimental results show that the proposed method is effective. The number of straight lines in the image, their degree of dispersion, uniformity and so on all affect the complexity of the image.
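
    A toy rendering of such a complexity model: given line segments from any detector (e.g. LSD), three indices (line count, orientation dispersion, length uniformity) are combined by a weighted sum. The index definitions and the weights are illustrative stand-ins for the paper's expert-derived, neural-network-trained coefficients.

    ```python
    import numpy as np

    def line_complexity(segments, w=(0.5, 0.3, 0.2)):
        """Toy overall-complexity score from detected line segments.
        segments: (n, 4) array of (x1, y1, x2, y2) endpoints.
        In practice each index would be normalized before weighting."""
        p, q = segments[:, :2], segments[:, 2:]
        lengths = np.linalg.norm(q - p, axis=1)
        angles = np.arctan2(q[:, 1] - p[:, 1], q[:, 0] - p[:, 0])
        count_idx = float(len(segments))             # number of lines
        dispersion_idx = float(np.std(angles))       # spread of orientations
        uniformity_idx = float(np.std(lengths) / (lengths.mean() + 1e-9))
        return w[0] * count_idx + w[1] * dispersion_idx + w[2] * uniformity_idx
    ```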

  7. Molecular diagnosis of Legionella infections--Clinical utility of front-line screening as part of a pneumonia diagnostic algorithm.

    Science.gov (United States)

    Gadsby, Naomi J; Helgason, Kristjan O; Dickson, Elizabeth M; Mills, Jonathan M; Lindsay, Diane S J; Edwards, Giles F; Hanson, Mary F; Templeton, Kate E

    2016-02-01

    Urinary antigen testing for Legionella pneumophila serogroup 1 is the leading rapid diagnostic test for Legionnaires' Disease (LD); however other Legionella species and serogroups can also cause LD. The aim was to determine the utility of front-line L. pneumophila and Legionella species PCR in a severe respiratory infection algorithm. L. pneumophila and Legionella species duplex real-time PCR was carried out on 1944 specimens from hospitalised patients over a 4 year period in Edinburgh, UK. L. pneumophila was detected by PCR in 49 (2.7%) specimens from 36 patients. During a LD outbreak, combined L. pneumophila respiratory PCR and urinary antigen testing had optimal sensitivity and specificity (92.6% and 98.3% respectively) for the detection of confirmed cases. Legionella species was detected by PCR in 16 (0.9%) specimens from 10 patients. The 5 confirmed and 1 probable cases of Legionella longbeachae LD were positive by both PCR and antibody testing. Front-line L. pneumophila and Legionella species PCR is a valuable addition to urinary antigen testing as part of a well-defined algorithm. Cases of LD due to L. longbeachae might be considered laboratory-confirmed when there is a positive Legionella species PCR result and detection of a L. longbeachae specific antibody response. Copyright © 2015 The British Infection Association. Published by Elsevier Ltd. All rights reserved.

  8. On Gamma Ray Instrument On-Board Data Processing Real-Time Computational Algorithm for Cosmic Ray Rejection

    Science.gov (United States)

    Kizhner, Semion; Hunter, Stanley D.; Hanu, Andrei R.; Sheets, Teresa B.

    2016-01-01

    Richard O. Duda and Peter E. Hart of Stanford Research Institute in [1] described the recurring problem in computer image processing of detecting straight lines in digitized images. The problem is to detect the presence of groups of collinear or almost collinear figure points. It is clear that the problem can be solved to any desired degree of accuracy by testing the lines formed by all pairs of points. However, the computation required for an n = N×M point image is approximately proportional to n², i.e. O(n²), becoming prohibitive for large images or when the data processing cadence time is in milliseconds. Rosenfeld in [2] described an ingenious method due to Hough [3] for replacing the original problem of finding collinear points by a mathematically equivalent problem of finding concurrent lines. This method involves transforming each of the figure points into a straight line in a parameter space. Hough chose to use the familiar slope-intercept parameters, and thus his parameter space was the two-dimensional slope-intercept plane. A parallel Hough transform running on multi-core processors was elaborated in [4]. There are many other proposed methods for solving similar problems, such as the sampling-up-the-ramp algorithm (SUTR) [5] and algorithms involving artificial swarm intelligence techniques [6]. However, all state-of-the-art algorithms lack real-time performance. Namely, they are slow for large images that require a processing cadence of a few dozen milliseconds (50 ms). This problem arises in spaceflight applications such as near real-time analysis of gamma ray measurements contaminated by an overwhelming number of cosmic ray (CR) traces. Future spaceflight instruments such as the Advanced Energetic Pair Telescope (AdEPT) [7-9] for cosmic gamma ray survey employ large detector readout planes registering multitudes of cosmic ray interference events and sparse science gamma ray event traces' projections. The AdEPT science of interest is in the
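
    The core of the Hough idea is compact enough to sketch: each figure point votes for all parameter pairs of lines passing through it, replacing the O(n²) pairwise test with one sinusoid per point. The sketch below uses the bounded rho-theta (normal) parameterization that Duda and Hart proposed in place of Hough's unbounded slope-intercept parameters.

    ```python
    import numpy as np

    def hough_accumulate(points, img_diag, n_theta=180, n_rho=200):
        """Vote in the (theta, rho) parameter plane: every figure point maps
        to the sinusoid rho = x*cos(theta) + y*sin(theta), and nearly
        collinear points produce sinusoids meeting in one accumulator cell."""
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        acc = np.zeros((n_theta, n_rho), dtype=np.int32)
        cos_t, sin_t = np.cos(thetas), np.sin(thetas)
        for x, y in points:
            rho = x * cos_t + y * sin_t              # one sinusoid per point
            bins = np.round((rho + img_diag) / (2 * img_diag) * (n_rho - 1))
            bins = np.clip(bins, 0, n_rho - 1).astype(int)
            acc[np.arange(n_theta), bins] += 1
        return acc, thetas       # peaks in acc correspond to detected lines
    ```

    The cost is O(n · n_theta) rather than O(n²), which is what makes variants of this scheme attractive for the millisecond-cadence processing discussed here.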

  9. Optimal OFDMA Downlink Scheduling Under a Control Signaling Cost Constraint

    OpenAIRE

    Larsson, Erik G.

    2010-01-01

    This paper proposes a new algorithm for downlink scheduling in OFDMA systems. The method maximizes the throughput, taking into account the amount of signaling needed to transmit scheduling maps to the users. A combinatorial problem is formulated and solved via a dynamic programming approach reminiscent of the Viterbi algorithm. The total computational complexity of the algorithm is upper bounded by O(K^4N) where K is the number of users that are being considered for scheduling in a frame and N...
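
    A deliberately simplified sketch of such a Viterbi-like dynamic program: the users play the role of trellis states across the subcarriers, and a fixed penalty stands in for the signaling cost incurred whenever the scheduling map switches users between adjacent subcarriers. The paper's actual cost model, and its O(K^4 N) bound, is richer than this O(K²N) toy.

    ```python
    import numpy as np

    def schedule(rate, switch_cost):
        """Assign each subcarrier to one user, maximizing total rate minus a
        signaling penalty per user change between adjacent subcarriers.
        rate: (K, N) achievable rate of user k on subcarrier n."""
        K, N = rate.shape
        score = rate[:, 0].copy()              # best score ending in user k
        back = np.zeros((K, N), dtype=int)
        for n in range(1, N):
            # trans[k, j]: score of arriving at user k from user j
            trans = score[None, :] - switch_cost * (1 - np.eye(K))
            back[:, n] = np.argmax(trans, axis=1)
            score = rate[:, n] + np.max(trans, axis=1)
        k = int(np.argmax(score))              # backtrack the best path
        path = [k]
        for n in range(N - 1, 0, -1):
            k = int(back[k, n])
            path.append(k)
        return path[::-1]
    ```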

  10. On-line implant reconstruction in HDR brachytherapy

    International Nuclear Information System (INIS)

    Kolkman-Deurloo, Inger-Karine K.; Kruijf, Wilhelmus J.M. de; Levendag, Peter C.

    2006-01-01

    Background and purpose: To evaluate the accuracy of on-line planning in an Integrated Brachytherapy Unit (IBU) using dedicated image distortion correction algorithms, correcting the geometric distortion and magnetic distortion separately, and to determine the effect of the reconstruction accuracy on clinical treatment plans in terms of deviations in treatment time and dose. Patients and methods: The reconstruction accuracy has been measured using 20 markers positioned at well-known locations in a QA phantom. Treatment plans of two phantoms representing clinical implant geometries have been compared with reference plans to determine the effect of the reconstruction accuracy on the treatment plan. Before clinical introduction, treatment plans of three representative patients, based on on-line reconstruction, have been compared with reference plans. Results: The average reconstruction error for 10 in. images reduces from -0.6 mm (range -2.6 to +1.0 mm) to -0.2 mm (range -1.2 to +0.6 mm) after image distortion correction, and for 15 in. images from 0.8 mm (range -0.5 to +3.0 mm) to 0.0 mm (range -0.8 to +0.8 mm). The error in case of eccentric positioning of the phantom, i.e. 0.8 mm (range -1.0 to +3.3 mm), reduces to 0.1 mm (range -0.5 to +0.9 mm). Correction of the image distortions reduces the deviation in the calculated treatment time from maximally 2.7% to less than 0.8% in the case of eccentrically positioned clinical phantoms. The deviation in the treatment time or reference dose in the plans based on on-line reconstruction with image distortion correction of the three patient examples is smaller than 0.3%. Conclusions: Accurate on-line implant reconstruction using the IBU localiser and dedicated correction algorithms separating the geometric distortion and the magnetic distortion is possible. The results fulfill the minimum requirements as imposed by the Netherlands Commission on Radiation Dosimetry (NCS) without limitations regarding the usable range of the field

  11. Line-based monocular graph SLAM algorithm

    Institute of Scientific and Technical Information of China (English)

    董蕊芳; 柳长安; 杨国田; 程瑞营

    2017-01-01

    A new line-based 6-DOF monocular graph simultaneous localization and mapping (SLAM) algorithm is proposed. First, straight lines are used as features instead of points, because a map consisting of a sparse set of 3D points cannot describe the structure of the surrounding world. Second, most previous line-based SLAM algorithms are filtering-based solutions, which become inconsistent when applied to the inherently non-linear SLAM problem; in contrast, a graph-based solution is used here to improve localization accuracy and the consistency and accuracy of mapping. Third, a special line representation is exploited that combines Plücker coordinates with the Cayley representation: the Plücker coordinates are used for the 3D line projection function, and the Cayley representation is used to update the line parameters during the non-linear optimization process. Finally, simulation experiments show that the proposed algorithm outperforms odometry and EKF-based SLAM in pose estimation: the sum of squared errors (SSE) and root-mean-square error (RMSE) of the proposed method are 2.5% and 10.5% of those of odometry, and 22.4% and 33% of those of EKF-based SLAM, with a reprojection error of only 45.5 pixels. A real-image experiment shows that the proposed algorithm achieves a pose-estimation SSE of only 958 cm² and an RMSE of 3.9413 cm. It can therefore be concluded that the proposed algorithm is effective and accurate.

  12. Line Width Recovery after Vectorization of Engineering Drawings

    Directory of Open Access Journals (Sweden)

    Gramblička Matúš

    2016-12-01

    Full Text Available Vectorization is the process of converting a raster image representation into a vector representation. Contemporary commercial vectorization software applications do not provide sufficiently high-quality outputs for images such as mechanical engineering drawings. Line width preservation is one of the problems. There are applications which need to know the line width after vectorization, because this line attribute carries important semantic information for subsequent 3D model generation. This article describes an algorithm that is able to recover the widths of individual lines in vectorized engineering drawings. Two approaches are proposed: one examines the line width at three points, whereas the second uses a variable number of points depending on the line length. The algorithm is tested on real mechanical engineering drawings.
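
    The three-point variant can be sketched as follows, assuming a binary raster and the endpoints of one vectorized line; the probe positions, the half-pixel sampling step and the median aggregation are illustrative choices.

    ```python
    import numpy as np

    def line_width(img, p0, p1, probes=3, max_w=20.0):
        """Estimate the drawn width of a vectorized line by probing the
        raster perpendicular to it at a few interior points.
        img: 2D boolean array (True = ink); p0, p1: (x, y) endpoints."""
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        d = p1 - p0
        n = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-9)  # unit normal
        widths = []
        for t in np.linspace(0.2, 0.8, probes):   # probe away from junctions
            c = p0 + t * d
            w = 0.0
            for s in np.arange(-max_w, max_w + 0.5, 0.5):  # walk the normal
                x, y = np.round(c + s * n).astype(int)
                if 0 <= y < img.shape[0] and 0 <= x < img.shape[1] and img[y, x]:
                    w += 0.5                       # half-pixel sampling step
            widths.append(w)
        return float(np.median(widths))            # robust to local defects
    ```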

  13. Implementation of PHENIX trigger algorithms on massively parallel computers

    International Nuclear Information System (INIS)

    Petridis, A.N.; Wohn, F.K.

    1995-01-01

    The event selection requirements of contemporary high energy and nuclear physics experiments are met by the introduction of on-line trigger algorithms which identify potentially interesting events and reduce the data acquisition rate to levels that are manageable by the electronics. Such algorithms, being parallel in nature, can be simulated off-line using massively parallel computers. The PHENIX experiment intends to investigate the possible existence of a new phase of matter called the quark gluon plasma, which has been theorized to have existed in very early stages of the evolution of the universe, by studying collisions of heavy nuclei at ultra-relativistic energies. Such interactions can also reveal important information regarding the structure of the nucleus and mandate a thorough investigation of the simpler proton-nucleus collisions at the same energies. The complexity of PHENIX events and the need to analyze and also simulate them at rates similar to the data collection ones impose enormous computation demands. This work is a first effort to implement PHENIX trigger algorithms on parallel computers and to study the feasibility of using such machines to run the complex programs necessary for the simulation of the PHENIX detector response. Fine and coarse grain approaches have been studied and evaluated. Depending on the application, the performance of a massively parallel computer can be much better or much worse than that of a serial workstation. A comparison between single instruction and multiple instruction computers is also made and possible applications of the single instruction machines to high energy and nuclear physics experiments are outlined. copyright 1995 American Institute of Physics

  14. Resource Leveling Based on Backward Controlling Activity in Line of Balance

    Directory of Open Access Journals (Sweden)

    Lihui Zhang

    2017-01-01

    Full Text Available The line of balance method, which provides continuous and uninterrupted use of resources, is one of the best methods for repetitive project resource management. This paper develops a resource leveling algorithm based on the backward controlling activity in line of balance. The backward controlling activity is a special kind of activity: if its duration is prolonged, the project duration can be reduced. This brings two advantages to resource leveling: both the resource allocated to the backward activity and the project duration are reduced. A resource leveling algorithm is presented which permits the number of crews of the backward controlling activity to be reduced until a terminal situation is reached, where the backward controlling activity no longer exists or the number of crews cannot be reduced further. That adjustment enables the productivity of all activities to be consistent. An illustrative pipeline project demonstrates the improvement in resource leveling, and a MATLAB program was designed to execute the proposed algorithm. The proposed model can help practitioners to achieve the goals of both resource leveling and project duration reduction without increasing any resource.

  15. The Impact of a Line Probe Assay Based Diagnostic Algorithm on Time to Treatment Initiation and Treatment Outcomes for Multidrug Resistant TB Patients in Arkhangelsk Region, Russia.

    Science.gov (United States)

    Eliseev, Platon; Balantcev, Grigory; Nikishova, Elena; Gaida, Anastasia; Bogdanova, Elena; Enarson, Donald; Ornstein, Tara; Detjen, Anne; Dacombe, Russell; Gospodarevskaya, Elena; Phillips, Patrick P J; Mann, Gillian; Squire, Stephen Bertel; Mariandyshev, Andrei

    2016-01-01

    In the Arkhangelsk region of Northern Russia, multidrug-resistant (MDR) tuberculosis (TB) rates in new cases are amongst the highest in the world. In 2014, MDR-TB rates reached 31.7% among new cases and 56.9% among retreatment cases. The development of new diagnostic tools allows for faster detection of both TB and MDR-TB and should lead to reduced transmission by earlier initiation of anti-TB therapy. The PROVE-IT (Policy Relevant Outcomes from Validating Evidence on Impact) Russia study aimed to assess the impact of the implementation of line probe assay (LPA) as part of an LPA-based diagnostic algorithm for patients with presumptive MDR-TB, focusing on time to treatment initiation (time from first care-seeking visit to the initiation of MDR-TB treatment) rather than diagnostic accuracy as the primary outcome, and to assess treatment outcomes. We hypothesized that the implementation of LPA would result in faster time to treatment initiation and better treatment outcomes. A culture-based diagnostic algorithm used prior to LPA implementation was compared to an LPA-based algorithm that replaced BacTAlert and Löwenstein Jensen (LJ) for drug sensitivity testing. A total of 295 MDR-TB patients were included in the study, 163 diagnosed with the culture-based algorithm and 132 with the LPA-based algorithm. Among smear-positive patients, the implementation of the LPA-based algorithm was associated with a median decrease in time to MDR-TB treatment initiation of 50 and 66 days compared to the culture-based algorithm (BacTAlert and LJ respectively, p < 0.001). Among smear-negative patients, it was associated with a median decrease in time to MDR-TB treatment initiation of 78 days when compared to the culture-based algorithm (LJ, p < 0.001). Overall, LPA implementation was associated with a shorter time to MDR diagnosis and earlier treatment initiation, as well as better treatment outcomes for patients with MDR-TB. These findings also highlight the need for further improvements within the health system to reduce both patient and diagnostic delays to truly optimize the impact of new, rapid diagnostics.

  16. Algorithmic foundation of multi-scale spatial representation

    CERN Document Server

    Li, Zhilin

    2006-01-01

    With the widespread use of GIS, multi-scale representation has become an important issue in the realm of spatial data handling. However, no book to date has systematically tackled the different aspects of this discipline. Emphasizing map generalization, Algorithmic Foundation of Multi-Scale Spatial Representation addresses the mathematical basis of multi-scale representation, specifically, the algorithmic foundation.Using easy-to-understand language, the author focuses on geometric transformations, with each chapter surveying a particular spatial feature. After an introduction to the essential operations required for geometric transformations as well as some mathematical and theoretical background, the book describes algorithms for a class of point features/clusters. It then examines algorithms for individual line features, such as the reduction of data points, smoothing (filtering), and scale-driven generalization, followed by a discussion of algorithms for a class of line features including contours, hydrog...

  17. Active load sharing technique for on-line efficiency optimization in DC microgrids

    DEFF Research Database (Denmark)

    Sanseverino, E. Riva; Zizzo, G.; Boscaino, V.

    2017-01-01

    Recently, DC power distribution is gaining more and more importance over its AC counterpart, achieving increased efficiency, greater flexibility, reduced volumes and capital cost. In this paper, a 24-120-325 V two-level DC distribution system for home appliances, each including three parallel DC-DC converters, is modeled. An active load sharing technique is proposed for the on-line optimization of the global efficiency of the DC distribution network. The algorithm aims at the instantaneous efficiency optimization of the whole DC network, based on on-line load current sampling. A look-up table is created to store the real efficiencies of the converters, taking into account component tolerances. A MATLAB/Simulink model of the DC distribution network has been set up and a genetic algorithm has been employed for the global efficiency optimization. Simulation results are shown to validate the proposed...

  18. Utilization of genetic algorithm in on-line tuning of fluid power servos

    Energy Technology Data Exchange (ETDEWEB)

    Halme, J.

    1997-12-31

    This study describes a robust and plausible method, based on genetic algorithms, suitable for tuning a regulator. The main advantages of the presented method are its robustness and ease of use. In this thesis the method is demonstrated by searching for appropriate control parameters of a state-feedback controller in a fluid power environment. To corroborate the robustness of the tuning method, two earlier studies are also presented in the appendix, where the presented tuning method is used in different kinds of regulator tuning situations. (orig.) 33 refs.

  19. A new method of on-line multiparameter amplitude analysis with compression

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    1996-01-01

    An algorithm for on-line multidimensional amplitude analysis with compression using a fast adaptive orthogonal transform is presented in the paper. The method is based on a direct modification of the multiplication coefficients of the signal flow graph of the fast Cooley-Tukey algorithm. The coefficients are modified according to a reference vector representing the processed data. The method has been tested by compressing three-parameter experimental nuclear data. The efficiency of the derived adaptive transform is compared with classical orthogonal transforms. (orig.)

  1. Comparative study between ultrahigh spatial frequency algorithm and high spatial frequency algorithm in high-resolution CT of the lungs

    International Nuclear Information System (INIS)

    Oh, Yu Whan; Kim, Jung Kyuk; Suh, Won Hyuck

    1994-01-01

    To date, the high spatial frequency algorithm (HSFA), which reduces image smoothing and increases spatial resolution, has been used for the evaluation of parenchymal lung diseases in thin-section high-resolution CT. In this study, we compared the ultrahigh spatial frequency algorithm (UHSFA) with the high spatial frequency algorithm in the assessment of thin-section images of the lung parenchyma. Three radiologists compared UHSFA and HSFA on identical CT images of a line-pair resolution phantom, one lung specimen, 2 patients with normal lungs and 18 patients with abnormal lung parenchyma. Scanning of the line-pair resolution phantom demonstrated no difference in resolution between the two techniques, but showed that the outer lines of the line pairs with maximal resolution looked thicker on UHSFA than on HSFA. Lung parenchymal detail with UHSFA was judged equal or superior to HSFA in 95% of images. Lung parenchymal sharpness was improved with UHSFA in all images. Although UHSFA resulted in an increase in visible noise, observers did not find that image noise interfered with image interpretation. The visual CT attenuation of normal lung parenchyma is minimally increased in images with HSFA. The overall visual preference of the images reconstructed with UHSFA was considered equal to or greater than that of those reconstructed with HSFA in 78% of images. The ultrahigh spatial frequency algorithm improved the overall visual quality of the images in pulmonary parenchymal high-resolution CT.

  2. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete formulation of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on the automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems were conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.

  3. DNA base-calling from a nanopore using a Viterbi algorithm.

    Science.gov (United States)

    Timp, Winston; Comer, Jeffrey; Aksimentiev, Aleksei

    2012-05-16

    Nanopore-based DNA sequencing is the most promising third-generation sequencing method. It has superior read length, speed, and sample requirements compared with state-of-the-art second-generation methods. However, base-calling still presents substantial difficulty because the resolution of the technique is limited compared with the measured signal/noise ratio. Here we demonstrate a method to decode 3-bp-resolution nanopore electrical measurements into a DNA sequence using a Hidden Markov model. This method shows tremendous potential for accuracy (~98%), even with a poor signal/noise ratio. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
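
    The decoding step lends itself to a compact Viterbi sketch: the HMM states are the 3-mers in the pore, emissions are modeled as Gaussians around per-3-mer current levels, and each 3-mer can transition to its four overlapping successors. The Gaussian emission model and the uniform transition probabilities are simplifying assumptions, and the calibration table (level_mean, level_std per 3-mer) is assumed to be given.

    ```python
    import numpy as np
    from itertools import product

    BASES = "ACGT"
    KMERS = ["".join(p) for p in product(BASES, repeat=3)]   # 64 HMM states
    IDX = {k: i for i, k in enumerate(KMERS)}

    def viterbi_basecall(signal, level_mean, level_std):
        """Viterbi decoding of a nanopore current trace into a 3-mer path.
        level_mean, level_std: length-64 arrays of emission parameters."""
        n_states, T = len(KMERS), len(signal)
        succ = [[IDX[k[1:] + b] for b in BASES] for k in KMERS]
        def emit(t):  # Gaussian log-likelihood of sample t under each state
            z = (signal[t] - level_mean) / level_std
            return -0.5 * z * z - np.log(level_std)
        delta = np.log(np.full(n_states, 1.0 / n_states)) + emit(0)
        back = np.zeros((T, n_states), dtype=int)
        for t in range(1, T):
            new = np.full(n_states, -np.inf)
            for s in range(n_states):
                cand = delta[s] - np.log(4.0)    # uniform over 4 successors
                for ns in succ[s]:
                    if cand > new[ns]:
                        new[ns], back[t, ns] = cand, s
            delta = new + emit(t)
        s = int(np.argmax(delta))
        path = [s]
        for t in range(T - 1, 0, -1):            # trace the best path back
            s = back[t, s]
            path.append(s)
        return [KMERS[s] for s in reversed(path)]  # overlapping 3-mers
    ```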

  4. Free vibration analysis of straight-line beam regarded as distributed system by combining Wittrick-Williams algorithm and transfer dynamic stiffness coefficient method

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Myung Soo; Yang, Kyong Uk [Chonnam National University, Yeosu (Korea, Republic of); Kondou, Takahiro [Kyushu University, Fukuoka (Japan); Bonkobara, Yasuhiro [University of Miyazaki, Miyazaki (Japan)

    2016-03-15

    We developed a method for analyzing the free vibration of a structure regarded as a distributed system, by combining the Wittrick-Williams algorithm and the transfer dynamic stiffness coefficient method. A computational algorithm was formulated for analyzing the free vibration of a straight-line beam regarded as a distributed system, to explain the concept of the developed method. To verify the effectiveness of the developed method, the natural frequencies of straight-line beams were computed using the finite element method, transfer matrix method, transfer dynamic stiffness coefficient method, the exact solution, and the developed method. By comparing the computational results of the developed method with those of the other methods, we confirmed that the developed method exhibited superior performance over the other methods in terms of computational accuracy, cost and user convenience.

  5. Agent-based station for on-line diagnostics by self-adaptive laser Doppler vibrometry

    Science.gov (United States)

    Serafini, S.; Paone, N.; Castellini, P.

    2013-12-01

    A self-adaptive diagnostic system based on laser vibrometry is proposed for quality control of mechanical defects by vibration testing; it is developed for appliances at the end of an assembly line, but its characteristics are generally suited for testing most types of electromechanical products. It consists of a laser Doppler vibrometer, equipped with scanning mirrors and a camera, which implements self-adaptive behaviour for optimizing the measurement. The system is conceived as a Quality Control Agent (QCA) and is part of a Multi Agent System that supervises the whole production line. The QCA behaviour is defined so as to minimize measurement uncertainty during the on-line tests and to compensate for target mis-positioning under the guidance of a vision system. The best measurement conditions are reached by maximizing the amplitude of the optical Doppler beat signal (signal quality) and consequently minimizing uncertainty. In this paper, the optimization strategy for measurement enhancement achieved by the down-hill algorithm (Nelder-Mead algorithm) and its effect on signal quality improvement are discussed. Tests on a washing machine in controlled operating conditions allow evaluation of the efficacy of the method; significant reduction of noise on vibration velocity spectra is observed. Results from on-line tests are presented, which demonstrate the potential of the system for industrial quality control.
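
    The measurement-point refinement reduces to derivative-free maximization of the Doppler beat amplitude over the two mirror angles, for which the Nelder-Mead (down-hill simplex) implementation in SciPy suffices. The hardware callback and the tolerance values below are assumptions.

    ```python
    from scipy.optimize import minimize

    def optimize_spot(measure_amplitude, start_angles):
        """Move the scanning mirrors so as to maximize the optical Doppler
        beat signal amplitude, i.e. minimize its negative, with Nelder-Mead.
        measure_amplitude(pan, tilt): assumed hardware callback returning
        the signal level at the current mirror angles."""
        res = minimize(lambda a: -measure_amplitude(a[0], a[1]),
                       x0=list(start_angles), method="Nelder-Mead",
                       options={"xatol": 1e-3, "fatol": 1e-3, "maxiter": 60})
        return res.x   # mirror angles giving the best signal quality
    ```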

  6. Surface quality monitoring for process control by on-line vibration analysis using an adaptive spline wavelet algorithm

    Science.gov (United States)

    Luo, G. Y.; Osypiw, D.; Irle, M.

    2003-05-01

    The dynamic behaviour of wood machining processes affects the surface finish quality of machined workpieces. In order to meet the requirements of increased production efficiency and improved product quality, surface quality information is needed for enhanced process control. However, current methods using high-priced devices or sophisticated designs may not be suitable for industrial real-time application. This paper presents a novel approach to surface quality evaluation by on-line vibration analysis using an adaptive spline wavelet algorithm, which is based on the excellent time-frequency localization of B-spline wavelets. A series of experiments has been performed to extract the feature: the correlation between amplitude changes in the relevant vibration frequency band(s) and the surface quality. The graphs of the experimental results demonstrate that the change of amplitude in the selected frequency bands with variable resolution (linear and non-linear) reflects the quality of the surface finish, and the root sum square of the wavelet power spectrum is a good indication of surface quality. Thus, surface quality can be estimated and quantified at an average level in real time. The results can be used to regulate and optimize the machine's feed speed, maintaining a constant spindle motor speed during cutting. This will lead to higher-level control and machining rates while keeping dimensional integrity and surface finish within specification.
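
    A minimal sketch of the quality indicator, assuming PyWavelets is available: decompose the vibration record with a B-spline (biorthogonal) wavelet and take the root sum square of the power in selected detail bands. The wavelet name and the band choice are illustrative, not the paper's exact configuration.

    ```python
    import numpy as np
    import pywt

    def surface_quality_index(vibration, wavelet="bior3.3", level=5,
                              bands=(1, 2)):
        """Root sum square of the wavelet power in selected detail bands
        of an on-line vibration record."""
        coeffs = pywt.wavedec(vibration, wavelet, level=level)
        # coeffs[0] is the approximation; coeffs[1:] are the detail bands,
        # ordered from coarsest to finest resolution
        power = [float(np.sum(c ** 2)) for c in coeffs[1:]]
        return float(np.sqrt(sum(power[b] for b in bands)))
    ```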

  7. APPLICATION OF A PRIMAL-DUAL INTERIOR POINT ALGORITHM USING EXACT SECOND ORDER INFORMATION WITH A NOVEL NON-MONOTONE LINE SEARCH METHOD TO GENERALLY CONSTRAINED MINIMAX OPTIMISATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    INTAN S. AHMAD

    2008-04-01

    Full Text Available This work presents the application of a primal-dual interior point method to minimax optimisation problems. The algorithm differs significantly from previous approaches as it involves a novel non-monotone line search procedure, which is based on the use of standard penalty methods as the merit function used for line search. The crucial novel concept is the discretisation of the penalty parameter used over a finite range of orders of magnitude and the provision of a memory list for each such order. An implementation within a logarithmic barrier algorithm for bounds handling is presented, with capabilities for large-scale application. Case studies presented demonstrate the capabilities of the proposed methodology, which relies on the reformulation of minimax models into standard nonlinear optimisation models. Some previously reported case studies from the open literature have been solved, with significantly better optimal solutions identified. We believe that the nature of the non-monotone line search scheme allows the search procedure to escape from local minima, hence the encouraging results obtained.

  8. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  9. Expeditious 3D Poisson-Vlasov algorithm applied to ion extraction from a plasma

    International Nuclear Information System (INIS)

    Whealton, J.H.; McGaffey, R.W.; Meszaros, P.S.

    1983-01-01

    A new 3D Poisson-Vlasov algorithm is under development which differs from a previous algorithm, referenced in this paper, in two respects: the mesh lines are Cartesian, and the Poisson equation is solved iteratively. The resulting algorithm has been used to examine the same boundary value problem as considered in the earlier algorithm, except that the number of nodes is 2 times greater. The same physical results were obtained, except that the computational time was reduced by a factor of 60 and the memory requirement by a factor of 10. This algorithm at present restricts Neumann boundary conditions to orthogonal planes lying along mesh lines. No such restriction applies to Dirichlet boundaries. An emittance diagram is shown below, where the points lying on the y = 0 line start on the axis of symmetry and those near the y = 1 line start near the slot end

  10. Mitigating energy loss on distribution lines through the allocation of reactors

    Science.gov (United States)

    Miranda, T. M.; Romero, F.; Meffe, A.; Castilho Neto, J.; Abe, L. F. T.; Corradi, F. E.

    2018-03-01

    This paper presents a methodology for the automatic allocation of reactors on medium voltage distribution lines to reduce energy loss. In Brazil, some feeders are distinguished by their long lengths and very low load, which results in a high influence of the line capacitance on the circuit's performance, requiring compensation through the installation of reactors. The automatic allocation is accomplished using an optimization meta-heuristic called the Global Neighbourhood Algorithm. Given a set of reactor models and a circuit, it outputs an optimal solution in terms of reduction of energy loss. The algorithm is also able to verify that the voltage limits determined by the user are not being violated, besides checking for energy quality. The methodology was implemented in a software tool, which can also show the allocation graphically. A simulation with four real feeders is presented in the paper. The obtained results reduced the energy loss significantly: by 50.56% in the worst case and 93.10% in the best case.

  11. Extracting potential bus lines of Customized City Bus Service based on public transport big data

    Science.gov (United States)

    Ren, Yibin; Chen, Ge; Han, Yong; Zheng, Huangcheng

    2016-11-01

    Customized City Bus Service (CCBS) can effectively reduce the traffic congestion and environmental pollution caused by the increase in private cars. This study aims to extract the potential bus lines of CCBS, and each line's passenger density, by mining public transport big data. The datasets used in this study are mainly Smart Card Data (SCD) and bus GPS data of Qingdao, China, from October 11th to November 7th, 2015. Firstly, we compute the temporal origin-destination (TOD) of passengers by mining SCD and bus GPS data. Compared with the traditional OD, the TOD not only has the spatial locations, but also contains the trip's boarding time. Secondly, based on the traditional DBSCAN algorithm, we put forward an algorithm named TOD-DBSCAN, combined with the spatial-temporal features of TOD. TOD-DBSCAN is used to cluster the TOD trajectories in the peak hours of all working days. Then, we define two variables P and N to describe the possibility and passenger density of a potential CCBS line: P is the probability of the CCBS line, and N represents the potential passenger density of the line. Lastly, we visualize the potential CCBS lines extracted by our procedure on the map and analyse the relationship between potential CCBS lines and the urban spatial structure.

  12. Implementation techniques and acceleration of DBPF reconstruction algorithm based on GPGPU for helical cone beam CT

    International Nuclear Information System (INIS)

    Shen Le; Xing Yuxiang

    2010-01-01

    The derivative backprojection filtered (DBPF) algorithm for helical cone-beam CT is a newly developed exact reconstruction method. Due to its large computational complexity, the reconstruction is rather slow for practical use. The general-purpose graphics processing unit (GPGPU) is a SIMD parallel hardware architecture with powerful floating-point operation capacity. In this paper, we propose a new method for PI-line choice and sampling grid, and a parallelized PI-line reconstruction algorithm implemented on NVIDIA's Compute Unified Device Architecture (CUDA). Numerical simulation studies are carried out to validate our method. Compared with a conventional CPU implementation, the CUDA-accelerated method provides images of the same quality with a speedup factor of 318. Optimization strategies for the GPU acceleration are presented. Finally, the influence of the parameters of the PI-line samples on the reconstruction speed and image quality is discussed. (authors)

  13. On-line plant-wide monitoring using neural networks

    International Nuclear Information System (INIS)

    Turkcan, E.; Ciftcioglu, O.; Eryurek, E.; Upadhyaya, B.R.

    1992-06-01

    An on-line signal analysis system designed for multi-level mode operation using neural networks is described. The system is capable of monitoring the plant state by tracking up to 32 signals simultaneously. The data used for this study were acquired from the Borssele Nuclear Power Plant (PWR type) using the on-line monitoring system. An on-line plant-wide monitoring study using a multilayer neural network model is discussed in this paper. The back-propagation neural network algorithm is used for training the network. The technique assumes that each physical state of the power plant can be represented by a unique pattern of instrument readings which can be related to the condition of the plant. When a disturbance occurs, the sensor readings undergo a transient and form a different set of patterns which represent the new operational status. Diagnosing these patterns can be helpful in identifying this new state of the power plant. To this end, plant-wide monitoring with neural networks is one of the new techniques in real-time applications. (author). 9 refs.; 5 figs.

  14. Research on Energy-Saving Operation Strategy for Multiple Trains on the Urban Subway Line

    Directory of Open Access Journals (Sweden)

    Jianqiang Liu

    2017-12-01

    Full Text Available Energy consumption for multiple trains on an urban subway line is predominantly affected by the operation strategy. This paper proposes an energy-saving operation strategy for multiple trains which is suitable for various line conditions and complex train schedules. The model and operation modes of the strategy are illustrated in detail, aiming to make full use of regenerative braking energy in complex multi-train systems with different departure intervals and dwell times. A computing method based on a heuristic algorithm is proposed to obtain the optimum operation curve for each train. A simulation based on a real urban subway line is provided to verify the correctness and effectiveness of the proposed energy-saving operation strategy.

  15. The impact of signal normalization on seizure detection using line length features.

    Science.gov (United States)

    Logesparan, Lojini; Rodriguez-Villegas, Esther; Casson, Alexander J

    2015-10-01

    Accurate automated seizure detection remains a desirable but elusive target for many neural monitoring systems. While much attention has been given to the different feature extractions that can be used to highlight seizure activity in the EEG, very little formal attention has been given to the normalization that these features are routinely paired with. This normalization is essential in patient-independent algorithms to correct for broad-level differences in the EEG amplitude between people, and in patient-dependent algorithms to correct for amplitude variations over time. It is crucial, however, that the normalization used does not have a detrimental effect on the seizure detection process. This paper presents the first formal investigation into the impact of signal normalization techniques on seizure discrimination performance when using the line length feature to emphasize seizure activity. Comparing five normalization methods, based upon the mean, median, standard deviation, signal peak and signal range, we demonstrate differences in seizure detection accuracy (assessed as the area under a sensitivity-specificity ROC curve) of up to 52 %. This is despite the same analysis feature being used in all cases. Further, changes in performance of up to 22 % are present depending on whether the normalization is applied to the raw EEG itself or directly to the line length feature. Our results highlight the median decaying memory as the best current approach for providing normalization when using line length features, and they quantify the under-appreciated challenge of providing signal normalization that does not impair seizure detection algorithm performance.
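
    For concreteness, here is a sketch of the line length feature together with a median-tracking decaying memory of the kind the paper recommends; the stepwise update rule and the forgetting factor are our own simplifications.

    ```python
    import numpy as np

    def line_length(epoch):
        """Line length: summed absolute differences of consecutive samples."""
        return float(np.abs(np.diff(epoch)).sum())

    def normalized_line_lengths(epochs, alpha=0.05):
        """Divide each epoch's line length by a decaying-memory estimate
        that tracks the median feature level: the estimate takes a small
        multiplicative step towards each new value, so rare large (seizure)
        excursions barely move it."""
        out, m = [], None
        for e in epochs:
            ll = line_length(e)
            if m is None:
                m = ll
            else:
                m += alpha * m * np.sign(ll - m)   # stepwise median tracking
            out.append(ll / (m + 1e-12))           # values >> 1 are suspect
        return out
    ```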

  16. Computationally efficient model predictive control algorithms: a neural network approach

    CERN Document Server

    Ławryńczuk, Maciej

    2014-01-01

    This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include:

    · A few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction.
    · Implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models.
    · The MPC algorithms based on neural multi-models (inspired by the idea of predictive control).
    · The MPC algorithms with neural approximation with no on-line linearization.
    · The MPC algorithms with guaranteed stability and robustness.
    · Cooperation between the MPC algorithms and set-point optimization.

    Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...

  17. A quick survey of text categorization algorithms

    Directory of Open Access Journals (Sweden)

    Dan MUNTEANU

    2007-12-01

    Full Text Available This paper contains an overview of basic formulations and approaches to text classification. It surveys the algorithms used in text categorization: handcrafted rules, decision trees, decision rules, on-line learning, linear classifiers, Rocchio's algorithm, k Nearest Neighbor (kNN) and Support Vector Machines (SVM).

  18. A stepwise algorithm using an at-a-glance first-line test for the non-invasive diagnosis of advanced liver fibrosis and cirrhosis.

    Science.gov (United States)

    Boursier, Jérôme; de Ledinghen, Victor; Leroy, Vincent; Anty, Rodolphe; Francque, Sven; Salmon, Dominique; Lannes, Adrien; Bertrais, Sandrine; Oberti, Frederic; Fouchard-Hubert, Isabelle; Calès, Paul

    2017-06-01

    Chronic liver diseases (CLD) are common, and are therefore mainly managed by non-hepatologists. These physicians lack access to the best non-invasive tests of liver fibrosis, and consequently cannot accurately determine the disease severity. Referral to a hepatologist is then needed. We aimed to implement an algorithm, comprising a new first-line test usable by all physicians, for the detection of advanced liver fibrosis in all CLD patients. Diagnostic study: 3754 CLD patients with liver biopsy were 2:1 randomized into derivation and validation sets. Prognostic study: longitudinal follow-up of 1275 CLD patients with baseline fibrosis tests. Diagnostic study: the easy liver fibrosis test (eLIFT), an "at-a-glance" sum of points attributed to age, gender, gamma-glutamyl transferase, aspartate aminotransferase (AST), platelets and prothrombin time, was developed for the diagnosis of advanced fibrosis. In the validation set, eLIFT and fibrosis-4 (FIB4) had the same sensitivity (78.0% vs. 76.6%, p=0.470) but eLIFT gave fewer false positive results, especially in patients ≥60 years old (53.8% vs. 82.0%, p < 0.001), and was therefore chosen as the first-line test. FibroMeter with vibration controlled transient elastography (FibroMeter VCTE) was the most accurate among the eight fibrosis tests evaluated. The sensitivity of the eLIFT-FibroMeter VCTE algorithm (first-line eLIFT, second-line FibroMeter VCTE) was 76.1% for advanced fibrosis and 92.1% for cirrhosis. Prognostic study: patients diagnosed as having "no/mild fibrosis" by the algorithm had an excellent liver-related prognosis, and thus no need for referral to a hepatologist. The eLIFT-FibroMeter VCTE algorithm extends the detection of advanced liver fibrosis to all CLD patients and reduces unnecessary referrals of patients without significant CLD to hepatologists. Blood fibrosis tests and transient elastography accurately diagnose advanced liver fibrosis in the large population of patients having chronic liver disease, but these non-invasive tests are only currently available in specialized

  19. Labelling subway lines

    NARCIS (Netherlands)

    Garrido, M.A.; Iturriaga, C.; Márquez, A.; Portillo, J.R.; Reyes, P.; Wolff, A.; Eades, P.; Takaoka, T.

    2001-01-01

    Graphical features on maps, charts, diagrams and graph drawings usually must be annotated with text labels in order to convey their meaning. In this paper we focus on a problem that arises when labeling schematized maps, e.g. for subway networks. We present algorithms for labeling points on a line

  1. A contrario line segment detection

    CERN Document Server

    von Gioi, Rafael Grompone

    2014-01-01

    The reliable detection of low-level image structures is an old and still challenging problem in computer vision. This book leads a detailed tour through the LSD algorithm, a line segment detector designed to be fully automatic. Based on the a contrario framework, the algorithm works efficiently without the need of any parameter tuning. The design criteria are thoroughly explained and the algorithm's good and bad results are illustrated on real and synthetic images. The issues involved, as well as the strategies used, are common to many geometrical structure detection problems and some possible

  2. Optimization of Aero Engine Acceleration Control in Combat State Based on Genetic Algorithms

    Science.gov (United States)

    Li, Jie; Fan, Ding; Sreeram, Victor

    2012-03-01

    In order to fully exploit the potential of the aero engine and improve acceleration performance in the combat state, an on-line optimized controller based on genetic algorithms is designed for an aero engine. To test the validity of the presented control method, detailed joint simulation tests of the designed controller and the aero engine model are performed over the whole flight envelope. Simulation test results show that the presented control algorithm has rapid convergence speed and high efficiency, and can fully exploit the acceleration performance potential of the aero engine. Compared with the former controller, the designed on-line optimized controller (DOOC) can improve the security of the acceleration process and greatly enhance the aero engine thrust over the whole flight envelope; the thrust increases by an average of 8.1% in the randomly selected working states. A plane which adopts DOOC can acquire a better fighting advantage in the combat state.

  3. Binary GCD like Algorithms for Some Complex Quadratic Rings

    DEFF Research Database (Denmark)

    Agarwal, Saurabh; Frandsen, Gudmund Skovbjerg

    2004-01-01

    On the lines of the binary gcd algorithm for rational integers, algorithms for computing the gcd are presented for the rings of integers of certain imaginary quadratic fields Q(√d). In particular, a binary gcd like algorithm is presented for a unique factorization domain which is not Euclidean (case d = -19). Together with the earlier known b...
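
    For reference, the classical binary gcd on rational integers that such algorithms generalize: it replaces division with parity tests, halvings and subtractions.

    ```python
    def binary_gcd(a, b):
        """Stein's binary gcd: gcd via shifts and subtraction only."""
        a, b = abs(a), abs(b)
        if a == 0:
            return b
        if b == 0:
            return a
        shift = 0
        while (a | b) & 1 == 0:        # both even: 2 is a common factor
            a, b, shift = a >> 1, b >> 1, shift + 1
        while a & 1 == 0:              # make a odd
            a >>= 1
        while b:
            while b & 1 == 0:          # 2 cannot divide the gcd now
                b >>= 1
            if a > b:
                a, b = b, a
            b -= a                     # difference of two odd numbers is even
        return a << shift
    ```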

  4. A modified three-term PRP conjugate gradient algorithm for optimization models.

    Science.gov (United States)

    Wu, Yanlin

    2017-01-01

    The nonlinear conjugate gradient (CG) algorithm is a very effective method for optimization, especially for large-scale problems, because of its low memory requirement and simplicity. Zhang et al. (IMA J. Numer. Anal. 26:629-649, 2006) first proposed a three-term CG algorithm based on the well-known Polak-Ribière-Polyak (PRP) formula for unconstrained optimization; their method has the sufficient descent property without any line search technique. They proved global convergence under the Armijo line search, but this fails for the Wolfe line search technique. Inspired by their method, we make a further study and give a modified three-term PRP CG algorithm. The presented method possesses the following features: (1) the sufficient descent property holds without any line search technique; (2) the trust region property of the search direction is automatically satisfied; (3) the steplength is bounded from below; (4) global convergence is established under the Wolfe line search. Numerical results show that the new algorithm is more effective than the normal method.
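
    A sketch of a three-term PRP direction with a backtracking Armijo line search, in the spirit of the methods discussed above. The safeguards that give the modified variant its trust-region property are omitted; only the basic three-term update is shown, which satisfies g'd = -||g||² by construction.

    ```python
    import numpy as np

    def ttprp_minimize(f, grad, x0, tol=1e-6, max_iter=500):
        """Three-term PRP conjugate gradient with Armijo backtracking."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            fx, slope = f(x), g @ d        # slope < 0: descent direction
            t = 1.0
            while f(x + t * d) > fx + 1e-4 * t * slope and t > 1e-12:
                t *= 0.5                    # Armijo backtracking
            x_new = x + t * d
            g_new = grad(x_new)
            y, gg = g_new - g, g @ g
            beta = (g_new @ y) / gg             # PRP coefficient
            theta = (g_new @ d) / gg            # third-term coefficient
            d = -g_new + beta * d - theta * y   # three-term PRP direction
            x, g = x_new, g_new
        return x
    ```

    For instance, ttprp_minimize(lambda x: ((x - 1) ** 2).sum(), lambda x: 2 * (x - 1), np.zeros(10)) converges to the all-ones vector.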

  5. On the Latent Variable Interpretation in Sum-Product Networks.

    Science.gov (United States)

    Peharz, Robert; Gens, Robert; Pernkopf, Franz; Domingos, Pedro

    2017-10-01

    One of the central themes in Sum-Product Networks (SPNs) is the interpretation of sum nodes as marginalized latent variables (LVs). This interpretation yields an increased syntactic or semantic structure, and allows the application of the EM algorithm and efficient MPE inference. In the literature, the LV interpretation was justified by explicitly introducing the indicator variables corresponding to the LVs' states. However, as pointed out in this paper, this approach is in conflict with the completeness condition in SPNs and does not fully specify the probabilistic model. We propose a remedy for this problem by modifying the original approach for introducing the LVs, which we call SPN augmentation. We discuss conditional independencies in augmented SPNs, formally establish the probabilistic interpretation of the sum-weights and give an interpretation of augmented SPNs as Bayesian networks. Based on these results, we find a sound derivation of the EM algorithm for SPNs. Furthermore, the Viterbi-style algorithm for MPE proposed in the literature was never proven to be correct. We show that this is indeed a correct algorithm when applied to selective SPNs, and in particular when applied to augmented SPNs. Our theoretical results are confirmed in experiments on synthetic data and 103 real-world datasets.
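
    The Viterbi-style MPE pass is easy to sketch on a tiny SPN over Bernoulli leaves: sum nodes take a weighted max instead of a weighted sum, remember the winning child, and the winning branches are traced back to read off an assignment. Exactness of this max-product pass holds for selective (and augmented) SPNs, as the paper establishes; the classes below are a toy illustration, not a library API.

    ```python
    import numpy as np

    class Leaf:
        """Univariate Bernoulli leaf over variable `var`."""
        def __init__(self, var, p1):
            self.var, self.p1 = var, p1
        def mpe(self):  # (best log-value, partial assignment)
            if self.p1 >= 0.5:
                return np.log(self.p1), {self.var: 1}
            return np.log(1.0 - self.p1), {self.var: 0}

    class Product:
        def __init__(self, children):
            self.children = children     # children have disjoint scopes
        def mpe(self):
            logp, assign = 0.0, {}
            for c in self.children:
                lp, a = c.mpe()
                logp += lp
                assign.update(a)
            return logp, assign

    class Sum:
        def __init__(self, children, weights):
            self.children, self.w = children, np.asarray(weights)
        def mpe(self):
            # Viterbi-style max-product: keep the best weighted branch
            scored = [c.mpe() for c in self.children]
            vals = [np.log(w) + lp for w, (lp, _) in zip(self.w, scored)]
            k = int(np.argmax(vals))
            lp, a = scored[k]
            return np.log(self.w[k]) + lp, a
    ```

    For example, Sum([Product([Leaf(0, 0.9), Leaf(1, 0.2)]), Product([Leaf(0, 0.1), Leaf(1, 0.7)])], [0.6, 0.4]).mpe() returns the maximizing assignment {0: 1, 1: 0} together with its log-probability.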

  6. Inversion algorithms for the spherical Radon and cosine transform

    International Nuclear Information System (INIS)

    Louis, A K; Riplinger, M; Spiess, M; Spodarev, E

    2011-01-01

    We consider two integral transforms which are frequently used in integral geometry and related fields, namely the spherical Radon and cosine transform. Fast algorithms are developed which invert the respective transforms in a numerically stable way. So far, only theoretical inversion formulae or algorithms for atomic measures have been derived, which are not so important for applications. We focus on two- and three-dimensional cases, where we also show that our method leads to a regularization. Numerical results are presented and show the validity of the resulting algorithms. First, we use synthetic data for the inversion of the Radon transform. Then we apply the algorithm for the inversion of the cosine transform to reconstruct the directional distribution of line processes from finitely many intersections of their lines with test lines (2D) or planes (3D), respectively. Finally we apply our method to analyse a series of microscopic two- and three-dimensional images of a fibre system

  7. Exploratory Analysis of an On-line Evolutionary Algorithm in Simulated Robots

    NARCIS (Netherlands)

    Haasdijk, E.W.; Smit, S.K.; Eiben, A.E.

    2012-01-01

    In traditional evolutionary robotics, robot controllers are evolved in a separate design phase preceding actual deployment; we call this off-line evolution. Alternatively, robot controllers can evolve while the robots perform their proper tasks, during the actual operational phase; we call this on-line evolution.

  8. The psychopharmacology algorithm project at the Harvard South Shore Program: an algorithm for acute mania.

    Science.gov (United States)

    Mohammad, Othman; Osser, David N

    2014-01-01

    This new algorithm for the pharmacotherapy of acute mania was developed by the Psychopharmacology Algorithm Project at the Harvard South Shore Program. The authors conducted a literature search in PubMed and reviewed key studies, other algorithms and guidelines, and their references. Treatments were prioritized on the basis of three main considerations: (1) effectiveness in treating the current episode, (2) preventing potential relapses to depression, and (3) minimizing side effects over the short and long term. The algorithm presupposes that clinicians have made an accurate diagnosis, decided how to manage contributing medical causes (including substance misuse), discontinued antidepressants, and considered the patient's childbearing potential. We propose different algorithms for mixed and nonmixed mania. Patients with mixed mania may be treated first with a second-generation antipsychotic, of which the first choice is quetiapine because of its greater efficacy for depressive symptoms and episodes in bipolar disorder. Valproate and then either lithium or carbamazepine may be added. For nonmixed mania, lithium is the first-line recommendation. A second-generation antipsychotic can be added. Again, quetiapine is favored, but if quetiapine is unacceptable, risperidone is the next choice. Olanzapine is not considered a first-line treatment due to its long-term side effects, but it could be second-line. If the patient, whether mixed or nonmixed, is still refractory to the above medications, then depending on what has already been tried, consider carbamazepine, haloperidol, olanzapine, risperidone, and valproate first tier; aripiprazole, asenapine, and ziprasidone second tier; and clozapine third tier (because of its weaker evidence base and greater side effects). Electroconvulsive therapy may be considered at any point in the algorithm if the patient has a history of positive response or is intolerant of medications.

  9. Parameterized Algorithms for Survivable Network Design with Uniform Demands

    DEFF Research Database (Denmark)

    Bang-Jensen, Jørgen; Klinkby Knudsen, Kristine Vitting; Saurabh, Saket

    2018-01-01

    problem in combinatorial optimization that captures numerous well-studied problems in graph theory and graph algorithms. Consequently, there is a long line of research into exact-polynomial time algorithms as well as approximation algorithms for various restrictions of this problem. An important...... that SNDP is W[1]-hard for both arc and vertex connectivity versions on digraphs. The core of our algorithms is composed of new combinatorial results on connectivity in digraphs and undirected graphs....

  10. On-line fouling monitor for heat exchangers

    International Nuclear Information System (INIS)

    Tsou, J.L.

    1995-01-01

    Biological and/or chemical fouling in utility service water system heat exchangers adversely affects operation and maintenance costs, and reduced heat transfer capability can force a power derating or even a plant shutdown. In addition, service water heat exchanger performance is a safety issue for nuclear power plants, an issue highlighted by the NRC in Generic Letter 89-13. Heat transfer losses due to fouling are difficult to measure and, usually, quantitative assessment of the impact of fouling is impossible. Plant operators typically measure inlet and outlet water temperatures and flow rates and then perform complex calculations for heat exchanger fouling resistance or "cleanliness". These direct estimates are often imprecise due to inadequate instrumentation. The Electric Power Research Institute developed and patented an on-line condenser fouling monitor. This monitor may be installed in any location within the condenser; does not interfere with routine plant operations, including on-line mechanical and chemical treatment methods; and provides continuous, real-time readings of the heat transfer efficiency of the instrumented tube. The instrument can be modified to perform on-line monitoring of service water heat exchangers. This paper discusses the design and construction of the new monitor, and the algorithm used to calculate service water heat exchanger fouling.

  11. Stability and chaos of LMSER PCA learning algorithm

    International Nuclear Information System (INIS)

    Lv, Jiancheng; Zhang, Yi

    2007-01-01

    The LMSER PCA algorithm is a principal component analysis algorithm used to extract principal components on-line from input data. The algorithm exhibits both stability and chaotic dynamic behavior under certain conditions. This paper studies the local stability of the LMSER PCA algorithm via a corresponding deterministic discrete-time system, and conditions for local stability are derived. The paper also explores the chaotic behavior of the algorithm, showing that the LMSER PCA algorithm can produce chaos. Waveform plots, Lyapunov exponents and bifurcation diagrams are presented to illustrate the existence of this chaotic behavior.

  12. Dehazed Image Quality Assessment by Haze-Line Theory

    Science.gov (United States)

    Song, Yingchao; Luo, Haibo; Lu, Rongrong; Ma, Junkai

    2017-06-01

    Images captured in bad weather suffer from low contrast and faint color. Recently, plenty of dehazing algorithms have been proposed to enhance visibility and restore color. However, there is a lack of evaluation metrics to assess the performance of these algorithms or to rate them. In this paper, an indicator of contrast enhancement is proposed based on the recently proposed haze-line theory. The theory assumes that the colors of a haze-free image are well approximated by a few hundred distinct colors, which form tight clusters in RGB space. The presence of haze makes each color cluster form a line, which is named a haze-line. Using these haze-lines, we assess the performance of dehazing algorithms designed to enhance contrast by measuring the inter-cluster deviations between different colors of the dehazed image. Experimental results demonstrate that the proposed Color Contrast (CC) index correlates well with human judgments of image contrast obtained in a subjective test on various scenes of dehazed images, and performs better than state-of-the-art metrics.
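
    A minimal sketch of the idea behind the CC index; the paper's exact formula is not reproduced, so the mean pairwise centroid distance below is only a stand-in for the inter-cluster deviation measure, and the function name and n_clusters default are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        def color_contrast(img, n_clusters=200):
            # img: HxWx3 float RGB in [0, 1]; haze-line theory expects a
            # few hundred tight color clusters in a haze-free image
            pixels = img.reshape(-1, 3)
            km = KMeans(n_clusters=n_clusters, n_init=3).fit(pixels)
            c = km.cluster_centers_
            # stand-in contrast score: mean pairwise distance between the
            # cluster centres (larger spread = stronger color contrast)
            d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
            return d.sum() / (len(c) * (len(c) - 1))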

  13. Intelligent PID controller based on ant system algorithm and fuzzy inference and its application to bionic artificial leg

    Institute of Scientific and Technical Information of China (English)

    谭冠政; 曾庆冬; 李文斌

    2004-01-01

    A design method for intelligent proportional-integral-derivative (PID) controllers was proposed based on the ant system algorithm and fuzzy inference. This kind of controller is called a fuzzy-ant system PID controller. It consists of an off-line part and an on-line part. In the off-line part, for a given control system with a PID controller, by taking the overshoot, settling time and steady-state error of the system's unit step response as the performance indexes and by using the ant system algorithm, a group of optimal PID parameters Kp*, Ti* and Td* can be obtained, which are used as the initial values for the on-line tuning of the PID parameters. In the on-line part, based on Kp*, Ti* and Td* and according to the current system error e and its time derivative, a specific program optimizes and adjusts the PID parameters on-line through a fuzzy inference mechanism, to ensure that the system response has optimal transient and steady-state performance. This kind of intelligent PID controller can be used to control the motor of the intelligent bionic artificial leg designed by the authors. Computer simulation results show that the controller has less overshoot and a shorter settling time.
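
    A minimal sketch of the two-part scheme under stated assumptions: Kp*, Ti*, Td* are taken from the off-line ant-system optimisation, and a crude one-rule gain scaling stands in for the paper's fuzzy inference mechanism (the actual rule base is not reproduced).

        def fuzzy_gain_scale(e, e_big=1.0):
            # stand-in for fuzzy inference (assumption): a larger error
            # pushes Kp up and Kd down; e_big sets full membership of
            # the fuzzy set "error is big"
            m = min(abs(e) / e_big, 1.0)
            return 1.0 + 0.5 * m, 1.0 - 0.3 * m     # scales for Kp, Kd

        class FuzzyPID:
            def __init__(self, kp, ti, td):          # Kp*, Ti*, Td* from
                self.kp, self.ti, self.td = kp, ti, td   # off-line tuning
                self.i, self.prev_e = 0.0, 0.0
            def step(self, e, dt):
                de = (e - self.prev_e) / dt
                skp, skd = fuzzy_gain_scale(e)
                self.i += e * dt
                u = (self.kp * skp) * e \
                    + (self.kp / self.ti) * self.i \
                    + (self.kp * self.td * skd) * de
                self.prev_e = e
                return u                             # control signal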

  14. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = (3/2)n - 2 is the solution to the above ...

  15. Accurate fault location algorithm on power transmission lines with use of two-end unsynchronized measurements

    Directory of Open Access Journals (Sweden)

    Mohamed Dine

    2012-01-01

    This paper presents a new approach to fault location on power transmission lines. The approach uses two-end unsynchronised measurements of the line and benefits from the advantages of digital technology and numerical relaying, which are available today and can easily be applied for off-line analysis. The idea is to modify the apparent impedance method using a very simple first-order formula. The new method is independent of fault resistance, source impedances and pre-fault currents. In addition, the data volume communicated between relays is small enough to be transmitted easily over a digital protection channel. The proposed approach is tested via digital simulation using MATLAB, and the test results corroborate its superior performance.

  16. Generalized emittance measurements in a beam transport line

    International Nuclear Information System (INIS)

    Skelly, J.; Gardner, C.; Luccio, A.; Kponou, A.; Reece, K.

    1991-01-01

    Motivated by the need to commission 3 beam transport lines for the new AGS Booster project, we have developed a generalized emittance-measurement program; beam line specifics are entirely resident in data tables, not in program code. For instrumentation, the program requires one or more multi-wire profile monitors; one or multiple profiles are acquired from each monitor, corresponding to one or multiple tunes of the transport line. Emittances and Twiss parameters are calculated using generalized algorithms. The required matrix descriptions of the beam optics are constructed by an on-line general beam modeling program. The design of the program, its algorithms, and initial experience with it are described. 4 refs., 2 figs., 1 tab

  17. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    Science.gov (United States)

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.

  18. Metaheuristic approaches to order sequencing on a unidirectional picking line

    Directory of Open Access Journals (Sweden)

    AP de Villiers

    2013-06-01

    In this paper the sequencing of orders on a unidirectional picking line is considered. The aim of the order sequencing is to minimise the number of cycles travelled by a picker within the picking line to complete all orders. A tabu search, simulated annealing, a genetic algorithm, generalised extremal optimisation and a random local search are presented as possible solution approaches. Computational results based on real-life data instances are presented for these metaheuristics and compared to the performance of a lower bound and the solutions used in practice. The random local search exhibits the best overall solution quality; however, the generalised extremal optimisation approach delivers comparable results in considerably shorter computational times.

  19. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

    Frigo, Leiserson, Prokop and Ramachandran in 1999 introduced the ideal-cache model as a formal model of computation for developing algorithms in environments with multiple levels of caching, and coined the terminology of cache-oblivious algorithms. Cache-oblivious algorithms are described...... as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The result is algorithms that automatically...... apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al....

  20. A modular multi-microcomputer system for on-line vibration diagnostics

    International Nuclear Information System (INIS)

    Saedtler, E.

    1988-01-01

    A new modular multi-microprocessor system for on-line vibration monitoring and diagnostics of PWRs is described. The aim of the system is to make feasible the early detection of developing failures in relevant regions of a reactor plant, to verify the mechanical integrity of the investigated components, and therefore to improve the operational safety of the plant. After a discussion of the implemented surveillance methods and algorithms, which are based on hierarchically structured identification (estimation) and statistical pattern recognition tools, the system architecture (software and hardware) is described. The classification scheme itself works sequentially, so that samples (or features) can arrive on-line. This on-line classification is important in order to take necessary actions in time. Furthermore, the system has learning capabilities, which means it is adaptable to different, varying states and plant conditions. The main features of the system are presented and its contribution to the automation of complex surveillance and monitoring tasks is shown. (author)

  1. Multiuser detection and channel estimation: Exact and approximate methods

    DEFF Research Database (Denmark)

    Fabricius, Thomas

    2003-01-01

    subtractive interference cancellation with hyperbolic tangent tentative decision device, in statistical mechanics and machine learning called the naive mean field approach. The differences between the proposed algorithms lie in how the bias is estimated/approximated. We propose approaches based on a second...... propose here to use accurate approximations borrowed from statistical mechanics and machine learning. These give us various algorithms that all can be formulated in a subtractive interference cancellation formalism. The suggested algorithms can effectively be seen as bias corrections to standard...... of the Junction Tree Algorithm, which is a generalisation of Pearl's Belief Propagation, the BCJR, sum product, min/max sum, and Viterbi's algorithm. Although efficient algorithms, they have an inherent exponential complexity in the number of users when applied to CDMA multiuser detection. For this reason we...

  2. Minimization over randomly selected lines

    Directory of Open Access Journals (Sweden)

    Ismet Sahin

    2013-07-01

    This paper presents a population-based evolutionary optimization method for minimizing a given cost function. The mutation operator of this method selects randomly oriented lines in the cost function domain, constructs quadratic functions interpolating the cost function at three different points over each line, and uses the extrema of the quadratics as mutated points. The crossover operator modifies each mutated point based on components of two points in the population, instead of one point as is usual in other evolutionary algorithms. The stopping criterion of this method depends on the number of almost degenerate quadratics. We demonstrate that the proposed method with these mutation and crossover operations achieves faster and more robust convergence than the well-known Differential Evolution and Particle Swarm algorithms.
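
    The mutation operator is simple to sketch: fit a parabola to the cost at three points along a random line and jump to its vertex. A minimal Python version under stated assumptions (the sampling scale and degeneracy tolerance are illustrative, not the paper's settings):

        import numpy as np

        def quadratic_mutation(f, x, scale=1.0):
            # pick a random unit direction in the cost-function domain
            d = np.random.randn(len(x))
            d /= np.linalg.norm(d)
            # interpolate f at three points on the line x + t*d
            t = np.array([-scale, 0.0, scale])
            y = np.array([f(x + ti * d) for ti in t])
            a, b, _ = np.polyfit(t, y, 2)     # y ≈ a t^2 + b t + c
            if a <= 1e-12:                    # (almost) degenerate quadratic:
                return x                      # no reliable minimum along d
            return x + (-b / (2 * a)) * d     # vertex of the parabola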

  3. A new extraction method of loess shoulder-line based on Marr-Hildreth operator and terrain mask.

    Directory of Open Access Journals (Sweden)

    Sheng Jiang

    Loess shoulder-lines are significant structural lines which divide the complicated loess landform into loess interfluves and gully-slope lands. Existing extraction algorithms for shoulder-lines are mainly based on local maxima of terrain features. These algorithms are sensitive to noise on the complicated loess surface, and their extraction parameters are difficult to determine, often making the extraction results inaccurate. This paper presents a new extraction approach for loess shoulder-lines, in which the Marr-Hildreth edge operator is employed to construct initial shoulder-lines. A terrain mask for confining the boundary of shoulder-lines is then proposed, based on slope-degree classification and morphology methods, avoiding interference from non-valley areas and modifying the initial loess shoulder-lines. A case study is conducted in Yijun, located in the northern Shaanxi Loess Plateau of China, using Digital Elevation Models with a grid size of 5 m as original data. To obtain optimal scale parameters, the Euclidean Distance Offset Percentages between the shoulder-lines extracted by the Marr-Hildreth operator and manual delineations are calculated. The experimental results show that the new method achieves the highest extraction accuracy when σ = 5 in the Gaussian smoothing. According to the accuracy assessment, the average extraction accuracy is about 88.5%, which indicates that the proposed method is applicable for the extraction of loess shoulder-lines in loess hilly and gully areas.
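
    A minimal sketch of the two ingredients, assuming scipy's Laplacian-of-Gaussian as the Marr-Hildreth operator and a plain slope threshold standing in for the paper's slope-classification-plus-morphology mask (the threshold value is illustrative):

        import numpy as np
        from scipy import ndimage

        def marr_hildreth_edges(dem, sigma=5.0):
            # Laplacian-of-Gaussian response (sigma = 5 was optimal in the
            # paper's test area); zero crossings mark candidate edges
            log = ndimage.gaussian_laplace(dem.astype(float), sigma=sigma)
            s = np.sign(log)
            zc = np.zeros(s.shape, dtype=bool)
            zc[:-1, :] |= s[:-1, :] != s[1:, :]    # sign change, vertical
            zc[:, :-1] |= s[:, :-1] != s[:, 1:]    # sign change, horizontal
            return zc

        def slope_mask(dem, cell=5.0, min_slope_deg=15.0):
            # terrain mask (threshold is an assumption): keep steep
            # gully-slope cells so non-valley zero crossings are discarded
            gy, gx = np.gradient(dem.astype(float), cell)
            slope = np.degrees(np.arctan(np.hypot(gx, gy)))
            return slope >= min_slope_deg

        # shoulder-line candidates = LoG zero crossings inside the mask:
        # candidates = marr_hildreth_edges(dem) & slope_mask(dem)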

  4. A multi-objective approach to the assignment of stock keeping units to unidirectional picking lines

    Directory of Open Access Journals (Sweden)

    Le Roux, G. J.

    2017-05-01

    An order picking system in a distribution centre consisting of parallel unidirectional picking lines is considered. The objectives are to minimise the walking distance of the pickers, the largest volume of stock on a picking line over all picking lines, the number of small packages, and the total penalty incurred for late distributions. The problem is formulated as a multi-objective multiple knapsack problem that is not solvable in realistic time. Population-based algorithms, including the artificial bee colony algorithm and the genetic algorithm, are therefore implemented. The results obtained from all algorithms indicate a substantial improvement on all objectives relative to historical assignments. The genetic algorithm delivers the best performance.

  5. On an edge partition and root graphs of some classes of line graphs

    Directory of Open Access Journals (Sweden)

    K Pravas

    2017-04-01

    The Gallai and the anti-Gallai graphs of a graph G are complementary pairs of spanning subgraphs of the line graph of G. In this paper we find some structural relations between these graph classes by finding a partition of the edge set of the line graph of a graph G into the edge sets of the Gallai and anti-Gallai graphs of G. Based on this, an optimal algorithm to find the root graph of a line graph is obtained. Moreover, root graphs of diameter-maximal, distance-hereditary, Ptolemaic and chordal graphs are also discussed.

  6. On increasing the spectral efficiency and transmissivity in the data transmission channel on the spacecraft-ground tracking station line

    Science.gov (United States)

    Andrianov, M. N.; Kostenko, V. I.; Likhachev, S. F.

    2018-01-01

    Algorithms for achieving a practical increase in the data transmission rate on the spacecraft-ground tracking station line are considered. The increase is achieved by applying spectrally efficient modulation techniques and the technology of orthogonal frequency compression of signals, using millimeter-range radio waves. The advantages and disadvantages of each of the three algorithms are identified. A significant advantage of data transmission in the millimeter range is indicated.

  7. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    Science.gov (United States)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are: (1) detector fabrication inaccuracies; (2) non-linearity and variations in the read-out electronics; and (3) optical path effects. The non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are often divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. As non-uniformity drifts temporally, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not need, so SBNUC algorithms have become an essential part of infrared imaging systems. The poor robustness of SBNUC algorithms often leads to two defects: artifacts and over-correction; meanwhile, due to complicated calculation processes and large storage consumption, hardware implementation of SBNUC algorithms is difficult, especially on Field Programmable Gate Array (FPGA) platforms. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing these defects. The hardware implementation of the algorithm, based only on an FPGA, has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, reducing both stripe non-uniformity and ripple non-uniformity.
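
    A minimal sketch of the temporal high-pass principle the algorithm builds on; the grayscale-mapping and adaptive-threshold parts that suppress ghosting are not reproduced, and the filter constant is an assumption.

        import numpy as np

        def thp_nuc(frames, alpha=0.05):
            # per-pixel recursive (IIR) temporal low-pass: for a moving
            # scene the low-pass estimate converges to the fixed-pattern
            # offset, so subtracting it removes the FPN while keeping the
            # scene's temporally high-pass content
            lowpass = frames[0].astype(float)
            out = []
            for f in frames:
                f = f.astype(float)
                lowpass += alpha * (f - lowpass)
                out.append(f - lowpass + lowpass.mean())  # restore level
            return out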

  8. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    Directory of Open Access Journals (Sweden)

    Gonglin Yuan

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.

  9. Catheter Calibration Using Template Matching Line Interpolation Algorithm

    National Research Council Canada - National Science Library

    Nagy, L

    2001-01-01

    ..., such as image resolution, type of calibration, algorithm used for contour detection, size of the FOV, and other parameters of the image. The studied calibration method is the one using catheter size

  10. Executable Pseudocode for Graph Algorithms

    NARCIS (Netherlands)

    B. Ó Nualláin (Breanndán)

    2015-01-01

    Algorithms are written in pseudocode. However, the implementation of an algorithm in a conventional, imperative programming language can often be scattered over hundreds of lines of code, thus obscuring its essence. This can lead to difficulties in understanding or verifying the

  11. GARLIC - A general purpose atmospheric radiative transfer line-by-line infrared-microwave code: Implementation and evaluation

    Science.gov (United States)

    Schreier, Franz; Gimeno García, Sebastián; Hedelt, Pascal; Hess, Michael; Mendrok, Jana; Vasquez, Mayte; Xu, Jian

    2014-04-01

    A suite of programs for high resolution infrared-microwave atmospheric radiative transfer modeling has been developed with emphasis on efficient and reliable numerical algorithms and a modular approach appropriate for simulation and/or retrieval in a variety of applications. The Generic Atmospheric Radiation Line-by-line Infrared Code - GARLIC - is suitable for arbitrary observation geometry, instrumental field-of-view, and line shape. The core of GARLIC's subroutines constitutes the basis of forward models used to implement inversion codes to retrieve atmospheric state parameters from limb and nadir sounding instruments. This paper briefly introduces the physical and mathematical basics of GARLIC and its descendants and continues with an in-depth presentation of various implementation aspects: An optimized Voigt function algorithm combined with a two-grid approach is used to accelerate the line-by-line modeling of molecular cross sections; various quadrature methods are implemented to evaluate the Schwarzschild and Beer integrals; and Jacobians, i.e. derivatives with respect to the unknowns of the atmospheric inverse problem, are implemented by means of automatic differentiation. For an assessment of GARLIC's performance, a comparison of the quadrature methods for solution of the path integral is provided. Verification and validation are demonstrated using intercomparisons with other line-by-line codes and comparisons of synthetic spectra with spectra observed on Earth and from Venus.
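
    The core numerical kernel is easy to illustrate. Below is a standard Voigt evaluation via the Faddeeva function; GARLIC's optimized two-grid Voigt algorithm is not reproduced, and the wavenumber grid and line list are illustrative assumptions.

        import numpy as np
        from scipy.special import wofz

        def voigt(x, sigma, gamma):
            # Voigt profile via the complex error (Faddeeva) function:
            # V(x) = Re[w(z)] / (sigma * sqrt(2*pi)), z = (x + i*gamma)/(sigma*sqrt(2))
            z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
            return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

        # line-by-line cross section: sum Voigt profiles over spectral lines
        # (GARLIC accelerates this with a two-grid approach; a single fine
        #  grid is used here for clarity)
        nu = np.linspace(2000.0, 2100.0, 10001)       # wavenumbers [1/cm]
        lines = [(2050.0, 1.0, 0.01, 0.05)]           # (center, strength,
        xsec = sum(S * voigt(nu - nu0, sg, gm)        #  sigma, gamma): illustrative
                   for nu0, S, sg, gm in lines)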

  12. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    Science.gov (United States)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays always suffer from undesired ripple fixed-pattern noise (FPN) when observing sky scenes. The ripple FPN seriously affects the imaging quality of a thermal imager, especially for small-target detection and tracking, and it is hard to eliminate with calibration-based techniques or current scene-based non-uniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass non-uniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared data in comparison with several previously published methods. The algorithm not only effectively corrects common FPN such as stripes, but also has a clear advantage over current methods in terms of detail preservation and convergence speed, especially for ripple FPN correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The hardware implementation of the algorithm based on the FPGA has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.

  13. Designing a Method for an Automatic Earthquake Intensities Calculation System Based on Data Mining and On-Line Polls

    Science.gov (United States)

    Liendo Sanchez, A. K.; Rojas, R.

    2013-05-01

    Seismic intensities can be calculated using the Modified Mercalli Intensity (MMI) scale or the European Macroseismic Scale (EMS-98), among others, which are based on a series of qualitative aspects related to a group of subjective factors describing human perception, effects on nature or objects, and structural damage due to the occurrence of an earthquake. On-line polls allow experts to get an overview of the consequences of an earthquake without going to the affected locations. However, this can be hard work if the polls are not properly automated. Taking into account that the answers given to these polls are subjective and that a number of them have already been classified for past earthquakes, it is possible to use data mining techniques to automate this process and obtain preliminary results based on the on-line polls. To achieve this goal, a predictive model has been used, built on a supervised learning technique, the decision tree algorithm, and a group of polls based on the MMI and EMS-98 scales. It summarizes the most important questions of the poll and recursively divides the instance space corresponding to each question (nodes), while each node splits the space depending on the possible answers. The implementation was done with Weka, a collection of machine learning algorithms for data mining tasks, using the J48 algorithm, an implementation of the C4.5 algorithm for decision tree models. By doing this, it was possible to obtain a preliminary model able to identify up to 4 different seismic intensities, with 73% of polls correctly classified. The error obtained is still rather high; therefore, we will update the on-line poll in order to improve the results, basing it on just one scale, for instance the MMI. Furthermore, the integration of this automatic seismic intensity methodology, with a low error probability, with a basic georeferencing system will allow the generation of preliminary isoseismal maps.
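
    A minimal sketch of the classification step, with scikit-learn's CART standing in for Weka's J48 (C4.5); the encoded poll answers and intensity labels below are purely illustrative placeholders.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        # hypothetical encoded polls: each row is one poll, each column one
        # question (answers mapped to small integers); labels are intensities
        X = np.array([[0, 1, 0], [1, 1, 0], [1, 2, 1], [2, 2, 2],
                      [0, 0, 0], [2, 1, 2], [1, 0, 1], [2, 2, 1]])
        y = np.array([2, 3, 4, 5, 2, 5, 3, 4])       # illustrative MMI classes

        # the tree recursively splits the instance space on poll questions,
        # exactly the node/answer partitioning the abstract describes
        clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
        print(clf.predict([[1, 2, 2]]))              # intensity for a new poll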

  14. Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm

    Institute of Scientific and Technical Information of China (English)

    Haidong Xu; Mingyan Jiang; Kun Xu

    2015-01-01

    The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use social information and lacks knowledge of the problem structure, which leads to insufficiency in both convergence speed and searching precision. The Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA, called the Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses the information to help the artificial bees search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster with greater precision compared with the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.

  15. Research of grasping algorithm based on scara industrial robot

    Science.gov (United States)

    Peng, Tao; Zuo, Ping; Yang, Hai

    2018-04-01

    As the tobacco industry grows and faces competition from international tobacco giants, efficient logistics service is one of the key factors. Completing tobacco sorting tasks efficiently and economically is the goal of tobacco sorting and optimization research. Current cigarette distribution systems use a single line to carry out single-brand sorting tasks; this article adopts a single line to realize cigarette sorting tasks for different brands. Using a special grasping algorithm for the SCARA robot for sorting and packaging, the optimization scheme significantly improves the performance indicators of the cigarette sorting system, saving labor and clearly improving production efficiency.

  16. On Line Segment Length and Mapping 4-regular Grid Structures in Network Infrastructures

    DEFF Research Database (Denmark)

    Riaz, Muhammad Tahir; Nielsen, Rasmus Hjorth; Pedersen, Jens Myrup

    2006-01-01

    The paper focuses on mapping the road network into 4-regular grid structures. A mapping algorithm is proposed. To model the road network, GIS data have been used. The Geographic Information System (GIS) data for the road network are composed of line segments of different lengths......

  17. Proceedings of the 6th Computer Science On-line Conference 2017

    CERN Document Server

    Senkerik, Roman; Oplatkova, Zuzana; Prokopova, Zdenka; Silhavy, Petr

    2017-01-01

    This book presents new methods and approaches to real-world problems as well as exploratory research that describes novel artificial intelligence applications, including deep learning, neural networks and hybrid algorithms. This book constitutes the refereed proceedings of the Artificial Intelligence Trends in Intelligent Systems Section of the 6th Computer Science On-line Conference 2017 (CSOC 2017), held in April 2017.

  18. Implementation of the LandTrendr Algorithm on Google Earth Engine

    Directory of Open Access Journals (Sweden)

    Robert E Kennedy

    2018-05-01

    The LandTrendr (LT) algorithm has been used widely for analysis of change in Landsat spectral time series data, but requires significant pre-processing, data management, and computational resources, and has been accessible to the community only in a proprietary programming language (IDL). Here, we introduce LT for the Google Earth Engine (GEE) platform. The GEE platform simplifies the pre-processing steps, allowing focus on the translation of the core temporal segmentation algorithm. Temporal segmentation involves a series of repeated random-access calls to each pixel's time series, resulting in a set of breakpoints (“vertices”) that bound straight-line segments. The translation of the algorithm into GEE included both transliteration and code analysis, resulting in improvements and logic-error fixes. At six study areas representing diverse land cover types across the U.S., we conducted a direct comparison of the new LT-GEE code against the heritage code (LT-IDL). The algorithms agree in most cases, and where disagreements occur, they are largely attributable to logic-error fixes in the code translation process. The practical impact of these changes is minimal, as shown by an example of forest disturbance mapping. We conclude that the LT-GEE algorithm represents a faithful translation of the LT code onto a platform easily accessible to the broader user community.

  19. Asymmetric Bimodal Exponential Power Distribution on the Real Line

    Directory of Open Access Journals (Sweden)

    Mehmet Niyazi Çankaya

    2018-01-01

    The asymmetric bimodal exponential power (ABEP) distribution is an extension of the generalized gamma distribution to the real line, obtained by adding two parameters that fit the shape of peakedness in bimodality on the real line. For special values of the peakedness parameters, the distribution is a combination of half-Laplace and half-normal distributions on the real line. The distribution has two parameters fitting the height of bimodality, so the capacity for bimodality is enhanced by these parameters. A skewness parameter is added to model asymmetry in data. The location-scale form of this distribution is proposed. The Fisher information matrix of the parameters of the ABEP distribution is obtained explicitly, and properties of the ABEP distribution are examined. Real data examples are given to illustrate the modelling capacity of the ABEP distribution. Artificial data replicated from maximum likelihood estimates of the parameters of the ABEP and other distributions that provide an artificial data generation procedure are used to test the similarity with the real data. A brief simulation study is also presented.

  20. A congestion line flow control in deregulated power system

    Directory of Open Access Journals (Sweden)

    Venkatarajan Shanmuga Sundaram

    2011-01-01

    Under open access, market-driven transactions have become the new independent decision variables defining the behavior of the power system. The possibility of transmission lines becoming overloaded is relatively higher under deregulated operation, because different parts of the system are owned by separate companies and in part operated under varying service charges. This paper discusses a two-tier algorithm for correcting line overloads in conjunction with conventional power-flow methods. The method uses line-flow sensitivities, which are computed by the Fast Decoupled Power-flow algorithm, and can be adapted for on-line implementation.

  1. Evaluation of three coding schemes designed for improved data communication

    Science.gov (United States)

    Snelsire, R. W.

    1974-01-01

    Three coding schemes designed for improved data communication are evaluated. Four block codes are evaluated relative to a quality function, which depends on both the amount of data rejected and the error rate. The Viterbi maximum-likelihood decoding algorithm is reviewed as the decoding procedure. The evaluation is obtained by simulating the system on a digital computer. Short-constraint-length, rate-1/2 quick-look codes are studied, and their performance is compared to that of general nonsystematic codes.
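
    Since the evaluation centers on Viterbi maximum-likelihood decoding, here is a self-contained sketch of a hard-decision Viterbi decoder for a standard rate-1/2, constraint-length-3 convolutional code with generators (7,5) octal; the specific codes studied in the report are not reproduced.

        # rate-1/2, constraint-length K=3 code, generators (7,5) octal
        G, K = (0b111, 0b101), 3
        N_STATES = 1 << (K - 1)              # state = last K-1 input bits

        def parity(v): return bin(v).count("1") & 1

        def encode(bits):
            state, out = 0, []
            for b in bits:
                reg = (b << (K - 1)) | state # new bit + register contents
                out += [parity(reg & g) for g in G]
                state = reg >> 1
            return out

        def viterbi_decode(rx):
            INF = float("inf")
            metric = [0.0] + [INF] * (N_STATES - 1)  # encoder starts in 0
            paths = [[] for _ in range(N_STATES)]
            for i in range(0, len(rx), 2):
                nm, npaths = [INF] * N_STATES, [None] * N_STATES
                for s in range(N_STATES):
                    if metric[s] == INF:
                        continue
                    for b in (0, 1):                 # extend each survivor
                        reg = (b << (K - 1)) | s
                        ns = reg >> 1
                        m = metric[s] + sum(parity(reg & g) != r
                                            for g, r in zip(G, rx[i:i + 2]))
                        if m < nm[ns]:               # keep best path into ns
                            nm[ns], npaths[ns] = m, paths[s] + [b]
                metric, paths = nm, npaths
            return paths[min(range(N_STATES), key=metric.__getitem__)]

        assert viterbi_decode(encode([1, 0, 1, 1, 0])) == [1, 0, 1, 1, 0]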

  2. Engineering local optimality in quantum Monte Carlo algorithms

    Science.gov (United States)

    Pollet, Lode; Van Houcke, Kris; Rombouts, Stefan M. A.

    2007-08-01

    Quantum Monte Carlo algorithms based on a world-line representation, such as the worm algorithm and the directed loop algorithm, are among the most powerful numerical techniques for the simulation of non-frustrated spin models and of bosonic models. Both algorithms work in the grand-canonical ensemble and can have a winding number larger than zero. However, they retain a lot of intrinsic degrees of freedom which can be used to optimize the algorithm. Guided by the rigorous statements on the globally optimal form of Markov chain Monte Carlo simulations, we devise a locally optimal formulation of the worm algorithm while incorporating ideas from the directed loop algorithm. We provide numerical examples for the soft-core Bose-Hubbard model and various spin-S models.

  3. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    Science.gov (United States)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing both efficient means to stage and process an input data set, to override static calibration coefficient look-up-tables (LUT) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm are automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  4. Overlapping communities detection based on spectral analysis of line graphs

    Science.gov (United States)

    Gui, Chun; Zhang, Ruisheng; Hu, Rongjing; Huang, Guoming; Wei, Jiaxuan

    2018-05-01

    Communities in networks often overlap, with one vertex belonging to several clusters. Meanwhile, many networks show hierarchical structure, such that communities are recursively grouped into a hierarchical organization. In order to obtain overlapping communities from a global hierarchy of vertices, a new algorithm (named SAoLG) is proposed to build the hierarchical organization while detecting the overlap of community structure. SAoLG applies spectral analysis to line graphs to unify the overlap and hierarchical structure of the communities. To avoid the limitations of absolute distances such as the Euclidean distance, SAoLG employs the angular distance to compute the similarity between vertices. Furthermore, we make a small improvement to partition density to evaluate the quality of community structure, and use it to obtain more reasonable and sensible community numbers. The proposed SAoLG algorithm achieves a balance between overlap and hierarchy by applying spectral analysis to edge community detection. The experimental results on one standard network and six real-world networks show that SAoLG achieves higher modularity and more reasonable community numbers than Ahn's algorithm, the classical CPM and the GN algorithm.

  5. Contact-impact algorithms on parallel computers

    International Nuclear Information System (INIS)

    Zhong Zhihua; Nilsson, Larsgunnar

    1994-01-01

    Contact-impact algorithms on parallel computers are discussed within the context of explicit finite element analysis. The algorithms concerned include a contact searching algorithm and an algorithm for contact force calculations. The contact searching algorithm is based on the territory concept of the general HITA algorithm. However, no distinction is made between different contact bodies, or between different contact surfaces. All contact segments from contact boundaries are taken as a single set. Hierarchy territories and contact territories are expanded. A three-dimensional bucket sort algorithm is used to sort contact nodes. The defence node algorithm is used in the calculation of contact forces. Both the contact searching algorithm and the defence node algorithm are implemented on the connection machine CM-200. The performance of the algorithms is examined under different circumstances, and numerical results are presented. ((orig.))

  6. Design and implementation of channel estimation for low-voltage power line communication systems based on OFDM

    International Nuclear Information System (INIS)

    Zhao Huidong; Hei Yong; Qiao Shushan; Ye Tianchun

    2012-01-01

    An optimized channel estimation algorithm based on a time-spread structure in OFDM low-voltage power line communication (PLC) systems is proposed to achieve a lower bit error rate (BER). This paper optimizes the maximum multi-path delay parameter of the linear minimum mean square error (LMMSE) algorithm in time-domain spread OFDM systems. Simulation results indicate that the BER of the improved method is lower than that of the conventional LMMSE algorithm, especially when the signal-to-noise ratio (SNR) is below 0 dB. Both the LMMSE algorithm and the proposed algorithm are implemented and fabricated in CSMC 0.18 μm technology. This paper analyzes and compares the hardware complexity and performance of the two algorithms. Measurements indicate that the proposed channel estimator has better performance than the conventional estimator.

  7. Enhancement of tracking performance in electro-optical system based on servo control algorithm

    Science.gov (United States)

    Choi, WooJin; Kim, SungSu; Jung, DaeYoon; Seo, HyoungKyu

    2017-10-01

    Modern electro-optical surveillance and reconnaissance systems require tracking capability to obtain exact images of a target and to accurately direct the line of sight to a target that is moving or still. This leads to a tracking system composed of an image-based tracking algorithm and a servo control algorithm. In this study, we focus on the servo control function, aiming to minimize overshoot in the tracking motion so that the target is not missed. The scheme is to limit the acceleration and velocity parameters in the tracking controller, depending on the target state information in the image. We implement the proposed techniques by creating a system model of a DIRCM, simulate the same environment, and validate the performance on the actual equipment.

  8. A quasi-Newton algorithm for large-scale nonlinear equations

    Directory of Open Access Journals (Sweden)

    Linghua Huang

    2017-02-01

    In this paper, the algorithm for large-scale nonlinear equations is designed by the following steps: (i) a conjugate gradient (CG) algorithm is designed as a sub-algorithm to obtain the initial points of the main algorithm, where the sub-algorithm's initial point does not have any restrictions; (ii) a quasi-Newton algorithm with the initial points given by the sub-algorithm is defined as the main algorithm, where a new nonmonotone line search technique is presented to get the step length αk. The given nonmonotone line search technique avoids computing the Jacobian matrix. The global convergence and the (1+q)-order convergence rate of the main algorithm are established under suitable conditions. Numerical results show that the proposed method is competitive with a similar method for large-scale problems.
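
    A sketch of a derivative-free nonmonotone line search of the general kind the abstract describes, applied to the merit function ||F(x)||^2; this is not the paper's exact rule, and the constants are illustrative.

        import numpy as np

        def nonmonotone_search(F, x, d, hist, delta=1e-4, rho=0.5,
                               M=5, max_back=30):
            # accept alpha when the new merit lies below the max of the
            # last M accepted merits minus a forcing term; no Jacobian
            # evaluation is needed anywhere in the test
            fmax = max(hist[-M:])
            alpha = 1.0
            for _ in range(max_back):
                fnew = float(np.linalg.norm(F(x + alpha * d))) ** 2
                if fnew <= fmax - delta * alpha ** 2 * float(np.dot(d, d)):
                    hist.append(fnew)
                    return alpha
                alpha *= rho                 # backtrack
            return alpha

        # usage: seed the history with the initial merit value, e.g.
        #   hist = [float(np.linalg.norm(F(x0))) ** 2]
        #   alpha = nonmonotone_search(F, x0, d0, hist)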

  9. Logarithmic solution to the line-polygon intersection problem

    International Nuclear Information System (INIS)

    Siddon, R.L.; Barth, N.H.

    1987-01-01

    An algorithmic solution for a special case of the line-polygon intersection problem has been developed. The special case involves repeated solution of the problem where one point on the line is held fixed and the other is allowed to vary. In addition, the fixed point on the line must lie outside the rectangle defined by the extrema of the polygon and the varying point. In radiotherapy applications, the fixed point corresponds to the source of radiation, whereas the varying points refer to the grid of multiple calculation points. For smooth contours of 100-200 vertices, it is found that the new algorithm results in a CPU savings of approximately a factor of 3-5. 3 refs.; 4 figs

  10. A simulation-based approach for solving assembly line balancing problem

    Science.gov (United States)

    Wu, Xiaoyu

    2017-09-01

    Assembly line balancing problem is directly related to the production efficiency, since the last century, the problem of assembly line balancing was discussed and still a lot of people are studying on this topic. In this paper, the problem of assembly line is studied by establishing the mathematical model and simulation. Firstly, the model of determing the smallest production beat under certain work station number is anysized. Based on this model, the exponential smoothing approach is applied to improve the the algorithm efficiency. After the above basic work, the gas stirling engine assembly line balancing problem is discussed as a case study. Both two algorithms are implemented using the Lingo programming environment and the simulation results demonstrate the validity of the new methods.

  11. A reconstruction algorithm for coherent scatter computed tomography based on filtered back-projection

    International Nuclear Information System (INIS)

    Stevendaal, U. van; Schlomka, J.-P.; Harding, A.; Grass, M.

    2003-01-01

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter form factor of the investigated object. Reconstruction from coherently scattered x-rays is commonly done using algebraic reconstruction techniques (ART). In this paper, we propose an alternative approach based on filtered back-projection. For the first time, a three-dimensional (3D) filtered back-projection technique using curved 3D back-projection lines is applied to two-dimensional coherent scatter projection data. The proposed algorithm is tested with simulated projection data as well as with projection data acquired with a demonstrator setup similar to a multi-line CT scanner geometry. While yielding image quality comparable to ART reconstruction, the modified 3D filtered back-projection algorithm is about two orders of magnitude faster. In contrast to iterative reconstruction schemes, it has the advantage that subfield-of-view reconstruction becomes feasible. This allows a selective reconstruction of the coherent-scatter form factor for a region of interest. The proposed modified 3D filtered back-projection algorithm is a powerful reconstruction technique to be implemented in a CSCT scanning system. This method gives coherent scatter CT the potential of becoming a competitive modality for medical imaging or nondestructive testing.

  12. On-line optimal control improves gas processing

    International Nuclear Information System (INIS)

    Berkowitz, P.N.; Papadopoulos, M.N.

    1992-01-01

    This paper reports that the authors' companies jointly funded the first phase of a gas processing liquids optimization project with the specific purposes to: improve the return from processing natural gas liquids; develop sets of control algorithms; make available a low-cost solution suitable for small to medium-sized gas processing plants; and test and demonstrate the feasibility of line control. The ARCO Willard CO2 gas recovery processing plant was chosen as the initial test site to demonstrate the application of multivariable on-line optimal control. One objective of this project is to support an R&D effort to provide a standardized solution for the various types of gas processing plants in the U.S. Processes involved in these gas plants include cryogenic separations, demethanization, lean oil absorption, fractionation and gas treating. The proposed solutions had to be simple yet comprehensive enough to allow an operator to maintain product specifications while operating over a wide range of gas input flow and composition. It had to be a supervisory system that remained on-line more than 95% of the time, achieving reduced plant operating variability and improved variable-cost control. It took more than a year to study various gas processes and to develop a control approach before a real application was finally exercised. An initial process for C2 and CO2 recoveries was chosen.

  13. Deconvolution of 2D coincident Doppler broadening spectroscopy using the Richardson-Lucy algorithm

    International Nuclear Information System (INIS)

    Zhang, J.D.; Zhou, T.J.; Cheung, C.K.; Beling, C.D.; Fung, S.; Ng, M.K.

    2006-01-01

    Coincident Doppler Broadening Spectroscopy (CDBS) measurements are popular in positron solid-state studies of materials. By utilizing the instrumental resolution function obtained from a gamma line close in energy to the 511 keV annihilation line, it is possible to significantly enhance the quality of CDBS spectra using deconvolution algorithms. In this paper, we compare two algorithms, namely the Non-Negative Least Squares (NNLS) regularized method and the Richardson-Lucy (RL) algorithm. The latter, which is based on the method of maximum likelihood, is found to give superior results to the regularized least-squares algorithm while requiring significantly less computer processing time.
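
    For reference, a one-dimensional sketch of the Richardson-Lucy iteration (CDBS spectra are two-dimensional, but the update rule is the same); the iteration count and initialization are illustrative.

        import numpy as np

        def richardson_lucy(d, psf, n_iter=50):
            # maximum-likelihood deconvolution for Poisson data:
            #   u <- u * ( (d / (u conv psf)) conv psf_flipped )
            u = np.full_like(d, d.mean(), dtype=float)   # flat start
            psf_flip = psf[::-1]
            for _ in range(n_iter):
                blurred = np.convolve(u, psf, mode="same")
                ratio = d / np.maximum(blurred, 1e-12)   # avoid divide-by-0
                u *= np.convolve(ratio, psf_flip, mode="same")
            return u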

  14. Evaluation of the wavelet image two-line coder

    DEFF Research Database (Denmark)

    Rein, Stephan Alexander; Fitzek, Frank Hanns Paul; Gühmann, Clemens

    2015-01-01

    This paper introduces the wavelet image two-line (Wi2l) coding algorithm for low-complexity compression of images. The algorithm recursively encodes an image backwards, reading only two lines of a wavelet subband, which are read in blocks of 512 bytes from flash memory. It thus only requires very ...

  15. Under-reported data analysis with INAR-hidden Markov chains.

    Science.gov (United States)

    Fernández-Fontelo, Amanda; Cabaña, Alejandra; Puig, Pedro; Moriña, David

    2016-11-20

    In this work, we deal with correlated under-reported data through INAR(1)-hidden Markov chain models. These models are very flexible and can be identified through their autocorrelation function, which has a very simple form. A naïve method of parameter estimation is proposed, jointly with the maximum likelihood method based on a revised version of the forward algorithm. The most probable unobserved time series is reconstructed by means of the Viterbi algorithm. Several examples of application in the field of public health are discussed, illustrating the utility of the models. Copyright © 2016 John Wiley & Sons, Ltd.
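
    The reconstruction step is the classical Viterbi recursion; below is a generic log-domain sketch. The INAR(1)-specific emission and transition terms are not reproduced here and would be supplied via obs_loglik and log_A.

        import numpy as np

        def viterbi(obs_loglik, log_A, log_pi):
            # obs_loglik[t, j] = log p(y_t | hidden state j)
            # log_A[i, j]      = log p(state j at t | state i at t-1)
            T, S = obs_loglik.shape
            delta = log_pi + obs_loglik[0]
            psi = np.zeros((T, S), dtype=int)
            for t in range(1, T):
                trans = delta[:, None] + log_A       # score of i -> j moves
                psi[t] = trans.argmax(axis=0)        # best predecessor of j
                delta = trans.max(axis=0) + obs_loglik[t]
            path = [int(delta.argmax())]             # backtrack from the end
            for t in range(T - 1, 0, -1):
                path.append(int(psi[t][path[-1]]))
            return path[::-1]                        # most probable sequence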

  16. Underwater tracking of a moving dipole source using an artificial lateral line: algorithm and experimental validation with ionic polymer–metal composite flow sensors

    International Nuclear Information System (INIS)

    Abdulsadda, Ahmad T; Tan, Xiaobo

    2013-01-01

    Motivated by the lateral line system of fish, arrays of flow sensors have been proposed as a new sensing modality for underwater robots. Existing studies on such artificial lateral lines (ALLs) have been mostly focused on the localization of a fixed underwater vibrating sphere (dipole source). In this paper we examine the problem of tracking a moving dipole source using an ALL system. Based on an analytical model for the moving dipole-generated flow field, we formulate a nonlinear estimation problem that aims to minimize the error between the measured and model-predicted magnitudes of flow velocities at the sensor sites, which is subsequently solved with the Gauss–Newton scheme. A sliding discrete Fourier transform (SDFT) algorithm is proposed to efficiently compute the evolving signal magnitudes based on the flow velocity measurements. Simulation indicates that it is adequate and more computationally efficient to use only the signal magnitudes corresponding to the dipole vibration frequency. Finally, experiments conducted with an artificial lateral line consisting of six ionic polymer–metal composite (IPMC) flow sensors demonstrate that the proposed scheme is able to simultaneously locate the moving dipole and estimate its vibration amplitude and traveling speed with small errors. (paper)
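
    A minimal sketch of the sliding DFT recurrence used to track the evolving signal magnitude at the dipole's vibration frequency; bin k and window length N are assumptions set by the experiment.

        import cmath

        def sliding_dft(samples, k, N):
            # sliding DFT for bin k of an N-point window: each new sample
            # updates the bin in O(1) instead of recomputing a full DFT:
            #   X_k(n) = (X_k(n-1) + x(n) - x(n-N)) * exp(j*2*pi*k/N)
            w = cmath.exp(2j * cmath.pi * k / N)
            window, X = [0.0] * N, 0.0
            for n, x in enumerate(samples):
                oldest = window[n % N]
                window[n % N] = x
                X = (X + x - oldest) * w     # remove oldest, add newest, rotate
                yield abs(X)                 # magnitude (valid after N samples)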

  17. A novel symbiotic organisms search algorithm for congestion management in deregulated environment

    Science.gov (United States)

    Verma, Sumit; Saha, Subhodip; Mukherjee, V.

    2017-01-01

    In today's competitive electricity market, managing transmission congestion in deregulated power systems has created challenges for independent system operators to operate the transmission lines reliably within their limits. This paper proposes a new meta-heuristic algorithm, called the symbiotic organisms search (SOS) algorithm, for the congestion management (CM) problem in pool-based electricity markets via real power rescheduling of generators. Inspired by the interactions among organisms in an ecosystem, the SOS algorithm is a recent population-based algorithm which, unlike other algorithms, does not require any algorithm-specific control parameters. Various security constraints such as load bus voltages and line loadings are taken into account while dealing with the CM problem. In this paper, the proposed SOS algorithm is applied to the modified IEEE 30- and 57-bus test power systems for the solution of the CM problem. The results thus obtained are compared to those reported in the recent state-of-the-art literature. The efficacy of the proposed SOS algorithm in obtaining higher-quality solutions is also established.

  18. A new on-line leakage current monitoring system of ZnO surge arresters

    International Nuclear Information System (INIS)

    Lee, Bok-Hee; Kang, Sung-Man

    2005-01-01

    This paper presents a new on-line leakage current monitoring system for zinc oxide (ZnO) surge arresters. To effectively diagnose the deterioration of ZnO surge arresters, a new algorithm and an on-line leakage current detection device, which uses the time-delay addition method for discriminating the resistive and capacitive currents, were developed for use in aging tests and durability evaluation of ZnO arrester blocks. As a computer-based measurement system for the resistive leakage current, the on-line monitoring device can accurately detect the leakage currents flowing through ZnO surge arresters under power-frequency AC applied voltages. The proposed on-line leakage current monitoring device for ZnO surge arresters is more sensitive and gives a more linear response than existing devices based on detection of the third-harmonic leakage currents. Therefore, the proposed leakage current monitoring device can be useful for predicting defects and performance deterioration of ZnO surge arresters in power system applications.

  19. Interactive animation of fault-tolerant parallel algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Apgar, S.W.

    1992-02-01

    Animation of algorithms makes understanding them intuitively easier. This paper describes the software tool Raft (Robust Animator of Fault Tolerant Algorithms). The Raft system allows the user to animate a number of parallel algorithms which achieve fault-tolerant execution. In particular, we use it to illustrate the key Write-All problem. It has an extensive user interface which allows a choice of the number of processors, the number of elements in the Write-All array, and the adversary to control the processor failures. The novelty of the system is that the interface allows the user to create new on-line adversaries as the algorithm executes.

  20. On-line identification of hybrid systems using an adaptive growing and pruning RBF neural network

    DEFF Research Database (Denmark)

    Alizadeh, Tohid

    2008-01-01

    This paper introduces an adaptive growing and pruning radial basis function (GAP-RBF) neural network for on-line identification of hybrid systems. The main idea is to identify a global nonlinear model that can predict the continuous outputs of hybrid systems. In the proposed approach, the GAP-RBF neural network uses a modified unscented Kalman filter (UKF) with a forgetting factor scheme as the required on-line learning algorithm. The effectiveness of the resulting identification approach is tested and evaluated on a simulated benchmark hybrid system.

  1. A novel orthoimage mosaic method using the weighted A* algorithm for UAV imagery

    Science.gov (United States)

    Zheng, Maoteng; Zhou, Shunping; Xiong, Xiaodong; Zhu, Junfeng

    2017-12-01

    A weighted A* algorithm is proposed to select optimal seam-lines in orthoimage mosaicking for UAV (Unmanned Aerial Vehicle) imagery. The whole workflow includes four steps: the initial seam-line network is first generated by the standard Voronoi diagram algorithm; an edge diagram is then detected based on DSM (Digital Surface Model) data; the vertices (conjunction nodes) of the initial network are relocated, since some of them lie on high objects (buildings, trees and other artificial structures); and the initial seam-lines are finally refined using the weighted A* algorithm based on the edge diagram and the relocated vertices. The method was tested with two real UAV datasets. Preliminary results show that the proposed method produces acceptable mosaic images in both urban and mountainous areas, and outperforms state-of-the-art methods on these datasets.
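
    Weighted A* biases the usual evaluation f = g + w*h with a weight w > 1 on the heuristic, trading strict optimality for speed. The grid sketch below illustrates only the search itself; the paper's seam-line cost model (edge diagram, DSM) is not reproduced, and the toy cost grid stands in for high objects.

    ```python
    import heapq

    def weighted_astar(grid, start, goal, w=1.5):
        """Weighted A* on a 4-connected grid of per-cell step costs."""
        rows, cols = len(grid), len(grid[0])
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
        open_heap = [(w * h(start), 0.0, start)]
        g_cost = {start: 0.0}
        parent = {}
        while open_heap:
            _, g, cur = heapq.heappop(open_heap)
            if cur == goal:
                path = [cur]
                while cur in parent:
                    cur = parent[cur]
                    path.append(cur)
                return path[::-1], g
            if g > g_cost.get(cur, float('inf')):
                continue                                          # stale heap entry
            r, c = cur
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols:
                    ng = g + grid[nr][nc]
                    if ng < g_cost.get((nr, nc), float('inf')):
                        g_cost[(nr, nc)] = ng
                        parent[(nr, nc)] = cur
                        heapq.heappush(open_heap, (ng + w * h((nr, nc)), ng, (nr, nc)))
        return None, float('inf')

    # Toy cost grid: high values play the role of buildings to route around.
    grid = [[1, 1, 1, 1],
            [1, 9, 9, 1],
            [1, 9, 1, 1],
            [1, 1, 1, 1]]
    print(weighted_astar(grid, (0, 0), (3, 3)))
    ```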

  2. Animated construction of line drawings

    KAUST Repository

    Fu, Hongbo; Zhou, Shizhe; Liu, Ligang; Mitra, Niloy J.

    2011-01-01

    The system produces plausible animated constructions of input line drawings, with little or no user intervention. We test our algorithm on a range of input sketches with varying degrees of complexity and structure, and evaluate the results via a user study.

  3. A joint recovery scheme for carrier frequency offset and carrier phase noise using extended Kalman filter

    Science.gov (United States)

    Li, Linqian; Feng, Yiqiao; Zhang, Wenbo; Cui, Nan; Xu, Hengying; Tang, Xianfeng; Xi, Lixia; Zhang, Xiaoguang

    2017-07-01

    A joint carrier recovery scheme for polarization division multiplexing (PDM) coherent optical transmission systems is proposed and demonstrated, in which the extended Kalman filter (EKF) is exploited to estimate and equalize the carrier frequency offset (CFO) and carrier phase noise (CPN) simultaneously. The proposed method is implemented and verified in the PDM-QPSK system and the PDM-16QAM system, in comparison with the conventional improved Mth-power (IMP) algorithm for CFO estimation and the blind phase search (BPS) or Viterbi-Viterbi (V-V) algorithm for CPN recovery. It is demonstrated that the proposed scheme shows high CFO estimation accuracy, with an absolute mean estimation error below 1.5 MHz. Meanwhile, the proposed method has a CFO tolerance of ±3 GHz for the PDM-QPSK system and ±0.9 GHz for the PDM-16QAM system. Compared with IMP/BPS and IMP/V-V, the proposed scheme can enhance the linewidth symbol duration product from 3 × 10⁻⁴ (IMP/BPS) and 2 × 10⁻⁴ (IMP/V-V) to 1 × 10⁻³ for PDM-QPSK, and from 1 × 10⁻⁴ (IMP/BPS) to 3 × 10⁻⁴ for PDM-16QAM, respectively, at the 1 dB optical signal-to-noise ratio (OSNR) penalty. The proposed Kalman filter also shows fast convergence, requiring only 100 symbols, and much lower computational complexity.
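
    The paper's EKF jointly tracks carrier phase and frequency offset; since the exact state-space model is not given in the record, the sketch below is a simplified toy version: a two-state EKF (phase and per-symbol frequency offset) observing the I/Q samples of an unmodulated carrier. The noise levels and measurement model are illustrative assumptions.

    ```python
    import numpy as np

    def ekf_phase_freq(y, q=1e-6, r=1e-2):
        """Track [theta, omega] from noisy samples y[n] ~ exp(j*theta_n).

        State model: theta_{n+1} = theta_n + omega_n, omega random walk.
        Measurement: [cos(theta), sin(theta)] + noise.
        """
        Fm = np.array([[1.0, 1.0], [0.0, 1.0]])        # state transition
        Q = q * np.eye(2)                              # process noise covariance
        R = r * np.eye(2)                              # measurement noise covariance
        x = np.zeros(2)                                # [phase, freq offset]
        P = np.eye(2)
        est = []
        for z in y:
            # Predict.
            x = Fm @ x
            P = Fm @ P @ Fm.T + Q
            # Linearize h(x) = [cos(theta), sin(theta)] around the prediction.
            h = np.array([np.cos(x[0]), np.sin(x[0])])
            H = np.array([[-np.sin(x[0]), 0.0], [np.cos(x[0]), 0.0]])
            # Update.
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([z.real, z.imag]) - h)
            P = (np.eye(2) - K @ H) @ P
            est.append(x.copy())
        return np.array(est)

    # Carrier with a small frequency offset plus noise.
    rng = np.random.default_rng(1)
    n = np.arange(2000)
    y = np.exp(1j * 0.01 * n) + 0.1 * (rng.standard_normal(2000)
                                       + 1j * rng.standard_normal(2000))
    print(ekf_phase_freq(y)[-1])   # final [phase, omega] estimate; omega ~ 0.01
    ```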

  4. New management algorithms in multiple sclerosis

    DEFF Research Database (Denmark)

    Sorensen, Per Soelberg

    2014-01-01

    PURPOSE OF REVIEW: Our current treatment algorithms include only IFN-β and glatiramer as available first-line disease-modifying drugs and natalizumab and fingolimod as second-line therapies. Today, 10 drugs have been approved in Europe and nine in the United States, making the choice of therapy more complex. The purpose of the review has been to work out new management algorithms for treatment of relapsing-remitting multiple sclerosis including new oral therapies and therapeutic monoclonal antibodies. RECENT FINDINGS: Recent large placebo-controlled trials in relapsing-remitting multiple sclerosis...

  5. Geometric Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodari; Zeh, Norbert

    2010-01-01

    We study techniques for obtaining efficient algorithms for geometric problems on private-cache chip multiprocessors. We show how to obtain optimal algorithms for interval stabbing counting, 1-D range counting, weighted 2-D dominance counting, and for computing 3-D maxima, 2-D lower envelopes, and 2-D convex hulls. These results are obtained by analyzing adaptations of either the PEM merge sort algorithm or PRAM algorithms. For the second group of problems—orthogonal line segment intersection reporting, batched range reporting, and related problems—more effort is required. What distinguishes these problems from the ones in the previous group is the variable output size, which requires I/O-efficient load balancing strategies based on the contribution of the individual input elements to the output size. To obtain nearly optimal algorithms for these problems, we introduce a parallel distribution...

  6. Improving integrity of on-line grammage measurement with traceable basic calibration.

    Science.gov (United States)

    Kangasrääsiö, Juha

    2010-07-01

    The automatic control of grammage (basis weight) in paper and board production is based upon on-line grammage measurement. Furthermore, the automatic control of other quality variables, such as moisture, ash content and coat weight, may rely on the grammage measurement. The integrity of Kr-85 based on-line grammage measurement systems was studied by performing basic calibrations with traceably calibrated plastic reference standards. The calibrations were performed according to the EN ISO/IEC 17025 standard, which is a requirement for calibration laboratories. The observed relative measurement errors were 3.3% in first-time calibrations at the 95% confidence level. With the traceable basic calibration method, however, these errors can be reduced to under 0.5%, thus improving the integrity of on-line grammage measurements. A standardised algorithm, based on the experience from the performed calibrations, is also proposed to ease the adjustment of the different grammage measurement systems. The calibration technique can basically be applied to all beta-radiation based grammage measurements. 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Structural damage diagnosis based on on-line recursive stochastic subspace identification

    International Nuclear Information System (INIS)

    Loh, Chin-Hsiung; Weng, Jian-Huang; Liu, Yi-Cheng; Lin, Pei-Yang; Huang, Shieh-Kung

    2011-01-01

    This paper presents a recursive stochastic subspace identification (RSSI) technique for on-line and almost real-time structural damage diagnosis using output-only measurements. Through RSSI the time-varying natural frequencies of a system can be identified. To reduce the computation time of the LQ decomposition in RSSI, the Givens rotation as well as the matrix operation for appending a new data set are derived. The relationship between the size of the Hankel matrix and the data length in each shifting moving window is examined so as to extract the time-varying features of the system without loss of generality and to establish on-line and almost real-time system identification. The result from the RSSI technique can also be applied to structural damage diagnosis. Off-line data-driven stochastic subspace identification was used first to establish the system matrix from the measurements of an undamaged (reference) case. Then the RSSI technique incorporating a Kalman estimator is used to extract the dynamic characteristics of the system through continuous monitoring data. The predicted residual error is defined as a damage feature and, through outlier statistics, provides an indicator of damage. Verification of the proposed identification algorithm is conducted using bridge scouring test data and white-noise response data of a reinforced concrete frame structure.

  8. On line surveillance of large systems: applications to nuclear and chemical plant

    International Nuclear Information System (INIS)

    Zwingelstein, G.

    1978-01-01

    An on-line surveillance method for large-scale and distributed-parameter systems is achieved by comparing in real time the internal physical parameter values to reference values. It is shown that the following steps are necessary: modeling, model validation using dynamic testing, and on-line estimation of parameters. For large-scale systems where only a few outputs are measurable, an estimation algorithm was developed that selects the measurable output giving the minimum variance of the physical parameters. This estimation scheme uses a quasilinearization technique associated with the sensitivity equations and recursive least squares techniques. For large-scale systems of order greater than 100, two versions of the estimation scheme are proposed to decrease the computation time. An application to a nuclear reactor core (state variable model of order 29) is proposed, using real data. For distributed systems the estimation scheme was developed with measurements either at fixed times or at fixed locations. The estimation algorithm selects the set of measurements that gives the minimum variance of the estimates. An application to a liquid-liquid extraction column, modelled by a set of four coupled partial differential equations, demonstrates the efficiency of the method.
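
    Recursive least squares is the core update in the estimation scheme described above. The following is a minimal generic RLS sketch for a linear-in-parameters model with a forgetting factor, not the paper's quasilinearization scheme; the regressors and true parameters are made up.

    ```python
    import numpy as np

    def rls(phi_seq, y_seq, n_params, lam=0.99):
        """Recursive least squares with forgetting factor lam.

        Model: y_t = phi_t . theta + noise, regressors phi_t given.
        """
        theta = np.zeros(n_params)
        P = 1e4 * np.eye(n_params)                  # large initial covariance
        for phi, y in zip(phi_seq, y_seq):
            k = P @ phi / (lam + phi @ P @ phi)     # gain vector
            theta = theta + k * (y - phi @ theta)   # update estimate
            P = (P - np.outer(k, phi @ P)) / lam    # update covariance
        return theta

    # Identify a 2-parameter system y = 1.5*u1 - 0.7*u2 from noisy data.
    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((500, 2))
    Y = Phi @ np.array([1.5, -0.7]) + 0.05 * rng.standard_normal(500)
    print(rls(Phi, Y, n_params=2))
    ```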

  9. Optimizing the Performance of Radionuclide Identification Software in the Hunt for Nuclear Security Threats

    International Nuclear Information System (INIS)

    Fotion, Katherine A.

    2016-01-01

    The Radionuclide Analysis Kit (RNAK), my team's most recent nuclide identification software, is entering the testing phase. A question arises: will removing rare nuclides from the software's library improve its overall performance? An affirmative response indicates fundamental errors in the software's framework, while a negative response confirms the effectiveness of the software's key machine learning algorithms. After thorough testing, I found that the performance of RNAK cannot be improved with the library choice effect, thus verifying the effectiveness of RNAK's algorithms: multiple linear regression, a Bayesian network using the Viterbi algorithm, and branch-and-bound search.

  10. Comparing the Hierarchy of Keywords in On-Line News Portals.

    Science.gov (United States)

    Tibély, Gergely; Sousa-Rodrigues, David; Pollner, Péter; Palla, Gergely

    2016-01-01

    Hierarchical organization is prevalent in networks representing a wide range of systems in nature and society. An important example is given by the tag hierarchies extracted from large on-line data repositories such as scientific publication archives, file sharing portals, blogs, on-line news portals, etc. The tagging of the stored objects with informative keywords in such repositories has become very common, and in most cases the tags on a given item are free words chosen by the authors independently. Therefore, the relations among keywords appearing in an on-line data repository are unknown in general. However, in most cases the topics and concepts described by these keywords are forming a latent hierarchy, with the more general topics and categories at the top, and more specialized ones at the bottom. There are several algorithms available for deducing this hierarchy from the statistical features of the keywords. In the present work we apply a recent, co-occurrence-based tag hierarchy extraction method to sets of keywords obtained from four different on-line news portals. The resulting hierarchies show substantial differences not just in the topics rendered as important (being at the top of the hierarchy) or of less interest (categorized low in the hierarchy), but also in the underlying network structure. This reveals discrepancies between the plausible keyword association frameworks in the studied news portals.

  11. Comparing the Hierarchy of Keywords in On-Line News Portals.

    Directory of Open Access Journals (Sweden)

    Gergely Tibély

    Full Text Available Hierarchical organization is prevalent in networks representing a wide range of systems in nature and society. An important example is given by the tag hierarchies extracted from large on-line data repositories such as scientific publication archives, file sharing portals, blogs, on-line news portals, etc. The tagging of the stored objects with informative keywords in such repositories has become very common, and in most cases the tags on a given item are free words chosen by the authors independently. Therefore, the relations among keywords appearing in an on-line data repository are unknown in general. However, in most cases the topics and concepts described by these keywords are forming a latent hierarchy, with the more general topics and categories at the top, and more specialized ones at the bottom. There are several algorithms available for deducing this hierarchy from the statistical features of the keywords. In the present work we apply a recent, co-occurrence-based tag hierarchy extraction method to sets of keywords obtained from four different on-line news portals. The resulting hierarchies show substantial differences not just in the topics rendered as important (being at the top of the hierarchy) or of less interest (categorized low in the hierarchy), but also in the underlying network structure. This reveals discrepancies between the plausible keyword association frameworks in the studied news portals.

  12. Algorithm for advanced canonical coding of planar chemical structures that considers stereochemical and symmetric information.

    Science.gov (United States)

    Koichi, Shungo; Iwata, Satoru; Uno, Takeaki; Koshino, Hiroyuki; Satoh, Hiroko

    2007-01-01

    We describe a rigorous and fast algorithm for advanced canonical coding of planar chemical structures based on the algorithm of Faulon et al. (J. Chem. Inf. Comput. Sci. 2004, 44, 427-436). Our algorithm works well even for highly symmetric structures; moreover, its advantages include providing a rigorous canonical numbering of atoms with consideration of stereochemistry and recognizing symmetric moieties. The planar structural line notation with the canonical numbering is also fit for use with stereochemical line notation. These capabilities are usable for general purposes in chemical structure coding and are particularly essential for detecting equivalent atoms in NMR studies. The algorithm was implemented in the 13C NMR chemical shift prediction system CAST/CNMR. Applications of the algorithm to several organic compounds demonstrate the practical efficiency of the rigorous coding.

  13. On-line analysis of reactor noise using time-series analysis

    International Nuclear Information System (INIS)

    McGevna, V.G.

    1981-10-01

    A method allowing the use of time-series analysis for on-line noise analysis has been developed. On-line analysis of noise in nuclear power reactors has been limited primarily to spectral analysis and related frequency-domain techniques. Time-series analysis has many distinct advantages over spectral analysis in the automated processing of reactor noise. However, fitting an autoregressive-moving average (ARMA) model to time-series data involves non-linear least squares estimation. Unless a high-speed, general-purpose computer is available, the calculations become too time consuming for on-line applications. To eliminate this problem, a special-purpose algorithm was developed for fitting ARMA models. While it is based on a combination of steepest descent and Taylor series linearization, properties of the ARMA model are used so that the auto- and cross-correlation functions can be used to eliminate the need for estimating derivatives. The number of calculations per iteration varies linearly.
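
    The paper's special-purpose ARMA fitting scheme is not reproduced in the record. As a simpler illustration of fitting a time-series model from correlation functions alone, the sketch below estimates a pure AR(p) model via the Yule-Walker equations; the model order and simulated data are arbitrary.

    ```python
    import numpy as np

    def yule_walker_ar(x, p):
        """Estimate AR(p) coefficients from the sample autocovariance of x."""
        x = np.asarray(x) - np.mean(x)
        n = len(x)
        r = np.array([x[:n - k] @ x[k:] for k in range(p + 1)]) / n   # autocovariances
        Rmat = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
        a = np.linalg.solve(Rmat, r[1:])      # Yule-Walker system R a = r
        sigma2 = r[0] - a @ r[1:]             # innovation variance
        return a, sigma2

    # Simulate an AR(2) process and recover its coefficients.
    rng = np.random.default_rng(0)
    a_true = [0.6, -0.3]
    x = np.zeros(5000)
    for t in range(2, 5000):
        x[t] = a_true[0] * x[t - 1] + a_true[1] * x[t - 2] + rng.standard_normal()
    print(yule_walker_ar(x, p=2))
    ```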

  14. Joint optimization of maintenance, buffers and machines in manufacturing lines

    Science.gov (United States)

    Nahas, Nabil; Nourelfath, Mustapha

    2018-01-01

    This article considers a series manufacturing line composed of several machines separated by intermediate buffers of finite capacity. The goal is to find the optimal number of preventive maintenance actions performed on each machine, the optimal selection of machines and the optimal buffer allocation plan that minimize the total system cost, while providing the desired system throughput level. The mean times between failures of all machines are assumed to increase when applying periodic preventive maintenance. To estimate the production line throughput, a decomposition method is used. The decision variables in the formulated optimal design problem are buffer levels, types of machines and times between preventive maintenance actions. Three heuristic approaches are developed to solve the formulated combinatorial optimization problem. The first heuristic consists of a genetic algorithm, the second is based on the nonlinear threshold accepting metaheuristic and the third is an ant colony system. The proposed heuristics are compared and their efficiency is shown through several numerical examples. It is found that the nonlinear threshold accepting algorithm outperforms the genetic algorithm and ant colony system, while the genetic algorithm provides better results than the ant colony system for longer manufacturing lines.
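
    The nonlinear threshold accepting metaheuristic that performs best in the comparison is a deterministic cousin of simulated annealing: a worse candidate is accepted whenever its cost increase stays below a shrinking threshold. Below is a generic sketch on a toy continuous objective, not the buffer-and-maintenance design problem; all tuning constants are assumptions.

    ```python
    import numpy as np

    def threshold_accepting(f, x0, step=0.5, t0=1.0, decay=0.99, iters=5000, seed=0):
        """Generic threshold accepting for continuous minimization."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        best, fbest = x.copy(), fx
        thresh = t0
        for _ in range(iters):
            cand = x + rng.uniform(-step, step, size=x.shape)   # random neighbor
            fc = f(cand)
            if fc - fx < thresh:            # accept even slightly worse moves
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x.copy(), fx
            thresh *= decay                 # shrink the acceptance threshold
        return best, fbest

    # Toy multimodal objective.
    f = lambda x: float(np.sum(x**2) + 3 * np.sum(1 - np.cos(2 * np.pi * x)))
    print(threshold_accepting(f, x0=[2.0, -2.0]))
    ```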

  15. Next Day Building Load Predictions based on Limited Input Features Using an On-Line Laterally Primed Adaptive Resonance Theory Artificial Neural Network.

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Christian Birk [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Grid Integration Group; Robinson, Matt [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Mechanical Engineering; Yasaei, Yasser [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Electrical and Computer Engineering; Caudell, Thomas [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Electrical and Computer Engineering; Martinez-Ramon, Manel [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Electrical and Computer Engineering; Mammoli, Andrea [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Mechanical Engineering

    2016-07-01

    Optimal integration of thermal energy storage within commercial building applications requires accurate load predictions. Several methods exist that provide an estimate of a building's future needs. Methods include component-based models and data-driven algorithms. This work implemented a previously untested algorithm for this application, called a Laterally Primed Adaptive Resonance Theory (LAPART) artificial neural network (ANN). The LAPART algorithm provided accurate results over a two-month period where minimal historical data and a small number of input types were available. These results are significant, because common practice has often overlooked the implementation of an ANN. ANNs have often been perceived to be too complex and to require large amounts of data to provide accurate results. The LAPART neural network was implemented in an on-line learning manner. On-line learning refers to the continuous updating of training data as time progresses. For this experiment, training began with a single day and grew to two months of data. This approach provides a platform for immediate implementation that requires minimal time and effort. The results from the LAPART algorithm were compared with statistical regression and a component-based model. The comparison was based on the predictions' linear relationship with the measured data, mean squared error, mean bias error, and the cost savings achieved by the respective prediction techniques. The results show that the LAPART algorithm provided a reliable and cost-effective means to predict the building load for the next day.

  16. Off-Line Robust Constrained MPC for Linear Time-Varying Systems with Persistent Disturbances

    Directory of Open Access Journals (Sweden)

    P. Bumroongsri

    2014-01-01

    Full Text Available An off-line robust constrained model predictive control (MPC algorithm for linear time-varying (LTV systems is developed. A novel feature is the fact that both model uncertainty and bounded additive disturbance are explicitly taken into account in the off-line formulation of MPC. In order to reduce the on-line computational burdens, a sequence of explicit control laws corresponding to a sequence of positively invariant sets is computed off-line. At each sampling time, the smallest positively invariant set containing the measured state is determined and the corresponding control law is implemented in the process. The proposed MPC algorithm can guarantee robust stability while ensuring the satisfaction of input and output constraints. The effectiveness of the proposed MPC algorithm is illustrated by two examples.

  17. A Trajectory Correction based on Multi-Step Lining-up for the CLIC Main Linac

    CERN Document Server

    D'Amico, T E

    1999-01-01

    In the CLIC main linac it is very important to minimise the trajectory excursion and consequently the emittance dilution in order to obtain the required luminosity. Several algorithms have been proposed and lately the ballistic method has proved to be very effective. The trajectory method described in this Note retains the main advantages of the latter while adding some interesting features. It is based on the separation of the unknown variables like the quadrupole misalignments, the offset and slope of the injection straight line and the misalignments of the beam position monitors (BPM). This is achieved by referring the trajectory relatively to the injection line and not to the average pre-alignment line and by using two trajectories each corresponding to slightly different quadrupole strengths. A reference straight line is then derived onto which the beam is bent by a kick obtained by moving the first quadrupole. The other quadrupoles are then aligned on that line. The quality of the correction depends mai...

  18. Massively parallel algorithms for trace-driven cache simulations

    Science.gov (United States)

    Nicol, David M.; Greenberg, Albert G.; Lubachevsky, Boris D.

    1991-01-01

    Trace driven cache simulation is central to computer design. A trace is a very long sequence of reference lines from main memory. At the t-th instant, reference x_t is hashed into a set of cache locations, the contents of which are then compared with x_t. If at the t-th instant x_t is not present in the cache, then it is said to be a miss, and is loaded into the cache set, possibly forcing the replacement of some other memory line, and making x_t present for the (t+1)-st instant. The problem of parallel simulation of a subtrace of N references directed to a C line cache set is considered, with the aim of determining which references are misses and related statistics. A simulation method is presented for the Least Recently Used (LRU) policy, which regardless of the set size C runs in time O(log N) using N processors on the exclusive read, exclusive write (EREW) parallel model. A simpler LRU simulation algorithm is given that runs in O(C log N) time using N/log N processors. Timings are presented of the second algorithm's implementation on the MasPar MP-1, a machine with 16384 processors. A broad class of reference-based line replacement policies is considered, which includes LRU as well as the Least Frequently Used and Random replacement policies. A simulation method is presented for any such policy that, on any trace of length N directed to a C line set, runs in O(C log N) time with high probability using N processors on the EREW model. The algorithms are simple, have very little space overhead, and are well suited for SIMD implementation.
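
    As a sequential baseline for what the parallel algorithms compute, here is a straightforward simulation of a single C-line LRU set over a reference trace, reporting which references miss. This is the plain sequential counterpart, not the O(log N) EREW algorithm of the paper.

    ```python
    from collections import OrderedDict

    def lru_misses(trace, C):
        """Simulate one C-line LRU set; return a miss flag per reference."""
        cache = OrderedDict()          # keys kept in LRU -> MRU order
        misses = []
        for x in trace:
            if x in cache:
                cache.move_to_end(x)   # hit: make x most recently used
                misses.append(False)
            else:
                if len(cache) == C:
                    cache.popitem(last=False)   # evict the least recently used line
                cache[x] = None
                misses.append(True)
        return misses

    trace = ['a', 'b', 'c', 'a', 'd', 'b', 'a', 'c']
    flags = lru_misses(trace, C=3)
    print(list(zip(trace, flags)), 'miss ratio:', sum(flags) / len(flags))
    ```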

  19. From Off-line to On-line Handwriting Recognition

    NARCIS (Netherlands)

    Lallican, P.; Viard-Gaudin, C.; Knerr, S.

    2004-01-01

    On-line handwriting includes more information on the time order of the writing signal and on the dynamics of the writing process than off-line handwriting. Therefore, on-line recognition systems achieve higher recognition rates. This can be concluded from results reported in the literature.

  20. Harmony Search for Balancing Two-sided Assembly Lines

    Directory of Open Access Journals (Sweden)

    Hindriyanto Dwi Purnomo

    2012-01-01

    Full Text Available Two-sided assembly line balancing problems are important for large-sized products such as cars and buses, in which task operations can be performed on both sides of the line. In this paper, a Harmony Search algorithm is proposed to solve two-sided assembly line balancing problems of type-I (TALBP-I). The proposed method adopts the COMSOAL heuristic and specific features of the TALBP in the Harmony operators – harmony memory consideration, random selection and pitch adjustment – in order to balance local and global search. The proposed method is evaluated on 6 benchmark problems that are commonly used for the TALBP. The experimental results show that the proposed method works well and produces better solutions than the heuristic method and a genetic algorithm.
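
    The harmony operators named above have a compact generic form. The sketch below shows plain harmony search on a continuous toy objective, without the COMSOAL construction or the TALBP-specific encoding; the memory size and rates are illustrative.

    ```python
    import numpy as np

    def harmony_search(f, lb, ub, hms=10, hmcr=0.9, par=0.3, bw=0.1, iters=2000, seed=0):
        """Generic harmony search (minimization)."""
        rng = np.random.default_rng(seed)
        dim = len(lb)
        HM = rng.uniform(lb, ub, size=(hms, dim))        # harmony memory
        F = np.array([f(x) for x in HM])
        for _ in range(iters):
            new = np.empty(dim)
            for d in range(dim):
                if rng.random() < hmcr:                  # memory consideration
                    new[d] = HM[rng.integers(hms), d]
                    if rng.random() < par:               # pitch adjustment
                        new[d] += bw * rng.uniform(-1, 1)
                else:                                    # random selection
                    new[d] = rng.uniform(lb[d], ub[d])
            new = np.clip(new, lb, ub)
            fn = f(new)
            worst = np.argmax(F)
            if fn < F[worst]:                            # replace the worst harmony
                HM[worst], F[worst] = new, fn
        best = np.argmin(F)
        return HM[best], F[best]

    print(harmony_search(lambda x: float(np.sum(x**2)),
                         lb=np.array([-5.0] * 3), ub=np.array([5.0] * 3)))
    ```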

  1. Proceedings of the 9. international symposium on power-line communications and its applications

    Energy Technology Data Exchange (ETDEWEB)

    Lampe, L. (comp.) [British Columbia Univ., Vancouver, BC (Canada). Dept. of Electrical and Computer Engineering

    2005-07-01

    The 2005 International Symposium on Power Line Communications and Its Applications (ISPLC 2005) is the leading international scientific conference on technology and applications for communication over power lines. The conference addresses the latest technological advances in power-line communications and current and future applications of power-line communication systems including broadband Internet access, indoor home networking, power-line based communications in vehicles, power-line control networks, and automatic meter reading systems. Specific conference papers included measurements, channel characterization and modeling; standards and regulations; electromagnetic compatibility; information and communication theory; modulation and error-control coding techniques; single carrier, OFDM, and spread spectrum techniques; detection, estimation, and iterative processing techniques; signal processing algorithms and devices; multiple-access techniques; modem and LSI design; networks and protocols; system architectures; automatic meter reading systems; applications and services; and, experimental systems and field trials. A total of 90 papers were featured and organized into 14 regular sessions and one poster session. Seven of these presentations have been catalogued separately for inclusion in this database. In addition to the technical program, 3 keynotes speeches and 2 panel discussions were presented and chaired by distinguished speakers and moderators. tabs., figs.

  2. Analysis of the type II robotic mixed-model assembly line balancing problem

    Science.gov (United States)

    Çil, Zeynel Abidin; Mete, Süleyman; Ağpak, Kürşad

    2017-06-01

    In recent years, there has been an increasing trend towards using robots in production systems. Robots are used in different areas such as packaging, transportation, loading/unloading and especially assembly lines. One important step in taking advantage of robots on the assembly line is considering them while balancing the line. On the other hand, market conditions have increased the importance of mixed-model assembly lines. Therefore, in this article, the robotic mixed-model assembly line balancing problem is studied. The aim of this study is to develop a new efficient heuristic algorithm based on beam search in order to minimize the sum of cycle times over all models. In addition, mathematical models of the problem are presented for comparison. The proposed heuristic is tested on benchmark problems and compared with the optimal solutions. The results show that the algorithm is very competitive and is a promising tool for further research.

  3. An alternative approach to spectrum base line estimation

    International Nuclear Information System (INIS)

    Bukvic, S.; Spasojevic, Dj.

    2005-01-01

    We present a new form of merit function which measures the agreement between a large number of data points and a model function with a particular choice of parameters. We demonstrate the efficiency of the proposed merit function on the common problem of finding the base line of a spectrum. When the base line is expected to be a horizontal straight line, the use of minimization algorithms is not necessary, i.e. the solution is achieved in a small number of steps. We discuss the advantages of the proposed merit function in general, when explicit use of a minimization algorithm is necessary. The hardcopy text is accompanied by an electronic archive, stored on the SAE homepage at http://www1.elsevier.com/homepage/saa/sab/content/lower.htm. The archive contains a fully functional demo program with tutorial, examples and Visual Basic source code of the key subroutine.
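
    The record does not give the form of the proposed merit function, so as a generic illustration of the same task, the sketch below estimates a flat spectral baseline by iteratively clipping points that lie well above the current estimate, a common simple approach; the thresholds and data are arbitrary.

    ```python
    import numpy as np

    def flat_baseline(y, n_iter=10, k=2.0):
        """Estimate a horizontal baseline by iterative asymmetric clipping.

        Points more than k sigma above the running estimate are treated as
        spectral lines and excluded from the next estimate.
        """
        mask = np.ones(len(y), dtype=bool)
        for _ in range(n_iter):
            base, sigma = np.mean(y[mask]), np.std(y[mask])
            new_mask = y < base + k * sigma       # keep only near-baseline points
            if new_mask.sum() < 10 or np.array_equal(new_mask, mask):
                break
            mask = new_mask
        return np.mean(y[mask])

    # Noisy flat baseline at 5.0 with emission lines on top.
    rng = np.random.default_rng(0)
    y = 5.0 + 0.2 * rng.standard_normal(1000)
    y[[100, 101, 500, 501, 502]] += 8.0           # spectral lines
    print(flat_baseline(y))
    ```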

  4. A novel orthoimage mosaic method using a weighted A∗ algorithm - Implementation and evaluation

    Science.gov (United States)

    Zheng, Maoteng; Xiong, Xiaodong; Zhu, Junfeng

    2018-04-01

    The implementation and evaluation of a weighted A∗ algorithm for orthoimage mosaicking with UAV (Unmanned Aerial Vehicle) imagery is proposed. The initial seam-line network is first generated by the standard Voronoi diagram algorithm; an edge diagram is generated based on DSM (Digital Surface Model) data; the vertices (conjunction nodes of seam-lines) of the initial network are relocated if they lie on high objects (buildings, trees and other artificial structures); and the initial seam-lines are refined using the weighted A∗ algorithm based on the edge diagram and the relocated vertices. Our method was tested with three real UAV datasets. Two quantitative terms are introduced to evaluate the results of the proposed method. Preliminary results show that the method is suitable for regularly and irregularly aligned UAV images over most terrain types (flat or mountainous areas), and is better than the state-of-the-art method in both quality and efficiency on the test datasets.

  5. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  6. Pipeline leak detection and location by on-line-correlation with a process computer

    International Nuclear Information System (INIS)

    Siebert, H.; Isermann, R.

    1977-01-01

    A method for leak detection using a correlation technique in pipelines is described. For leak detection, and also for leak localisation and estimation of the leak flow, recursive estimation algorithms are used. The efficiency of the methods is demonstrated with a process computer and a pipeline model operating on-line. It is shown that very small leaks can be detected. (orig.)

  7. Symbol Stream Combining in a Convolutionally Coded System

    Science.gov (United States)

    Mceliece, R. J.; Pollara, F.; Swanson, L.

    1985-01-01

    Symbol stream combining has been proposed as a method for arraying signals received at different antennas. If convolutional coding and Viterbi decoding are used, it is shown that a Viterbi decoder based on the proposed weighted sum of symbol streams yields maximum likelihood decisions.

  8. Video event classification and image segmentation based on noncausal multidimensional hidden Markov models.

    Science.gov (United States)

    Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A

    2009-06-01

    In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.

  9. On-Line Impact Load Identification

    Directory of Open Access Journals (Sweden)

    Krzysztof Sekuła

    2013-01-01

    Full Text Available The so-called Adaptive Impact Absorption (AIA) is a research area of safety engineering devoted to problems of shock absorption in various unpredictable collision scenarios. It makes use of smart technologies (systems equipped with sensors, controllable dissipaters and specialised tools for signal processing). Examples of engineering applications of AIA systems are protective road barriers, automotive bumpers and adaptive landing gears. One of the most challenging problems for AIA systems is on-line identification of impact loads, which is crucial for introducing the optimum real-time strategy of adaptive impact absorption. This paper presents the concept of an impactometer and develops a methodology able to perform real-time impact load identification. The considered dynamic excitation is generated by a mass M1 impacting with initial velocity V0. An analytical formulation of the problem, supported by numerical simulations and experimental verification, is presented. Two identification algorithms based on the measured response of the impacted structure are proposed and discussed. Finally, a concept of an AIA device utilizing the idea of the impactometer is briefly presented.

  10. Genetic algorithms and supernovae type Ia analysis

    International Nuclear Information System (INIS)

    Bogdanos, Charalampos; Nesseris, Savvas

    2009-01-01

    We introduce genetic algorithms as a means to analyze supernovae type Ia data and extract model-independent constraints on the evolution of the Dark Energy equation of state w(z) ≡ P_DE/ρ_DE. Specifically, we will give a brief introduction to the genetic algorithms along with some simple examples to illustrate their advantages and finally we will apply them to the supernovae type Ia data. We find that genetic algorithms can lead to results in line with already established parametric and non-parametric reconstruction methods and could be used as a complementary way of treating SNIa data. As a non-parametric method, genetic algorithms provide a model-independent way to analyze data and can minimize bias due to premature choice of a dark energy model

  11. Solution of single linear tridiagonal systems and vectorization of the ICCG algorithm on the Cray 1

    International Nuclear Information System (INIS)

    Kershaw, D.S.

    1981-01-01

    The numerical algorithms used to solve the physics equations in codes which model laser fusion are examined. It is found that a large number of subroutines require the solution of tridiagonal linear systems of equations. One-dimensional radiation transport, thermal and suprathermal electron transport, ion thermal conduction, and charged particle and neutron transport all require the solution of tridiagonal systems of equations. The standard algorithm that has been used in the past on CDC 7600's will not vectorize and so cannot take advantage of the large speed increases possible on the Cray-1 through vectorization. There is, however, an alternate algorithm for solving tridiagonal systems, called cyclic reduction, which allows for vectorization and which is optimal for the Cray-1. Software based on this algorithm is now being used in LASNEX to solve tridiagonal linear systems in the subroutines mentioned above. The new algorithm runs as much as five times faster than the standard algorithm on the Cray-1. The ICCG method is being used to solve the diffusion equation with a nine-point coupling scheme on the CDC 7600. In going from the CDC 7600 to the Cray-1, a large part of the algorithm consists of solving tridiagonal linear systems on each L line of the Lagrangian mesh in a manner which is not vectorizable. An alternate ICCG algorithm for the Cray-1 was therefore developed which utilizes a block form of the cyclic reduction algorithm. This new algorithm allows full vectorization and runs as much as five times faster than the old algorithm on the Cray-1. It is now being used in Cray LASNEX to solve the two-dimensional diffusion equation in all the physics subroutines mentioned above.
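
    For contrast, the "standard algorithm" for a tridiagonal system is the Thomas forward-elimination/back-substitution sweep sketched below; its loop-carried recurrences are exactly what prevents vectorization and motivates cyclic reduction. This is a generic sketch, not the LASNEX code.

    ```python
    import numpy as np

    def thomas(a, b, c, d):
        """Solve a tridiagonal system: a=sub-, b=main, c=super-diagonal, d=rhs.

        The forward sweep depends on the previous row, so it cannot be
        vectorized across rows -- the motivation for cyclic reduction.
        """
        n = len(b)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                     # forward elimination (sequential)
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):            # back substitution (sequential)
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # 5x5 test system with the classic [-1, 2, -1] stencil.
    n = 5
    a = np.full(n, -1.0); a[0] = 0.0              # sub-diagonal (a[0] unused)
    c = np.full(n, -1.0); c[-1] = 0.0             # super-diagonal (c[-1] unused)
    b = np.full(n, 2.0)
    d = np.ones(n)
    print(thomas(a, b, c, d))
    ```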

  12. Distributed power-line outage detection based on wide area measurement system.

    Science.gov (United States)

    Zhao, Liang; Song, Wen-Zhan

    2014-07-21

    In modern power grids, the fast and reliable detection of power-line outages is an important functionality, which prevents cascading failures and facilitates accurate state estimation to monitor the real-time conditions of the grid. However, most existing approaches for outage detection suffer from two drawbacks, namely: (i) high computational complexity; and (ii) reliance on a centralized means of implementation. The high computational complexity limits the practical use of outage detection to the case of single-line or double-line outages. Meanwhile, the centralized means of implementation raises security and privacy issues. Considering these drawbacks, the present paper proposes a distributed framework, which carries out in-network information processing and shares only boundary estimates with the neighboring control areas. This novel framework relies on a convex-relaxed formulation of the line outage detection problem and leverages the alternating direction method of multipliers (ADMM) for its distributed solution. The proposed framework invokes low computational complexity, requiring only linear and simple matrix-vector operations. We also extend this framework to incorporate the sparse property of the measurement matrix and employ the LSQR algorithm to enable a warm start, which further accelerates the algorithm. Analysis and simulation tests validate the correctness and effectiveness of the proposed approaches.
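
    The paper's distributed detector builds on ADMM for a convex relaxation of the outage detection problem. The single-machine sketch below shows the standard ADMM iteration for a lasso-type problem, which conveys the x-update/z-update/dual-update structure; the random problem data are stand-ins, not grid measurements.

    ```python
    import numpy as np

    def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
        """Standard ADMM for min 0.5*||Ax-b||^2 + lam*||z||_1, s.t. x = z."""
        n = A.shape[1]
        x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
        # Factor once: the x-update solves the same linear system each iteration.
        L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
        Atb = A.T @ b
        soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
        for _ in range(iters):
            x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
            z = soft(x + u, lam / rho)            # sparsity-promoting z-update
            u = u + x - z                         # scaled dual update
        return z

    # Sparse recovery toy problem.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 30))
    x_true = np.zeros(30); x_true[[3, 11, 25]] = [1.0, -2.0, 0.5]
    b = A @ x_true + 0.01 * rng.standard_normal(60)
    print(np.nonzero(np.round(admm_lasso(A, b), 2))[0])   # recovered support
    ```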

  13. Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm

    Directory of Open Access Journals (Sweden)

    KeKe Gen

    2015-01-01

    Full Text Available The article considers the problem of data association for simultaneous localization and mapping in determining the route of unmanned aerial vehicles (UAVs). Currently, such vehicles are already widely used, but are mainly controlled by a remote operator. An urgent task is to develop a control system that allows for autonomous flight. The SLAM (simultaneous localization and mapping) algorithm, which allows prediction of the location, speed, flight parameters and the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies for achieving truly autonomous UAV flight. The aim of this work is to study the possibility of solving this problem by using an improved ant algorithm. The data association for the SLAM algorithm is meant to establish a matching set between observed landmarks and landmarks in the state vector. The ant algorithm is one of the widely used optimization algorithms with positive feedback and the ability to search in parallel, so the algorithm is suitable for solving the data association problem for SLAM. But the traditional ant algorithm easily falls into local optima in the process of finding routes. Random perturbations are added in the process of updating the global pheromone to avoid local optima. Setting limits on the pheromone along the route can increase the search space with a reasonable amount of calculation for finding the optimal route. The paper proposes an algorithm of local data association for the SLAM algorithm based on an improved ant algorithm. To increase the speed of calculation, local data association is used instead of global data association. The first stage of the algorithm defines targets in the matching space and the observed landmarks with the possibility of association by the criterion of individual compatibility (IC). The second stage defines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and effectiveness of the proposed algorithm.

  14. Multi-scale Clustering of Points Synthetically Considering Lines and Polygons Distribution

    Directory of Open Access Journals (Sweden)

    YU Li

    2015-10-01

    Full Text Available Considering the complexity and discontinuity of spatial data distribution, a clustering algorithm for points is proposed. To accurately identify and express the spatial correlation among points, lines and polygons, a Voronoi diagram generated by all spatial features is introduced. According to the distribution characteristics of point positions, an area threshold used to control the clustering granularity is calculated. Meanwhile, judging scale convergence by a constant area threshold, the algorithm classifies spatial features at multiple scales, with an O(n log n) running time. Results indicate that the spatial scale converges self-adaptively according to the distribution of points. Without custom parameters, the algorithm is capable of discovering arbitrarily shaped clusters bounded by lines and polygons, and is robust to outliers.

  15. A Review of Algorithms for Retinal Vessel Segmentation

    Directory of Open Access Journals (Sweden)

    Monserrate Intriago Pazmiño

    2014-10-01

    Full Text Available This paper presents a review of algorithms for extracting the blood vessel network from retinal images. Since the retina is a complex and delicate ocular structure, a huge effort in computer vision is devoted to studying the blood vessel network to help the diagnosis of pathologies like diabetic retinopathy, hypertensive retinopathy, retinopathy of prematurity or glaucoma. To carry out this process, many works for normal and abnormal images have been proposed recently. These methods include combinations of algorithms like Gaussian and Gabor filters, histogram equalization, clustering, binarization, motion contrast, matched filters, combined corner/edge detectors, multi-scale line operators, neural networks, ants, genetic algorithms and morphological operators. To apply these algorithms, pre-processing tasks are needed. Most of these algorithms have been tested on publicly available retinal databases. We include a table summarizing the algorithms and the results of their assessment.

  16. Experimental Results of Novel DoA Estimation Algorithms for Compact Reconfigurable Antennas

    Directory of Open Access Journals (Sweden)

    Henna Paaso

    2017-01-01

    Full Text Available Reconfigurable antenna systems have gained much attention for potential use in next-generation wireless systems. However, conventional direction-of-arrival (DoA) estimation algorithms for antenna arrays cannot be used directly with reconfigurable antennas due to their different design. In this paper, we present an adjacent pattern power ratio (APPR) algorithm for two-port composite right/left-handed (CRLH) reconfigurable leaky-wave antennas (LWAs). Additionally, we compare the performance of the APPR algorithm and LWA-based MUSIC algorithms. We study how the computational complexity and the performance of the algorithms depend on the number of selected radiation patterns. In addition, we evaluate the performance of the APPR and MUSIC algorithms with numerical simulations as well as with real-world indoor measurements having both line-of-sight and non-line-of-sight components. Our performance evaluations show that the DoA estimates are in considerably good agreement with the real DoAs, especially with the APPR algorithm. In summary, the APPR and MUSIC algorithms for DoA estimation, along with the planar and compact LWA layout, can be a valuable solution to enhance the performance of wireless communication in next-generation systems.
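
    As a point of reference for the MUSIC comparisons, below is a textbook MUSIC sketch for a uniform linear array. Note that the paper applies MUSIC to leaky-wave antenna patterns rather than a ULA, so this is only the generic algorithm with made-up array parameters.

    ```python
    import numpy as np

    def music_spectrum(X, n_src, d=0.5, angles=np.linspace(-90, 90, 361)):
        """MUSIC pseudo-spectrum for an M-element half-wavelength-spaced ULA.

        X: snapshot matrix (M, N), n_src: assumed number of sources.
        """
        M = X.shape[0]
        R = X @ X.conj().T / X.shape[1]              # sample covariance
        w, V = np.linalg.eigh(R)                     # eigenvalues ascending
        En = V[:, :M - n_src]                        # noise subspace
        P = []
        for th in np.deg2rad(angles):
            a = np.exp(2j * np.pi * d * np.arange(M) * np.sin(th))  # steering vector
            P.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
        return angles, np.array(P)

    # Two sources at -20 and 35 degrees, 8-element ULA, 200 snapshots.
    rng = np.random.default_rng(0)
    M, N = 8, 200
    doas = np.deg2rad([-20.0, 35.0])
    A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(doas)))
    S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
    X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
    ang, P = music_spectrum(X, n_src=2)
    # Report the two strongest local maxima of the pseudo-spectrum.
    idx = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
    idx.sort(key=lambda i: -P[i])
    print([ang[i] for i in idx[:2]])                 # approximately -20 and 35
    ```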

  17. Line-Enhanced Deformable Registration of Pulmonary Computed Tomography Images Before and After Radiation Therapy With Radiation-Induced Fibrosis

    Science.gov (United States)

    Sensakovic, William F.; Maxim, Peter; Diehn, Maximilian; Loo, Billy W.; Xing, Lei

    2018-01-01

    Purpose: The deformable registration of pulmonary computed tomography images before and after radiation therapy is challenging due to anatomic changes from radiation fibrosis. We hypothesize that a line-enhanced registration algorithm can reduce landmark error over the entire lung, including the irradiated regions, when compared to an intensity-based deformable registration algorithm. Materials: Two intensity-based B-spline deformable registration algorithms of pre-radiation therapy and post-radiation therapy images were compared. The first was a control intensity-based algorithm that utilized computed tomography images without modification. The second was a line enhancement algorithm that incorporated a Hessian-based line enhancement filter prior to deformable image registration. Registrations were evaluated based on the landmark error between user-identified landmark pairs and the overlap ratio. Results: Twenty-one patients with pre-radiation therapy and post-radiation therapy scans were included. The median time interval between scans was 1.2 years (range: 0.3-3.3 years). Median landmark errors for the line enhancement algorithm were significantly lower than those for the control algorithm over the entire lung (1.67 vs 1.83 mm) and in regions receiving more than 5 Gy (2.25 vs 3.31 mm). Landmark error in the above-5 Gy dose interval demonstrated a significant inverse relationship with post-radiation therapy fibrosis enhancement after line enhancement filtration (Pearson correlation coefficient = −0.48; P = .03). Conclusion: The line enhancement registration algorithm is a promising method for registering images before and after radiation therapy. PMID:29343206

  18. Optimizing the Performance of Radionuclide Identification Software in the Hunt for Nuclear Security Threats

    Energy Technology Data Exchange (ETDEWEB)

    Fotion, Katherine A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-08-18

    The Radionuclide Analysis Kit (RNAK), my team’s most recent nuclide identification software, is entering the testing phase. A question arises: will removing rare nuclides from the software’s library improve its overall performance? An affirmative response indicates fundamental errors in the software’s framework, while a negative response confirms the effectiveness of the software’s key machine learning algorithms. After thorough testing, I found that the performance of RNAK cannot be improved with the library choice effect, thus verifying the effectiveness of RNAK’s algorithms—multiple linear regression, Bayesian network using the Viterbi algorithm, and branch and bound search.

  19. Design of Meander-Line Antennas for Radio Frequency Identification Based on Multiobjective Optimization

    Directory of Open Access Journals (Sweden)

    X. L. Travassos

    2012-01-01

    Full Text Available This paper presents optimization problem formulations to design meander-line antennas for passive UHF radio frequency identification tags based on given specifications of input impedance, frequency range, and geometric constraints. In this application, there is a need for directive transponders to select properly the target tag, which in turn must be ideally isotropic. The design of an effective meander-line antenna for RFID purposes requires balancing geometrical characteristics with the microchip impedance. Therefore, there is an issue of optimization in determining the antenna parameters for best performance. The antenna is analyzed by a method of moments. Some results using a deterministic optimization algorithm are shown.

  20. Dynamic route guidance algorithm based on artificial immune system

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    To improve the performance of the K-shortest-paths search in intelligent traffic guidance systems, this paper proposes an optimal search algorithm based on intelligent optimization search theory and the memory mechanism of vertebrate immune systems. This algorithm, applied to an urban traffic network model established by the node-expanding method, can expediently realize K-shortest-paths search in urban traffic guidance systems. Because of the immune memory and global parallel search ability of artificial immune systems, the K shortest paths can be found without any repetition, which clearly indicates the superiority of the algorithm over conventional ones. Not only does it offer better parallelism, the algorithm also prevents the premature convergence that often occurs in genetic algorithms. Thus, it is especially suitable for the real-time requirements of traffic guidance systems and other engineering optimization applications. A case study verifies the efficiency and practicability of the aforementioned algorithm.

  1. Optimal siting of capacitors in radial distribution network using Whale Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    D.B. Prakash

    2017-12-01

    Full Text Available Nowadays, continuous effort is being made to bring down the line losses of electrical distribution networks. Proper allocation of capacitors is therefore of utmost importance, because it helps in reducing line losses and maintaining bus voltage. This in turn improves the stability and reliability of the system. In this paper the Whale Optimization Algorithm (WOA) is used to find the optimal sizing and placement of capacitors for a typical radial distribution system. Multiple objectives such as operating cost reduction and power loss minimization, with inequality constraints on voltage limits, are considered, and the proposed algorithm is validated by applying it to standard radial systems: the IEEE-34 bus and IEEE-85 bus radial distribution test systems. The results obtained are compared with those of existing algorithms. The results show that the proposed algorithm is more effective in bringing down operating costs and in maintaining a better voltage profile. Keywords: Whale Optimization Algorithm (WOA), Optimal allocation and sizing of capacitors, Power loss reduction and voltage stability improvement, Radial distribution system, Operating cost minimization

  2. A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight

    Science.gov (United States)

    Parker, Joel J. K.; Hughes, Steven P.

    2011-01-01

    A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.
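
    The stepping/bracketing structure described above amounts to sampling a smooth event function along the trajectory and refining each sign change by root bracketing. The sketch below illustrates that pattern with a toy event function and plain bisection; it is not the actual implementation evaluated in the paper.

    ```python
    import math

    def locate_events(g, t0, t1, step=60.0, tol=1e-6):
        """Find roots of the event function g(t) on [t0, t1].

        Stepping: sample g at fixed steps to bracket sign changes.
        Bracketing: refine each bracket by bisection to tolerance tol.
        """
        def bisect(a, b):
            ga = g(a)
            while b - a > tol:
                m = 0.5 * (a + b)
                if ga * g(m) <= 0:
                    b = m
                else:
                    a, ga = m, g(m)
            return 0.5 * (a + b)

        roots = []
        t, gt = t0, g(t0)
        while t < t1:
            t_next = min(t + step, t1)
            g_next = g(t_next)
            if gt * g_next <= 0 and gt != 0:   # sign change: an event lies inside
                roots.append(bisect(t, t_next))
            t, gt = t_next, g_next
        return roots

    # Toy event function: an elevation-like oscillation crossing a threshold.
    g = lambda t: math.sin(t / 500.0) - 0.3
    print(locate_events(g, 0.0, 10000.0))
    ```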

  4. Investigation of the stochastic subspace identification method for on-line wind turbine tower monitoring

    Science.gov (United States)

    Dai, Kaoshan; Wang, Ying; Lu, Wensheng; Ren, Xiaosong; Huang, Zhenhua

    2017-04-01

    Structural health monitoring (SHM) of wind turbines has been applied in the wind energy industry to obtain their real-time vibration parameters and to ensure their optimum performance. For SHM, the accuracy of its results and the efficiency of its measurement methodology and data processing algorithm are the two major concerns. Selection of proper measurement parameters could improve such accuracy and efficiency. The Stochastic Subspace Identification (SSI) is a widely used data processing algorithm for SHM. This research discussed the accuracy and efficiency of SHM using SSI method to identify vibration parameters of on-line wind turbine towers. Proper measurement parameters, such as optimum measurement duration, are recommended.

  5. Envelope detection using temporal magnetization dynamics of resonantly interacting spin-torque oscillator

    Science.gov (United States)

    Nakamura, Y.; Nishikawa, M.; Osawa, H.; Okamoto, Y.; Kanao, T.; Sato, R.

    2018-05-01

    In this article, we propose a method for detecting the recorded data pattern from the envelope of the temporal magnetization dynamics of a resonantly interacting spin-torque oscillator in microwave-assisted magnetic recording for three-dimensional magnetic recording. We simulate the envelope of the waveform from recorded dots with a staggered magnetization configuration, calculated using a micromagnetic simulation. We study data detection methods for the envelope and propose a soft-output Viterbi algorithm (SOVA) for a partial response (PR) system as the signal processing scheme for three-dimensional magnetic recording.
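
    To make the detector concrete, the sketch below implements a plain hard-output Viterbi detector for a generic two-tap partial-response channel; the SOVA of the article additionally outputs reliabilities (the metric difference between the two paths entering each state), which is omitted here. The channel taps and noise level are illustrative assumptions.

```python
import numpy as np

def viterbi_pr(y, h=(1.0, 1.0)):
    """Hard-output Viterbi detector for a two-tap partial-response channel.
    State = previous bit; branch output = h[0]*b_k + h[1]*b_{k-1}."""
    metric = np.array([0.0, np.inf])         # assume channel starts in state 0
    paths = [[], []]
    for sample in y:
        new_metric = np.full(2, np.inf)
        new_paths = [None, None]
        for s_prev in (0, 1):
            if not np.isfinite(metric[s_prev]):
                continue
            for b in (0, 1):                 # hypothesised current bit
                out = h[0] * b + h[1] * s_prev
                m = metric[s_prev] + (sample - out) ** 2
                if m < new_metric[b]:        # survivor selection
                    new_metric[b] = m
                    new_paths[b] = paths[s_prev] + [b]
        metric, paths = new_metric, new_paths
    return paths[int(np.argmin(metric))]

bits = [0, 1, 1, 0, 1]
clean = [bits[0]] + [bits[k] + bits[k - 1] for k in range(1, len(bits))]
y = np.array(clean, float) + 0.2 * np.random.default_rng(1).standard_normal(len(bits))
print(viterbi_pr(y))                         # expected: [0, 1, 1, 0, 1]
```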

  6. Parallel algorithms for unconstrained optimization by multisplitting with inexact subspace search - the abstract

    Energy Technology Data Exchange (ETDEWEB)

    Renaut, R.; He, Q. [Arizona State Univ., Tempe, AZ (United States)

    1994-12-31

    A new parallel iterative algorithm for unconstrained optimization by multisplitting is proposed. In this algorithm the original problem is split into a set of small optimization subproblems which are solved using well-known sequential algorithms. These algorithms are iterative in nature, e.g. the DFP variable metric method. Here the authors use sequential algorithms based on an inexact subspace search, which is an extension of the usual idea of an inexact line search. Essentially, the idea of the inexact line search for nonlinear minimization is that at each iteration one only finds an approximate minimum in the line search direction. Hence by inexact subspace search, they mean that, instead of finding the minimum of the subproblem at each iteration, they do an incomplete downhill search to give an approximate minimum. Some convergence and numerical results for this algorithm will be presented. Further, the original theory will be generalized to the situation with a singular Hessian. Applications to nonlinear least squares problems will be presented. Experimental results will be presented for implementations on an Intel iPSC/860 Hypercube with 64 nodes as well as on the Intel Paragon.
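
    As a one-dimensional analogue of the inexact subspace search, here is a standard backtracking (Armijo) line search sketch: rather than minimizing exactly along the search direction, it accepts the first step that gives sufficient decrease. The toy quadratic is illustrative, not from the abstract.

```python
import numpy as np

def backtracking(f, grad, x, p, alpha=1.0, rho=0.5, c=1e-4):
    """Inexact (Armijo) line search sketch: accept the first step length
    giving 'sufficient decrease' instead of minimizing along p exactly."""
    fx, gTp = f(x), grad(x) @ p
    while f(x + alpha * p) > fx + c * alpha * gTp:   # Armijo condition
        alpha *= rho                                 # shrink and retry
    return alpha

# Toy quadratic with a steepest-descent direction:
f = lambda x: 0.5 * float(x @ x)
grad = lambda x: x
x0 = np.array([3.0, -2.0])
print(backtracking(f, grad, x0, -grad(x0)))          # the full step is accepted
```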

  7. Inverse kinetics equations for on line measurement of reactivity using personal computer

    International Nuclear Information System (INIS)

    Ratemi, Wajdi; El Gadamsi, Walied; Beleid, Abdul Kariem

    1993-01-01

    Computers, with their astonishing speed of calculation and their easy connection to real systems, are very appropriate for digital measurement of real system variables. In the nuclear industry, such computer applications will produce compact control rooms for real power plants, where information and result displays can be obtained at the push of a button. In our study, we use two personal computers for the purposes of simulation and measurement. One of them is used as a digital simulator of a real reactor, where we simulate the reactor power through a cross-talk network. The computed power is passed at a chosen sampling time to the other computer, which uses the inverse kinetics equations to calculate the reactivity parameter based on the received power and then performs an on-line display of the power curve and the reactivity curve using color graphics. In this study, we use the one-group version of the inverse kinetics algorithm, which can easily be extended to a larger number of groups. The programming language used is Turbo BASIC, which is comparable in efficiency to FORTRAN, besides having effective graphics routines. With the extended version of the inverse kinetics algorithm, this measurement technique can be applied for the on-line display of the reactivity of the Tajoura Research Reactor. (author)
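
    A minimal sketch of a one-group inverse kinetics estimator of the kind described above, under the usual point-kinetics assumptions. The kinetic parameters are illustrative placeholders, not those of the Tajoura reactor, and a real implementation would filter the measured power before differencing.

```python
import numpy as np

def inverse_kinetics(power, dt, beta=0.0065, lam=0.08, Lambda=1e-4):
    """One-delayed-group inverse point kinetics sketch.

    Point kinetics:  dP/dt = ((rho - beta)/Lambda) * P + lam * C
                     dC/dt = (beta/Lambda) * P - lam * C
    Solving the first equation for rho gives the estimator below."""
    P = np.asarray(power, dtype=float)
    C = beta * P[0] / (Lambda * lam)         # assume equilibrium at start
    rho = np.zeros_like(P)
    for k in range(1, len(P)):
        dPdt = (P[k] - P[k - 1]) / dt
        C += dt * (beta * P[k] / Lambda - lam * C)   # explicit Euler update
        rho[k] = beta + Lambda * dPdt / P[k] - lam * Lambda * C / P[k]
    return rho / beta                        # reactivity in dollars

# Constant power must give zero reactivity:
print(inverse_kinetics(np.full(5, 1.0e6), dt=0.1))
```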

  8. The Algorithm of Link Prediction on Social Network

    Directory of Open Access Journals (Sweden)

    Liyan Dong

    2013-01-01

    At present, most link prediction algorithms are based on the similarity between two entities. Social network topology information is one of the main sources used to design the similarity function between entities, but existing link prediction algorithms do not exploit network topology information sufficiently. To address the shortcomings of traditional link prediction algorithms, we propose two improved algorithms: the CNGF algorithm based on local information and the KatzGF algorithm based on global network information. Because social networks are not static, we also provide a link prediction algorithm based on nodes' multiple-attribute information. Finally, we verified these algorithms on the DBLP data set, and the experimental results show that the performance of the improved algorithms is superior to that of traditional link prediction algorithms.
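
    The flavor of such local-information indices can be shown in a few lines. This sketch scores non-adjacent pairs by their raw number of common neighbours; CNGF-style indices refine this with degree weighting, and KatzGF replaces it with a global path-counting (Katz) measure. The toy graph is illustrative.

```python
import itertools
from collections import defaultdict

def common_neighbour_scores(edges):
    """Local-information link prediction sketch: rank every non-adjacent
    pair by its number of common neighbours."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    scores = {}
    for u, v in itertools.combinations(sorted(adj), 2):
        if v not in adj[u]:                  # only score missing links
            scores[(u, v)] = len(adj[u] & adj[v])
    return sorted(scores.items(), key=lambda kv: -kv[1])

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("b", "d")]
print(common_neighbour_scores(edges))        # ("a", "d") shares b and c
```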

  9. MONICA - an on-line track following microprocessor in high energy physics experiments

    International Nuclear Information System (INIS)

    Wermes, N.; Schildt, P.; Stuckenberg, H.J.

    1980-02-01

    In the storage ring experiments at the PETRA accelerator, large cylindrical detectors with thousands of channels are used. The maximum event rate is 500 000 per second, i.e. effective preprocessing is required. In the TASSO detector this is achieved in two steps: a fast trigger system decides within 1 microsecond whether an event is useful or not, and a positive decision starts an on-line track-following ECL computer that uses fast associative memories and table look-up. This microprogrammed on-line track analyzer, called MONICA, follows up to 10 tracks within 1 millisecond and calculates their coordinates in the R,phi-plane. CAMAC equipment is used for the input of raw data and the output of the calculated track coordinates; the speed of the system is ensured by the ECL computer. An outline of the algorithm used and the features of MONICA are given. As far as we know, MONICA is the first running on-line track-following freely programmable microprocessor used in storage ring experiments. (orig.)

  10. Evaluation on correction factor for in-line X-ray phase contrast computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Mingli; Huang, Zhifeng; Zhang, Li; Zhang, Ran [Tsinghua Univ., Beijing (China). Dept. of Engineering Physics; Ministry of Education, Beijing (China). Key Laboratory of Particle and Radiation Imaging; Yin, Hongxia; Liu, Yunfu; Wang, Zhenchang [Capital Medical Univ., Beijing (China). Medical Imaging Center; Xiao, Tiqiao [Chinese Academy of Sciences, Shanghai (China). Shanghai Inst. of Applied Physics

    2011-07-01

    X-ray in-line phase contrast computed tomography (CT) is an effective nondestructive tool, providing the 3D distribution of the refractive index of weakly absorbing low-Z objects with high resolution and image contrast, especially with high-brilliance third-generation synchrotron radiation sources. The modified Bronnikov algorithm (MBA), one of the in-line phase contrast CT reconstruction algorithms, can reconstruct the refractive index distribution of a pure phase object from a single computed tomographic data set. The key idea of the MBA is to use a correction factor in the filter function to stabilize the behavior at low frequencies. In this paper, we evaluate the influence of the correction factor on the final reconstruction results for absorption-phase-mixed objects with analytical simulations and actual experiments. Finally, the limitations of the MBA are discussed. (orig.)

  11. Space mapping optimization algorithms for engineering design

    DEFF Research Database (Denmark)

    Koziel, Slawomir; Bandler, John W.; Madsen, Kaj

    2006-01-01

    A simple, efficient optimization algorithm based on space mapping (SM) is presented. It utilizes input SM to reduce the misalignment between the coarse and fine models of the optimized object over a region of interest, and output space mapping (OSM) to ensure matching of response and first...... to a benchmark problem. In comparison with SMIS, the models presented are simple and have a small number of parameters that need to be extracted. The new algorithm is applied to the optimization of coupled-line band-pass filter....

  12. The development of a 3D mesoscopic model of metallic foam based on an improved watershed algorithm

    Science.gov (United States)

    Zhang, Jinhua; Zhang, Yadong; Wang, Guikun; Fang, Qin

    2018-06-01

    The watershed algorithm has been widely used in x-ray computed tomography (XCT) image segmentation. It provides a transformation defined on a grayscale image and finds the lines that separate adjacent regions. However, distortion occurs when developing a mesoscopic model of metallic foam based on XCT image data: cells are oversegmented in some cases when the traditional watershed algorithm is used. The improved watershed algorithm presented in this paper avoids oversegmentation and is composed of three steps. Firstly, it finds all of the connected cells and identifies the junctions of the corresponding cell walls. Secondly, image segmentation is conducted to separate the adjacent cells, generating the lost cell walls between adjacent cells; optimization is then performed on the segmented image. Thirdly, the improved algorithm is validated by comparison with the image of the metallic foam, which shows that it avoids image segmentation distortion. A mesoscopic model of metallic foam is thus formed based on the improved algorithm, and the mesoscopic characteristics of the metallic foam, such as cell size, volume and shape, are identified and analyzed.

  13. A novel, optical, on-line bacteria sensor for monitoring drinking water quality.

    Science.gov (United States)

    Højris, Bo; Christensen, Sarah Christine Boesgaard; Albrechtsen, Hans-Jørgen; Smith, Christian; Dahlqvist, Mathis

    2016-04-04

    Today, microbial drinking water quality is monitored through either time-consuming laboratory methods or indirect on-line measurements. Results are thus either delayed or insufficient to support proactive action. A novel, optical, on-line bacteria sensor with a 10-minute time resolution has been developed. The sensor is based on 3D image recognition, and the obtained pictures are analyzed with algorithms considering 59 quantified image parameters. The sensor counts individual suspended particles and classifies them as either bacteria or abiotic particles. The technology is capable of distinguishing and quantifying bacteria and particles in pure and mixed suspensions, and the quantification correlates with total bacterial counts. Several field applications have demonstrated that the technology can monitor changes in the concentration of bacteria, and is thus well suited for rapid detection of critical conditions such as pollution events in drinking water.

  14. Quantum walks and search algorithms

    CERN Document Server

    Portugal, Renato

    2013-01-01

    This book addresses an interesting area of quantum computation called quantum walks, which play an important role in building quantum algorithms, in particular search algorithms. Quantum walks are the quantum analogue of classical random walks. It is known that quantum computers have great power for searching unsorted databases. This power extends to many kinds of searches, particularly to the problem of finding a specific location in a spatial layout, which can be modeled by a graph. The goal is to find a specific node knowing that the particle uses the edges to jump from one node to the next. This book is self-contained, with main topics that include: Grover's algorithm, describing its geometrical interpretation and evolution by means of the spectral decomposition of the evolution operator; analytical solutions of quantum walks on important graphs like the line, cycles, two-dimensional lattices, and hypercubes using Fourier transforms; and quantum walks on generic graphs, describing methods to calculate the limiting distribution …

  15. 3rd Computer Science On-line Conference

    CERN Document Server

    Senkerik, Roman; Oplatkova, Zuzana; Silhavy, Petr; Prokopova, Zdenka

    2014-01-01

    This book is based on the research papers presented at the 3rd Computer Science On-line Conference 2014 (CSOC 2014). The conference is intended to provide an international forum for discussion of the latest high-quality research results in all areas related to Computer Science. The topics addressed are the theoretical aspects and applications of Artificial Intelligence, Computer Science, Informatics and Software Engineering. The authors provide new approaches and methods for real-world problems and, in particular, exploratory research that describes novel approaches in their fields. Particular emphasis is laid on modern trends in selected fields of interest, and new algorithms and methods in a variety of fields are also presented. This book is divided into three sections and covers topics including Artificial Intelligence, Computer Science and Software Engineering. Each section consists of new theoretical contributions and applications which can be used for the further development of the knowledge of everybody …

  16. Capacitive Coupling in Double-Circuit Transmission Lines

    Directory of Open Access Journals (Sweden)

    Zdenka Benesova

    2004-01-01

    The paper describes an algorithm for the calculation of capacitances and charges on conductors in systems with earth wires and in double-circuit overhead lines with respect to phase arrangement. A balanced voltage system is considered. A suitable transposition of the individual conductors makes it possible to reduce the electric and magnetic fields in the vicinity of overhead lines and to limit the inductive and capacitive coupling. The procedure is illustrated with examples whose results lead to particular recommendations for designers.

  17. Parallel algorithm for determining motion vectors in ice floe images by matching edge features

    Science.gov (United States)

    Manohar, M.; Ramapriyan, H. K.; Strong, J. P.

    1988-01-01

    A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic Ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on board the SEASAT spacecraft. The algorithm, implemented on the MPP, locates corresponding objects based on their translationally and rotationally invariant features. It first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed such that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.

  18. Routing algorithms in networks-on-chip

    CERN Document Server

    Daneshtalab, Masoud

    2014-01-01

    This book provides a single-source reference to routing algorithms for Networks-on-Chip (NoCs), as well as in-depth discussions of advanced solutions applied to current and next-generation, many-core NoC-based Systems-on-Chip (SoCs). After a basic introduction to the NoC design paradigm and architectures, routing algorithms for NoC architectures are presented and discussed at all abstraction levels, from the algorithmic level to actual implementation. Coverage emphasizes the role played by the routing algorithm and is organized around key problems affecting current and next-generation, many-core SoCs. A selection of routing algorithms is included, specifically designed to address key issues faced by designers in the ultra-deep sub-micron (UDSM) era, including performance improvement; power, energy, and thermal issues; and fault tolerance and reliability. The book provides a comprehensive overview of routing algorithms for Networks-on-Chip and NoC-based, many-core systems, and describes …

  19. Basic Test Framework for the Evaluation of Text Line Segmentation and Text Parameter Extraction

    Directory of Open Access Journals (Sweden)

    Darko Brodić

    2010-05-01

    Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems. It is a key stage because inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting; hence, it is a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, some basic set of measurement methods is required. Currently, there is no commonly accepted one, and all algorithm evaluation is custom oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. This test framework consists of a few experiments primarily linked to text line segmentation, skew rate and reference text line evaluation. Although they are mutually independent, the results obtained are strongly cross-linked. In the end, its suitability for different types of letters and languages as well as its adaptability are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms.

  20. Novel image reconstruction algorithm for multi-phase flow tomography system using γ ray method

    International Nuclear Information System (INIS)

    Hao Kuihong; Wang Huaxiang; Gao Mei

    2007-01-01

    After analyzing why the conventional back projection (IBP) algorithm is prone to producing spurious lines in reconstructed images, and considering the characteristics of multi-phase flow tomography, a novel image reconstruction algorithm is proposed which carries out an intersection calculation using back projection data. This algorithm can obtain a good system point spread function and eliminates spurious lines more effectively. Simulation results show that the algorithm is effective for identifying multi-phase flow patterns. (authors)

  1. The track finding algorithm of the Belle II vertex detectors

    Directory of Open Access Journals (Sweden)

    Bilka Tadeas

    2017-01-01

    The Belle II experiment is a high-energy multi-purpose particle detector operated at the asymmetric e+e− collider SuperKEKB in Tsukuba (Japan). In this work we describe the algorithm performing the pattern recognition for the inner tracking detector, which consists of two layers of pixel detectors and four layers of double-sided silicon strip detectors arranged around the interaction region. The track finding algorithm will be used both during the High Level Trigger on-line track reconstruction and during the off-line full reconstruction. It must provide good efficiency down to momenta as low as 50 MeV/c, where material effects are sizeable even in an extremely thin detector such as the VXD. In addition, it has to be able to cope with the high occupancy of the Belle II detectors due to background. The underlying concept of the track finding algorithm, as well as details of the implementation, are outlined. The algorithm is shown to run with good performance on simulated ϒ(4S) → BB̄ events, with an efficiency for reconstructing tracks of above 90% over a wide range of momenta.

  2. ON CONSTRUCTION OF A RELIABLE GROUND TRUTH FOR EVALUATION OF VISUAL SLAM ALGORITHMS

    Directory of Open Access Journals (Sweden)

    Jan Bayer

    2016-11-01

    In this work we address the problem of evaluating the localization accuracy of visual Simultaneous Localization and Mapping (SLAM) techniques. Quantitative evaluation of SLAM algorithm performance is usually done using the established metrics of relative pose error and absolute trajectory error, which require a precise and reliable ground truth. Such a ground truth is usually hard to obtain, since it requires an expensive external localization system. In this work we propose to use the SLAM algorithm itself to construct a reliable ground truth by off-line frame-by-frame processing. The generated ground truth is suitable for the evaluation of different SLAM systems, as well as for tuning the parametrization of on-line SLAM. The presented practical experimental results indicate the feasibility of the proposed approach.

  3. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.

  4. Extended-Maxima Transform Watershed Segmentation Algorithm for Touching Corn Kernels

    Directory of Open Access Journals (Sweden)

    Yibo Qin

    2013-01-01

    Touching corn kernels are usually oversegmented by the traditional watershed algorithm. This paper proposes a modified watershed segmentation algorithm based on the extended-maxima transform. Firstly, a distance-transformed image is processed by the extended-maxima transform in the range of the optimized threshold value. Secondly, the binary image obtained by the preceding process is run through the watershed segmentation algorithm, and the watershed ridge lines are superimposed on the original image, so that touching corn kernels are separated into segments. Fifty images containing 400 corn kernels were tested. Experimental results showed that the segmentation produced by the improved algorithm is satisfactory, with an accuracy as high as 99.87%.
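
    A minimal marker-controlled version of the same idea, sketched with scikit-image (an assumption of this example, not the paper's implementation): shallow maxima of the distance transform are suppressed with the h-maxima (extended-maxima) transform so each kernel contributes one marker. The h value is illustrative and plays the role of the optimized threshold.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.measure import label
from skimage.morphology import h_maxima
from skimage.segmentation import watershed

def split_touching(binary, h=2.0):
    """Marker-controlled watershed sketch: one marker per deep maximum of
    the distance transform, then flood from those markers."""
    dist = ndi.distance_transform_edt(binary)
    markers = label(h_maxima(dist, h))       # extended-maxima step
    return watershed(-dist, markers, mask=binary)

# Two overlapping discs standing in for a pair of touching kernels:
yy, xx = np.mgrid[:80, :120]
blob = ((yy - 40) ** 2 + (xx - 45) ** 2 < 400) | ((yy - 40) ** 2 + (xx - 75) ** 2 < 400)
print(split_touching(blob).max())            # expected: 2 segments
```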

  5. Optimization of an on-line function generation

    CERN Document Server

    Versteele, C

    1977-01-01

    A particular example of process control is the analog function generator of the CERN proton synchrotron. For magnet field correction, some magnets have to follow reference currents synchronous with the pulses of the synchrotron. Depending on the dynamic behaviour of the magnet system to be controlled, the output current will show a somewhat distorted image of the reference current used as input. An on-line computer strategy has been designed to compensate the distortions: i.e. to adjust the input function at successive pulses in such a way that the output current follows the reference as closely as required. However, the identification of such a system is not carried out because it is difficult and singular in any case. Modifications cannot be computed directly, but will result from an iterative strategy: the closed-loop adaptive control is based on a maximization procedure, requiring little memory and no gradient information. The algorithm is a variant of the optimization method called extended sequential se...

  6. Common lines modeling for reference free Ab-initio reconstruction in cryo-EM.

    Science.gov (United States)

    Greenberg, Ido; Shkolnisky, Yoel

    2017-11-01

    We consider the problem of estimating an unbiased and reference-free ab initio model for non-symmetric molecules from images generated by single-particle cryo-electron microscopy. The proposed algorithm finds the globally optimal assignment of orientations that simultaneously respects all common lines between all images. The contribution of each common line to the estimated orientations is weighted according to a statistical model for common-line detection errors. The key property of the proposed algorithm is that it finds the global optimum for the orientations given the common lines; in particular, any local optima in the common-lines energy landscape do not affect it. As a result, it is applicable to thousands of images at once, very robust to noise, completely reference free, and not biased towards any initial model. A byproduct of the algorithm is a set of measures that allow assessing the reliability of the obtained ab initio model. We demonstrate the algorithm using class averages from two experimental data sets, resulting in ab initio models with resolutions of 20Å or better, even from class averages consisting of as few as three raw images per class.

  7. A modified backpropagation algorithm for training neural networks on data with error bars

    International Nuclear Information System (INIS)

    Gernoth, K.A.; Clark, J.W.

    1994-08-01

    A method is proposed for training multilayer feedforward neural networks on data contaminated with noise. Specifically, we consider the case where the artificial neural system is required to learn a physical mapping when the available values of the target variable are subject to experimental uncertainties, but are characterized by error bars. The proposed method, based on the maximum likelihood criterion for parameter estimation, involves simple modifications of the on-line backpropagation learning algorithm. These include incorporation of the error-bar assignments in a pattern-specific learning rate, together with epochal updating of a new measure of model accuracy that replaces the usual mean-square error. The extended backpropagation algorithm is successfully tested on two problems relevant to the modelling of atomic-mass systematics by neural networks. Provided the underlying mapping is reasonably smooth, neural nets trained with the new procedure are able to learn the true function to a good approximation even in the presence of high levels of Gaussian noise. (author)
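
    The weighting idea can be illustrated with a single linear layer (an illustrative simplification; the paper works with a multilayer perceptron trained by on-line backpropagation): each pattern enters the maximum-likelihood loss with weight 1/sigma^2, so targets with large error bars pull on the weights less.

```python
import numpy as np

def fit_weighted(X, y, sigma, lr=0.05, epochs=4000, seed=0):
    """Gradient descent on a 1/sigma^2-weighted squared error (the Gaussian
    negative log-likelihood up to constants). Single linear layer only."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    pw = 1.0 / np.asarray(sigma) ** 2
    pw = pw / pw.mean()                      # relative pattern weights
    for _ in range(epochs):
        r = X @ w - y                        # residuals
        w -= lr * X.T @ (pw * r) / len(y)    # weighted gradient step
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
sigma = np.full(50, 0.1)
sigma[:5] = 5.0                              # five very uncertain targets
y = X @ np.array([1.0, -2.0]) + sigma * rng.normal(size=50)
print(fit_weighted(X, y, sigma))             # close to [1, -2]
```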

  8. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their flavors, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivative evaluations we use the Landweber-Kaczmarz iteration and, in order to improve the overall results, some additional sparsity constraints.
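
    For intuition, here is a Landweber iteration sketch on a plain linear model y = A x; the article's operator is non-linear and the full method cycles Kaczmarz-style over coil equations and adds sparsity constraints, all of which are omitted in this illustration.

```python
import numpy as np

def landweber(A, y, steps=500, omega=None):
    """Landweber iteration sketch for a linear model y = A x:
    x <- x + omega * A^T (y - A x). It converges for
    0 < omega < 2 / ||A||^2, and early stopping acts as regularization."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x += omega * A.T @ (y - A @ x)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
x_true = rng.normal(size=10)
y = A @ x_true + 0.01 * rng.normal(size=30)
print(np.round(landweber(A, y) - x_true, 2))  # residual error is small
```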

  9. Transputer networks for the on-line analysis of fine-grained electromagnetic calorimeter data

    International Nuclear Information System (INIS)

    Girotto, G.L.; Lanceri, L.; Scuri, F.; Zoppolato, E.

    1994-01-01

    Transputer networks, designed to perform parallel computations, are well suited for data acquisition, on-line analysis and second-level trigger tasks in high energy physics experiments. Some simple algorithms for the analysis of fine-grained electromagnetic calorimeter data were implemented on two types of transputer networks and tested on real and simulated data from a silicon-tungsten calorimeter. Results are presented on the processing speed, measured in a test setup, and extrapolations to a full-size detector and data acquisition system are discussed. (orig.)

  10. An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2013-01-01

    A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to decide the weights in a backpropagation neural network (BPN), giving it better global optimization characteristics than traditional algorithms. In this paper, we use GA-BPN for image noise filtering. Firstly, training samples are used to train the GA-BPN as a noise detector. Then, the well-trained GA-BPN is used to recognize noise pixels in the target image. Finally, an adaptive weighted-average algorithm is used to recover the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm performs better than other filters.

  11. Fault Identification Algorithm Based on Zone-Division Wide Area Protection System

    Directory of Open Access Journals (Sweden)

    Xiaojun Liu

    2014-04-01

    As the power grid becomes larger and more complicated, wide-area protection systems in practical engineering applications are increasingly restricted by the communication level. Based on the concept of the limitedness of wide-area protection systems, a grid with complex structure is divided into ordered zones in this paper, and fault identification and protection actions are executed within each zone to reduce the pressure on the communication system. Within a protection zone, a new wide-area protection algorithm based on the positive-sequence fault component directional comparison principle is proposed. Special associated intelligent electronic device (IED) zones containing buses and transmission lines are created according to the installation locations of the IEDs. When a fault occurs, with the help of fault information collected and shared from associated zones and the fault discrimination principle defined in this paper, the IEDs can identify the fault location and remove the fault according to a predetermined action strategy. The algorithm is not affected by load changes or transition resistance, and it also adapts well to open-phase operation of the power system. It can serve as main protection, and it can also be taken into account for the back-up protection function. The case study results show that the division method of the wide-area protection system and the proposed algorithm are effective.

  12. Study of electric and magnetic fields on transmission lines using a computer simulation program

    International Nuclear Information System (INIS)

    Robelo Mojica, Nelson

    2011-01-01

    A study was conducted to determine and reduce the levels of electric and magnetic fields for different configurations used by the Instituto Costarricense de Electricidad in power transmission lines in Costa Rica. The computer simulation program PLS-CADD with the EPRI algorithm was used to obtain field values close to the actual values along the line easements in service to date. Different configurations were compared on equal terms and the lowest levels of electric and magnetic fields were determined. The most appropriate tower configuration was thereby obtained, reducing people's exposure to electromagnetic fields without affecting the energy demand of the population. (author)

  13. Reinforcement Learning for Online Control of Evolutionary Algorithms

    NARCIS (Netherlands)

    Eiben, A.; Horvath, Mark; Kowalczyk, Wojtek; Schut, Martijn

    2007-01-01

    The research reported in this paper is concerned with assessing the usefulness of reinforcement learning (RL) for on-line calibration of parameters in evolutionary algorithms (EA). We run an RL procedure and the EA simultaneously, with the RL changing the EA parameters on the fly.

  14. Image matching for digital close-range stereo photogrammetry based on constraints of Delaunay triangulated network and epipolar-line

    Science.gov (United States)

    Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.

    2006-10-01

    In the fields of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching," is the critical step in realizing automatic surveying and recognition. Traditional matching methods encounter problems in digital close-range stereo photogrammetry, because changes in gray scale or texture are not obvious in close-range stereo images. The main shortcoming of traditional matching methods is that the geometric information of matching points is not fully used, which leads to wrong matching results in regions with poor texture. To fully use the geometric and gray-scale information, a new stereo image matching algorithm is proposed in this paper that considers the characteristics of digital close-range photogrammetry. Compared with traditional matching methods, the new algorithm makes three improvements. Firstly, a shape factor, fuzzy mathematics and gray-scale projection are introduced into the design of a composite matching measure. Secondly, the topological connection relations of matching points in a Delaunay triangulated network and the epipolar line are used to decide the matching order and to narrow the search scope for the conjugate point of each matching point. Lastly, the theory of parameter adjustment with constraints is introduced into least-squares image matching to carry out subpixel-level matching under the epipolar-line constraint. The new algorithm was applied to actual stereo images of a building taken by a digital close-range photogrammetric system. The experimental results show that the algorithm has a higher matching speed and accuracy than a pyramid image matching algorithm based on gray-scale correlation.

  15. An on-line gas control system using an artificial intelligence language: PROLOG II

    International Nuclear Information System (INIS)

    Lai, C.

    1990-01-01

    An application of Artificial Intelligence to a real physics experiment is presented, allowing comparison with classical programming techniques. The PROLOG language appears to be a convenient on-line language, easily interfaced to the low-level service routines, for which algorithmic languages can still be used. Steering modules have been written for a gas acquisition and analysis program, and for a control system with a graphical human interface. This system includes safety rules and automatic action sequences.

  16. The chaotic global best artificial bee colony algorithm for the multi-area economic/emission dispatch

    International Nuclear Information System (INIS)

    Secui, Dinu Calin

    2015-01-01

    This paper suggests a chaotic optimization method based on the GBABC (global best artificial bee colony) algorithm, in which the random sequences used in updating the solutions are replaced with chaotic sequences generated by chaotic maps. The new algorithm, called CGBABC (chaotic global best artificial bee colony), is used to solve the multi-area economic/emission dispatch problem taking into consideration valve-point effects, transmission line losses, multi-fuel sources, prohibited operating zones, tie-line capacity and the power transfer cost between different areas of the system. The behaviour of the CGBABC algorithm is studied for ten chaotic maps, both one-dimensional and two-dimensional, with various probability density functions. The performance of the CGBABC algorithm with the various chaotic maps is tested on five systems (6-unit, 10-unit, 16-unit, 40-unit and 120-unit) with different characteristics, constraints and sizes. The comparison of results highlights a hierarchy among the chaotic maps included in the CGBABC algorithm and shows that it performs better than the classical ABC algorithm, the GBABC algorithm and other optimization techniques. - Highlights: • A chaotic global best ABC algorithm (CGBABC) is presented. • CGBABC is applied to solving the multi-area economic/emission dispatch problem. • Valve-point effects, multi-fuel sources, POZ and transmission losses are considered. • The algorithm is tested on five systems having 6, 10, 16, 40 and 120 thermal units. • The CGBABC algorithm outperforms several optimization techniques.
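
    The core substitution is easy to show: a chaotic map generates the numbers that would otherwise come from a uniform random generator in the solution-update equation. The logistic map below is one map of the kind evaluated in the paper; the seed and the usage shown are illustrative.

```python
import numpy as np

def logistic_sequence(n, x0=0.7, mu=4.0):
    """Chaotic stand-in for uniform random numbers: iterate the logistic
    map x <- mu * x * (1 - x) with mu = 4 and an illustrative seed."""
    x, seq = x0, np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)
        seq[i] = x
    return seq

# Drop-in use inside an ABC-style update v = x + phi * (x - x_partner),
# where phi is normally drawn uniformly from [-1, 1]:
phi = 2.0 * logistic_sequence(5) - 1.0
print(phi)
```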

  17. Effects of defect pixel correction algorithms for x-ray detectors on image quality in planar projection and volumetric CT data sets

    International Nuclear Information System (INIS)

    Kuttig, Jan; Steiding, Christian; Hupfer, Martin; Karolczak, Marek; Kolditz, Daniel

    2015-01-01

    In this study we compared various defect pixel correction methods for reducing artifact appearance within projection images used for computed tomography (CT) reconstructions. Defect pixel correction algorithms were examined with respect to their artifact behaviour within planar projection images as well as in volumetric CT reconstructions. We investigated four algorithms: nearest neighbour, linear and adaptive linear interpolation, and a frequency-selective spectral-domain approach. To characterise the quality of each algorithm on planar image data, we inserted line defects of varying widths and orientations into images. The structure preservation of each algorithm was analysed by corrupting and correcting the image of a slit phantom pattern and by evaluating its line spread function (LSF). Noise preservation was assessed by interpolating corrupted flat images and estimating the noise power spectrum (NPS) of the interpolated region. For the volumetric investigations, we examined structure and noise preservation within a structured aluminium foam, a mid-contrast cone-beam phantom and a homogeneous polyurethane (PUR) cylinder. The frequency-selective algorithm showed the best structure and noise preservation on planar data among the correction methods tested. For volumetric data it still showed the best noise preservation, whereas its structure preservation was outperformed by linear interpolation. The frequency-selective spectral-domain approach is recommended for the correction of line defects in planar image data, but its abilities within high-contrast volumes are restricted; in that case, the application of a simple linear interpolation might be the better choice for correcting line defects within projection images used for CT. (paper)

  18. Multidirectional Scanning Model, MUSCLE, to Vectorize Raster Images with Straight Lines

    Directory of Open Access Journals (Sweden)

    Ibrahim Baz

    2008-04-01

    This paper presents a new model, MUSCLE (Multidirectional Scanning for Line Extraction), for the automatic vectorization of raster images with straight lines. The algorithm of the model implements line thinning and simple neighborhood methods to perform vectorization. The model allows users to define the specific criteria which are crucial for the vectorization process. With this model, various raster images can be vectorized, such as township plans, maps, architectural drawings, and machine plans. The algorithm was developed in an appropriate computer programming language and tested on a basic application. Results, verified by using two well-known vectorization programs (WinTopo and Scan2CAD), indicated that the model can vectorize the specified raster data quickly and accurately.

  19. An accelerated line-by-line option for MODTRAN combining on-the-fly generation of line center absorption within 0.1 cm-1 bins and pre-computed line tails

    Science.gov (United States)

    Berk, Alexander; Conforti, Patrick; Hawes, Fred

    2015-05-01

    A Line-By-Line (LBL) option is being developed for MODTRAN6. The motivation for this development is two-fold. Firstly, when MODTRAN is validated against an independent LBL model, it is difficult to isolate the source of discrepancies: one must verify consistency between pressure, temperature and density profiles, between column density calculations, between continuum and particulate data, between spectral convolution methods, and more. Introducing an LBL option directly within MODTRAN will ensure common elements for all calculations other than those used to compute molecular transmittances. The second motivation for the LBL upgrade is that it will enable users to compute high spectral resolution transmittances and radiances for the full range of current MODTRAN applications. In particular, introducing the LBL feature into MODTRAN will enable first-principles calculations of scattered radiances, an option that is often not readily available with LBL models. MODTRAN will compute LBL transmittances within one 0.1 cm-1 spectral bin at a time, marching through the full requested band pass. The LBL algorithm will use the highly accurate, pressure- and temperature-dependent MODTRAN Padé approximant fits of the contribution from line tails to define the absorption from all molecular transitions centered more than 0.05 cm-1 from each 0.1 cm-1 spectral bin. The beauty of this approach is that the on-the-fly computations for each 0.1 cm-1 bin will only require explicit LBL summing of transitions centered within a 0.2 cm-1 spectral region; the contribution from the more distant lines will be pre-computed via the Padé approximants. The status of the LBL effort will be presented, including initial thermal and solar radiance calculations, validation calculations, and self-validations of the MODTRAN band model against its own LBL calculations.

  20. YF22 Model With On-Board On-Line Learning Microprocessors-Based Neural Algorithms for Autopilot and Fault-Tolerant Flight Control Systems

    National Research Council Canada - National Science Library

    Napolitano, Marcello

    2002-01-01

    This project focused on investigating the potential of on-line learning 'hardware-based' neural approximators and controllers to provide fault tolerance capabilities following sensor and actuator failures...

  1. Visual identification and similarity measures used for on-line motion planning of autonomous robots in unknown environments

    Science.gov (United States)

    Martínez, Fredy; Martínez, Fernando; Jacinto, Edwar

    2017-02-01

    In this paper we propose an on-line motion planning strategy for autonomous robots in dynamic and locally observable environments. In this approach, we first visually identify geometric shapes in the environment by filtering images. Then, an ART-2 network is used to establish the similarity between patterns. The proposed algorithm allows a robot to establish its relative location in the environment and to define its navigation path based on images of the environment and their similarity to reference images. This is an efficient and minimalist method that uses the similarity of landmark view patterns to navigate to the desired destination. Laboratory tests on real prototypes demonstrate the performance of the algorithm.

  2. Optimal Energy Management, Location and Size for Stationary Energy Storage System in a Metro Line Based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Huan Xia

    2015-10-01

    The installation of stationary super-capacitor energy storage systems (ESSes) in metro systems can recycle vehicle braking energy and improve the pantograph voltage profile. This paper aims to optimize the energy management, location, and size of stationary super-capacitor ESSes simultaneously and to obtain the best economic efficiency and voltage profile of metro systems. Firstly, a simulation platform of an urban rail power supply system, which includes trains and super-capacitor energy storage systems, is established. Then, two evaluation functions, from the perspectives of economic efficiency and voltage drop compensation, are put forward. Finally, a novel optimization method that combines genetic algorithms with the simulation platform is proposed, which can obtain the best energy management strategy, location, and size for the ESSes simultaneously. With the actual parameters of a Chinese metro line applied in a simulation comparison, the optimal scheme of ESS energy management strategy, location, and size obtained by the method achieves much better performance of the metro system with respect to the two evaluation functions. The simulation results show that, as the weight coefficient increases, the optimal energy management strategy, locations and sizes of the ESSes exhibit certain regularities, and the best compromise between economic efficiency and voltage drop compensation can be obtained by the proposed method, which provides a valuable reference for subway companies.

  3. Research on personalized recommendation algorithm based on spark

    Science.gov (United States)

    Li, Zeng; Liu, Yu

    2018-04-01

    With the increasing amount of data in recent years, traditional recommendation algorithms have been unable to meet people's needs. How to better recommend products of interest to users has therefore become both an opportunity and a challenge in the era of big data. At present, each platform enterprise has its own recommendation algorithm, but how to push information efficiently and accurately is still an urgent problem for personalized recommendation systems. In this paper, a hybrid algorithm combining user-based collaborative filtering and a content-based recommendation algorithm is proposed on Spark to improve the efficiency and accuracy of recommendation by weighted processing. Experiments show that recommendation under this scheme is more efficient and accurate.
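
    The weighted blending step can be sketched independently of Spark (plain Python here for brevity; the paper's implementation runs on Spark). The weight alpha and the score dictionaries are illustrative assumptions.

```python
def hybrid_scores(cf_scores, content_scores, alpha=0.6):
    """Weighted-hybrid sketch: blend collaborative-filtering scores with
    content-based scores per candidate item. The weight alpha is an
    illustrative tuning knob, not a value from the paper."""
    items = set(cf_scores) | set(content_scores)
    return {i: alpha * cf_scores.get(i, 0.0)
               + (1.0 - alpha) * content_scores.get(i, 0.0)
            for i in items}

cf = {"item1": 0.9, "item2": 0.4}
content = {"item2": 0.8, "item3": 0.7}
ranked = sorted(hybrid_scores(cf, content).items(), key=lambda kv: -kv[1])
print(ranked)                                # item2 benefits from both sources
```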

  4. A Bilevel Programming Model to Optimize Train Operation Based on Satisfaction for an Intercity Rail Line

    Directory of Open Access Journals (Sweden)

    Zhipeng Huang

    2014-01-01

    Passenger travel demands for intercity rail lines fluctuate markedly during different time periods, which makes it impossible for rail departments to establish an even train operation scheme. This paper considers an optimization problem for train operations that responds to the passenger travel demands of different periods on intercity rail lines. A satisfaction function for passenger travel is proposed by analyzing passengers' travel choice behavior and the relevant influencing factors. On this basis, the paper formulates a bilevel programming model that maximizes the interests of railway enterprises and the travel satisfaction of each passenger. Train operations in different periods are optimized through the upper-level problem of the model, while the passenger flow distribution problem, based on the Wardrop user equilibrium principle, is considered in the lower-level problem. A genetic algorithm is designed according to the model's features for solving the upper level, and the Frank-Wolfe algorithm is used for solving the lower level. Finally, a numerical example demonstrates the application of the proposed method.

  5. On-line efficiency optimization of a synchronous reluctance motor

    Energy Technology Data Exchange (ETDEWEB)

    Lubin, Thierry; Razik, Hubert; Rezzoug, Abderrezak [Groupe de Recherche en Electrotechnique et Electronique de Nancy, GREEN, CNRS-UMR 7037, Universite Henri Poincare, BP 239, 54506 Vandoeuvre-les-Nancy Cedex (France)

    2007-04-15

    This paper deals with on-line optimum-efficiency control of a synchronous reluctance motor drive. Input power minimization control is implemented with a search controller using the Fibonacci search algorithm. It searches for the optimal reference value of the d-axis stator current for which the input power is minimal. The input power is calculated from the measured dc-bus current and dc-bus voltage of the inverter. Rotor-oriented vector control of the synchronous reluctance machine with the efficiency optimization controller is achieved with a DSP board (TMS320C31). Experimental results are presented to validate the proposed control methods. It is shown that stability problems can appear during the search process. (author)
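
    A derivative-free Fibonacci search over the d-axis current reference might look as follows; the probe count, interval, and the toy "input power" curve are illustrative stand-ins for the measured dc-bus power, not the authors' controller.

```python
def fibonacci_search(f, a, b, n=20):
    """Fibonacci search sketch: locate the minimizer of a unimodal function
    on [a, b] using only function evaluations (no gradients)."""
    fib = [1, 1]
    while len(fib) < n + 2:
        fib.append(fib[-1] + fib[-2])
    x1 = a + fib[n - 1] / fib[n + 1] * (b - a)
    x2 = a + fib[n] / fib[n + 1] * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(n - 1, 1, -1):
        if f1 > f2:                          # minimum is in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + fib[k] / fib[k + 1] * (b - a)
            f2 = f(x2)
        else:                                # minimum is in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + fib[k - 1] / fib[k + 1] * (b - a)
            f1 = f(x1)
    return 0.5 * (a + b)

# Toy "input power" curve with its minimum at i_d = 2.3 A:
print(fibonacci_search(lambda i_d: (i_d - 2.3) ** 2 + 40.0, 0.0, 5.0))
```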

  6. Detection of vegetation LUE based on chlorophyll fluorescence separation algorithm from Fraunhofer line

    Science.gov (United States)

    Liu, Liangyun; Zhang, Bing

    2009-09-01

    Photosynthetic efficiency is very important, and not yet generally assessable by remote sensing. Much research has demonstrated the possibility of separating solar-induced chlorophyll fluorescence (ChlF) from reflected hyperspectral data. As a 'probe' of plant photosynthesis, the separated solar-induced ChlF makes it possible to detect photosynthetic light use efficiency (LUE). A diurnal experiment was carried out on winter wheat on Apr. 18, 2008, in which canopy radiance spectra and leaf LUE data were measured synchronously. The solar-induced chlorophyll fluorescence signals at 760 nm and 688 nm were separated from the reflected radiance spectra based on Fraunhofer lines in two oxygen absorption bands. The results showed that LUE was negatively correlated with the separated chlorophyll fluorescence signals. The statistical models for LUE based on the solar-induced chlorophyll fluorescence values in the 688 nm and 760 nm bands had correlation coefficients (R2) of 0.64 and 0.78, respectively. In addition, the photochemical reflectance index (PRI) was also linked to LUE, and a statistical model for LUE based on PRI had a correlation coefficient (R2) of 0.66. The presented method provides a novel solution for monitoring LUE from remote sensing data.

  7. Animated construction of line drawings

    KAUST Repository

    Fu, Hongbo

    2011-12-01

    Revealing the sketching sequence of a line drawing can be visually intriguing and used for video-based storytelling. Typically this is enabled by tedious recording of the artist's drawing process. We demonstrate that it is often possible to estimate a reasonable drawing order from a static line drawing with clearly defined shape geometry, which looks plausible to a human viewer. We map the key principles of drawing order from drawing cognition to computational procedures in our framework. Our system produces plausible animated constructions of input line drawings, with no or little user intervention. We test our algorithm on a range of input sketches, with varying degrees of complexity and structure, and evaluate the results via a user study. We also present applications to gesture drawing synthesis and drawing animation creation, especially in the context of video scribing.

  9. Smart Meter Data Analytics: Systems, Algorithms and Benchmarking

    DEFF Research Database (Denmark)

    Liu, Xiufeng; Golab, Lukasz; Golab, Wojciech

    2016-01-01

    Smart electricity meters have been replacing conventional meters worldwide, enabling automated collection of fine-grained (e.g., every 15 minutes or hourly) consumption data. A variety of smart meter analytics algorithms and applications have been proposed, mainly in the smart grid literature. First, we design a benchmark that includes off-line feature extraction and model building as well as a framework for on-line anomaly detection that we propose. Second, since obtaining real smart meter data is difficult due to privacy issues, we present an algorithm for generating large realistic data sets from a small seed of real data. Third, we implement the proposed benchmark using five representative platforms: a traditional numeric computing platform (Matlab), a relational DBMS with a built-in machine learning toolkit (PostgreSQL/MADlib), a main-memory column store ("System C"), and two distributed data processing platforms (Hive and Spark/Spark Streaming).

  10. Research on Airborne SAR Imaging Based on Esc Algorithm

    Science.gov (United States)

    Dong, X. T.; Yue, X. J.; Zhao, Y. H.; Han, C. M.

    2017-09-01

    Due to its ability to obtain abundant information flexibly, accurately, and quickly, airborne SAR is significant in the field of Earth observation and many other applications. Optimally the flight paths are straight lines, but in reality this is not the case, since some deviation from the ideal path is impossible to avoid. A small disturbance from the ideal line has a major effect on the signal phase, dramatically deteriorating the quality of SAR images and data. Therefore, to obtain accurate echo information and radar images, it is essential to measure and compensate for the nonlinear motion of the antenna trajectories. By compensating each flight trajectory to its reference track, the motion compensation (MOCO) method corrects the linear and quadratic phase errors caused by nonlinear antenna trajectories. Position and Orientation System (POS) data are used to acquire accurate motion attitudes and spatial positions of the antenna phase centre (APC). In this paper, the extended chirp scaling (ECS) algorithm is used to process the echo data of airborne SAR. An experiment was performed using VV-polarization raw data from a C-band airborne SAR, with quality evaluations of both compensated and uncompensated SAR images; the former always performed better than the latter. After MOCO processing, azimuth ambiguity declines, the peak side lobe ratio (PSLR) improves effectively, and the resolution of the images improves noticeably. The results show the validity and operability of the imaging process for airborne SAR.

  12. Statistical algorithm for automated signature analysis of power spectral density data

    International Nuclear Information System (INIS)

    Piety, K.R.

    1977-01-01

    A statistical algorithm has been developed and implemented on a minicomputer system for on-line surveillance applications. Power spectral density (PSD) measurements of process signals are the performance signatures that characterize the ''health'' of the monitored equipment. Statistical methods provide a quantitative basis for automating the detection of anomalous conditions. The surveillance algorithm has been tested on signals from neutron sensors, proximity probes, and accelerometers to determine its potential for monitoring nuclear reactors and rotating machinery.
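
    As background, the following minimal sketch illustrates the kind of PSD-signature test such a surveillance algorithm performs: a baseline spectrum (mean and spread per frequency bin) is learned from known-healthy data, and a new segment is flagged when too many bins deviate by more than k standard deviations. The sampling rate, window length, and thresholds here are illustrative assumptions, not the paper's values.

    ```python
    # Minimal sketch of a PSD-signature surveillance test (not the paper's
    # exact algorithm): learn a baseline spectrum from healthy data, then
    # flag a segment whose per-bin power deviates too often by > k sigma.
    import numpy as np
    from scipy.signal import welch

    FS = 1000.0  # sampling rate in Hz (assumed)

    def baseline_signature(healthy_segments, fs=FS):
        """Mean and std of the PSD over a set of healthy training segments."""
        psds = np.array([welch(s, fs=fs, nperseg=256)[1] for s in healthy_segments])
        return psds.mean(axis=0), psds.std(axis=0)

    def is_anomalous(segment, mean_psd, std_psd, k=3.0, frac=0.01, fs=FS):
        """Flag the segment if more than `frac` of bins exceed k sigma."""
        _, psd = welch(segment, fs=fs, nperseg=256)
        outliers = np.abs(psd - mean_psd) > k * (std_psd + 1e-12)
        return outliers.mean() > frac

    rng = np.random.default_rng(0)
    healthy = [rng.normal(size=4096) for _ in range(20)]
    mu, sigma = baseline_signature(healthy)
    t = np.arange(4096) / FS
    faulty = rng.normal(size=4096) + 2.0 * np.sin(2 * np.pi * 120 * t)  # new 120 Hz line
    print(is_anomalous(rng.normal(size=4096), mu, sigma))  # expected: False
    print(is_anomalous(faulty, mu, sigma))                 # expected: True
    ```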

  13. A Streaming Distance Transform Algorithm for Neighborhood-Sequence Distances

    Directory of Open Access Journals (Sweden)

    Nicolas Normand

    2014-09-01

    Full Text Available We describe an algorithm that computes a “translated” 2D neighborhood-sequence distance transform (DT) using a look-up table approach. It requires a single raster scan of the input image and produces one line of output for every line of input. The neighborhood sequence is specified either by providing one period of some integer periodic sequence or by providing the rate of appearance of neighborhoods. The full algorithm optionally derives the regular (centered) DT from the “translated” DT, providing the result image on the fly, with minimal delay, before the input image is fully processed. Its efficiency can benefit all applications that use neighborhood-sequence distances, particularly when pipelined processing architectures are involved or when the size of objects in the source image is limited.

  14. FAST (Four chamber view And Swing Technique) Echo: a Novel and Simple Algorithm to Visualize Standard Fetal Echocardiographic Planes

    Science.gov (United States)

    Yeo, Lami; Romero, Roberto; Jodicke, Cristiano; Oggè, Giovanna; Lee, Wesley; Kusanovic, Juan Pedro; Vaisbuch, Edi; Hassan, Sonia S.

    2010-01-01

    Objective To describe a novel and simple algorithm (FAST Echo: Four chamber view And Swing Technique) to visualize standard diagnostic planes of fetal echocardiography from dataset volumes obtained with spatiotemporal image correlation (STIC) and applying a new display technology (OmniView). Methods We developed an algorithm to image standard fetal echocardiographic planes by drawing four dissecting lines through the longitudinal view of the ductal arch contained in a STIC volume dataset. Three of the lines are locked to provide simultaneous visualization of targeted planes, and the fourth line (unlocked) “swings” through the ductal arch image (“swing technique”), providing an infinite number of cardiac planes in sequence. Each line generated the following plane(s): 1) Line 1: three-vessels and trachea view; 2) Line 2: five-chamber view and long axis view of the aorta (obtained by rotation of the five-chamber view on the y-axis); 3) Line 3: four-chamber view; and 4) “Swing” line: three-vessels and trachea view, five-chamber view and/or long axis view of the aorta, four-chamber view, and stomach. The algorithm was then tested in 50 normal hearts (15.3 – 40 weeks of gestation) and visualization rates for cardiac diagnostic planes were calculated. To determine if the algorithm could identify planes that departed from the normal images, we tested the algorithm in 5 cases with proven congenital heart defects. Results In normal cases, the FAST Echo algorithm (3 locked lines and rotation of the five-chamber view on the y-axis) was able to generate the intended planes (longitudinal view of the ductal arch, pulmonary artery, three-vessels and trachea view, five-chamber view, long axis view of the aorta, four-chamber view): 1) individually in 100% of cases [except for the three-vessel and trachea view, which was seen in 98% (49/50)]; and 2) simultaneously in 98% (49/50). The “swing technique” was able to generate the three-vessels and trachea view, five

  15. Four-chamber view and 'swing technique' (FAST) echo: a novel and simple algorithm to visualize standard fetal echocardiographic planes.

    Science.gov (United States)

    Yeo, L; Romero, R; Jodicke, C; Oggè, G; Lee, W; Kusanovic, J P; Vaisbuch, E; Hassan, S

    2011-04-01

    To describe a novel and simple algorithm (four-chamber view and 'swing technique' (FAST) echo) for visualization of standard diagnostic planes of fetal echocardiography from dataset volumes obtained with spatiotemporal image correlation (STIC) and applying a new display technology (OmniView). We developed an algorithm to image standard fetal echocardiographic planes by drawing four dissecting lines through the longitudinal view of the ductal arch contained in a STIC volume dataset. Three of the lines are locked to provide simultaneous visualization of targeted planes, and the fourth line (unlocked) 'swings' through the ductal arch image (swing technique), providing an infinite number of cardiac planes in sequence. Each line generates the following plane(s): (a) Line 1: three-vessels and trachea view; (b) Line 2: five-chamber view and long-axis view of the aorta (obtained by rotation of the five-chamber view on the y-axis); (c) Line 3: four-chamber view; and (d) 'swing line': three-vessels and trachea view, five-chamber view and/or long-axis view of the aorta, four-chamber view and stomach. The algorithm was then tested in 50 normal hearts in fetuses at 15.3-40 weeks' gestation and visualization rates for cardiac diagnostic planes were calculated. To determine whether the algorithm could identify planes that departed from the normal images, we tested the algorithm in five cases with proven congenital heart defects. In normal cases, the FAST echo algorithm (three locked lines and rotation of the five-chamber view on the y-axis) was able to generate the intended planes (longitudinal view of the ductal arch, pulmonary artery, three-vessels and trachea view, five-chamber view, long-axis view of the aorta, four-chamber view) individually in 100% of cases (except for the three-vessels and trachea view, which was seen in 98% (49/50)) and simultaneously in 98% (49/50). The swing technique was able to generate the three-vessels and trachea view, five-chamber view and/or long

  16. Feasibility study of the iterative x-ray phase retrieval algorithm

    International Nuclear Information System (INIS)

    Meng Fanbo; Liu Hong; Wu Xizeng

    2009-01-01

    An iterative phase retrieval algorithm was previously investigated for in-line x-ray phase imaging. Through detailed theoretical analysis and computer simulations, we now discuss the limitations, robustness, and efficiency of the algorithm. The iterative algorithm proved robust against imaging noise but sensitive to variations of several system parameters; it is also efficient in terms of calculation time. It was shown that the algorithm can be applied to phase retrieval based on one phase-contrast image and one attenuation image, or on two phase-contrast images; in both cases, the two images can be obtained either by one detector in two exposures or by two detectors in a single exposure, as in the dual-detector scheme.

  17. Research on Flow Field Perception Based on Artificial Lateral Line Sensor System

    Directory of Open Access Journals (Sweden)

    Guijie Liu

    2018-03-01

    Full Text Available In nature, the lateral line of fish is a peculiar and important organ for sensing the surrounding hydrodynamic environment, preying, escaping from predators, and schooling. In this paper, imitating the mechanism of the lateral canal neuromasts of fish, we developed an artificial lateral line system composed of micro-pressure sensors. Through hydrodynamic simulations, an optimized sensor structure was obtained, and pressure distribution models of the lateral surface were established for uniform and turbulent flow. In a corresponding underwater experiment, the validity of the numerical simulation method was verified by comparing the experimental data with the simulation results. In addition, several effective methods are proposed and validated for flow velocity estimation and attitude perception in turbulent flow, and the shape recognition of obstacles is realized by a neural network algorithm.

  18. Applying genetic algorithms for programming manufacturing cell tasks

    Directory of Open Access Journals (Sweden)

    Efredy Delgado

    2005-05-01

    Full Text Available This work aimed to develop computational intelligence for scheduling a manufacturing cell's tasks, based mainly on genetic algorithms. The manufacturing cell was modelled as a production line; the makespan was calculated using heuristics adapted from several genetic algorithm libraries implemented in C++ Builder. Several problems dealing with small, medium, and large lists of jobs and machines were solved, and the results were compared with other heuristics. The approach developed here seems promising for future research concerning the scheduling of manufacturing cell tasks involving mixed batches.
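
    To make the approach concrete, here is a hedged sketch of a permutation genetic algorithm for flow-shop makespan minimisation, in the spirit of (but not reproducing) the paper's C++-library-based implementation; the operators, population size, and the tiny job matrix are illustrative choices.

    ```python
    # Sketch: permutation GA minimising flow-shop makespan.
    import random

    def makespan(order, proc):  # proc[job][machine] = processing time
        m = len(proc[0])
        done = [0.0] * m                      # completion time per machine
        for job in order:
            for k in range(m):
                start = max(done[k], done[k - 1] if k else 0.0)
                done[k] = start + proc[job][k]
        return done[-1]

    def order_crossover(a, b):
        n = len(a)
        i, j = sorted(random.sample(range(n), 2))
        child = [None] * n
        child[i:j] = a[i:j]                   # keep a middle slice of parent a
        fill = [g for g in b if g not in child[i:j]]
        for pos, k in enumerate(list(range(i)) + list(range(j, n))):
            child[k] = fill[pos]              # fill the rest in b's order
        return child

    def ga_schedule(proc, pop_size=40, gens=200, pmut=0.2):
        n = len(proc)
        pop = [random.sample(range(n), n) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=lambda s: makespan(s, proc))
            elite, children = pop[: pop_size // 2], []
            while len(elite) + len(children) < pop_size:
                c = order_crossover(*random.sample(elite, 2))
                if random.random() < pmut:    # swap mutation
                    x, y = random.sample(range(n), 2)
                    c[x], c[y] = c[y], c[x]
                children.append(c)
            pop = elite + children
        return min(pop, key=lambda s: makespan(s, proc))

    jobs = [[4, 3, 2], [2, 5, 1], [3, 2, 4], [1, 4, 3]]  # 4 jobs x 3 machines
    best = ga_schedule(jobs)
    print(best, makespan(best, jobs))
    ```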

  19. Acoustic change detection algorithm using an FM radio

    Science.gov (United States)

    Goldman, Geoffrey H.; Wolfe, Owen

    2012-06-01

    The U.S. Army is interested in developing low-cost, low-power, non-line-of-sight sensors for monitoring human activity. One modality that is often overlooked is active acoustics using sources of opportunity such as speech or music. Active acoustics can be used to detect human activity by generating acoustic images of an area at different times and then testing for changes among the imagery. A change detection algorithm was developed to detect physical changes in a building, such as a door changing position or a large box being moved, using acoustic sources of opportunity. The algorithm is based on cross-correlating the acoustic signals measured by two microphones. The performance of the algorithm was demonstrated using data generated with a hand-held FM radio as a sound source and two microphones; the algorithm could detect a door being opened in a hallway.

  20. On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.

    2017-12-01

    The security of the widely used RSA public-key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been studied intensively and extensively in the field of number theory, and deterministic algorithms such as Euler's algorithm, Kraitchik's algorithm, and variants of Pollard's algorithms have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theory, we attempt to factorize the RSA modulus n using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time for RSA moduli of different lengths is recorded and compared with that of Pollard's rho algorithm, which is deterministic. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate for factorizing smaller RSA moduli, its factorization speed is much slower than that of Pollard's rho algorithm.
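
    For reference, the deterministic baseline used in the comparison can be written compactly. The following Pollard's rho sketch (Floyd cycle detection with f(x) = x² + c mod n) factors a toy modulus; the hill-climbing variant studied in the paper is not reproduced here.

    ```python
    # Pollard's rho with Floyd cycle detection; retries with a new
    # polynomial constant c whenever the gcd collapses to n.
    from math import gcd
    import random

    def pollards_rho(n):
        if n % 2 == 0:
            return 2
        while True:
            c = random.randrange(1, n)
            f = lambda x: (x * x + c) % n
            x = y = random.randrange(2, n)
            d = 1
            while d == 1:
                x = f(x)          # tortoise: one step
                y = f(f(y))       # hare: two steps
                d = gcd(abs(x - y), n)
            if d != n:            # d == n means this c failed; retry
                return d

    n = 10403                      # = 101 * 103, a toy RSA-style modulus
    p = pollards_rho(n)
    print(p, n // p)
    ```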

  1. On benchmarking Stochastic Global Optimization Algorithms

    NARCIS (Netherlands)

    Hendrix, E.M.T.; Lancinskas, A.

    2015-01-01

    A multitude of heuristic stochastic optimization algorithms have been described in the literature to obtain good solutions of the box-constrained global optimization problem, often with a limit on the number of used function evaluations. In the larger question of which algorithms behave well on which…

  2. Joint Data Assimilation and Parameter Calibration in on-line groundwater modelling using Sequential Monte Carlo techniques

    Science.gov (United States)

    Ramgraber, M.; Schirmer, M.

    2017-12-01

    As computational power grows and wireless sensor networks find their way into common practice, it becomes increasingly feasible to pursue on-line numerical groundwater modelling. The reconciliation of model predictions with sensor measurements often necessitates the application of Sequential Monte Carlo (SMC) techniques, most prominently represented by the Ensemble Kalman Filter. In the pursuit of on-line predictions, it seems advantageous to transcend the scope of pure data assimilation and incorporate on-line parameter calibration as well. Unfortunately, the interplay between shifting model parameters and transient states is non-trivial. Several recent publications in the field of statistics (e.g. Chopin et al., 2013; Kantas et al., 2015) discuss potential algorithms addressing this issue, but most of these are computationally intractable for on-line application. In this study, we investigate to what extent compromises between mathematical rigour and computational restrictions can be made within the framework of on-line numerical modelling of groundwater. Preliminary studies are conducted in a synthetic setting, with the goal of transferring the conclusions drawn into application in a real-world setting. To this end, a wireless sensor network has been established in the valley aquifer around Fehraltorf, characterized by a highly dynamic groundwater system and located about 20 km east of Zürich, Switzerland. By providing continuous probabilistic estimates of the state and parameter distribution, a steady base for branched-off predictive scenario modelling could be established, providing water authorities with advanced tools for assessing the impact of groundwater management practices. Chopin, N., Jacob, P.E. and Papaspiliopoulos, O. (2013): SMC2: an efficient algorithm for sequential analysis of state space models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75 (3), p. 397-426. Kantas, N., Doucet, A., Singh, S…

  3. High Precision Edge Detection Algorithm for Mechanical Parts

    Science.gov (United States)

    Duan, Zhenyun; Wang, Ning; Fu, Jingshun; Zhao, Wenhui; Duan, Boqiang; Zhao, Jungui

    2018-04-01

    High-precision and high-efficiency measurement is becoming an imperative requirement for many mechanical parts. In this study, a subpixel-level edge detection algorithm based on the Gaussian integral model is proposed. For this purpose, the step-edge normal-section-line Gaussian integral model of the backlight image is constructed, combining the point spread function with a single-step model. The gray values of discrete points on the normal section line of the pixel edge are then calculated by surface interpolation, and the coordinate and gray information affected by noise is fitted to the Gaussian integral model; a precise subpixel edge location is thus determined by searching for the mean point. Finally, a gear tooth was measured with an M&M3525 gear measurement center to verify the proposed algorithm. The theoretical analysis and experimental results show that local edge fluctuation is reduced effectively by the proposed method in comparison with existing subpixel edge detection algorithms, and that subpixel edge location accuracy and computation speed are improved. The maximum error of the gear tooth profile total deviation is 1.9 μm compared with the measurement from the gear measurement center, indicating that the method is reliable enough to meet the requirements of high-precision measurement.

  4. Research on retailer data clustering algorithm based on Spark

    Science.gov (United States)

    Huang, Qiuman; Zhou, Feng

    2017-03-01

    Big data analysis is currently a hot topic in the IT field. Spark is a highly reliable, high-performance distributed parallel computing framework for big data sets, and the k-means algorithm is one of the classical partition-based clustering methods. In this paper, we study the k-means clustering algorithm on Spark. First, the principle of the algorithm is analyzed; a clustering analysis is then carried out on supermarket customers to find different shopping patterns. We also propose a parallelization of the k-means algorithm on the Spark distributed computing framework and give a concrete design and implementation scheme. Two years of sales data from a supermarket are used to validate the proposed clustering algorithm and to segment customers, and the clustering results are analyzed to help the enterprise adopt different marketing strategies for different customer groups and improve sales performance.
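
    The Lloyd iteration at the core of the parallelised algorithm is easy to state. The single-node numpy sketch below shows the two steps (assignment, centroid update) that a Spark implementation would distribute as a map and a reduce; the data and k are illustrative.

    ```python
    # Single-node sketch of the Lloyd iteration underlying k-means; on Spark
    # the assignment step is a map over partitioned rows and the centroid
    # update a reduceByKey, but the numerical logic is the same.
    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            # assignment step: nearest centroid per point (parallelisable map)
            labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            # update step: per-cluster mean (parallelisable reduce)
            new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        return centers, labels

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in (0, 3, 6)])
    centers, labels = kmeans(X, 3)
    print(np.round(centers, 2))
    ```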

  5. Hybrid employment recommendation algorithm based on Spark

    Science.gov (United States)

    Li, Zuoquan; Lin, Yubei; Zhang, Xingming

    2017-08-01

    Aiming at real-time application of a collaborative filtering employment recommendation algorithm (CF), a clustering collaborative filtering recommendation algorithm (CCF) is developed, which applies hierarchical clustering to CF and narrows the query range of neighbouring items. In addition, to solve the cold-start problem of the content-based recommendation algorithm (CB), a content-based algorithm using users’ information (CBUI) is introduced for job recommendation. Furthermore, a hybrid recommendation algorithm (HRA) combining CCF and CBUI is proposed and implemented on the Spark platform. The experimental results show that HRA can overcome the problems of cold start and data sparsity, and achieves good recommendation accuracy and scalability for employment recommendation.

  6. A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching.

    Science.gov (United States)

    Li, Ming; Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Wang, Lei; Pan, Yuanjin; Zhang, Peng

    2017-09-08

    Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps. First, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Second, a new dynamic programming algorithm is developed based on the concept of stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a more optimal seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially in dense urban areas. Our solution is also direction-independent, giving it better adaptability and robustness for stitching images.
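
    As an illustration of the underlying idea, the sketch below finds a minimal-cost seam through a single-channel energy map by classical dynamic programming; the paper's stereo dual-channel energy accumulation and traversal strategy refine this basic scheme, which is shown here only for orientation.

    ```python
    # Minimal-cost vertical seam by dynamic programming: each row may shift
    # the seam column by at most one relative to the row above.
    import numpy as np

    def best_seam(energy):
        h, w = energy.shape
        cost = energy.astype(float).copy()
        back = np.zeros((h, w), dtype=int)
        for i in range(1, h):
            for j in range(w):
                lo, hi = max(j - 1, 0), min(j + 2, w)
                k = lo + int(np.argmin(cost[i - 1, lo:hi]))
                back[i, j] = k                 # best predecessor column
                cost[i, j] += cost[i - 1, k]
        seam = [int(np.argmin(cost[-1]))]      # cheapest endpoint, then backtrack
        for i in range(h - 1, 0, -1):
            seam.append(int(back[i, seam[-1]]))
        return seam[::-1]                      # seam column on each row

    # the seam follows the low-energy valley of a toy overlap-difference map
    energy = np.abs(np.subtract.outer(np.arange(6), np.arange(8) - 3.0))
    print(best_seam(energy))
    ```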

  7. Optimal parallel algorithms for problems modeled by a family of intervals

    Science.gov (United States)

    Olariu, Stephan; Schwing, James L.; Zhang, Jingyuan

    1992-01-01

    A family of intervals on the real line provides a natural model for a vast number of scheduling and VLSI problems. Recently, a number of parallel algorithms to solve a variety of practical problems on such a family of intervals have been proposed in the literature. Computational tools are developed, and it is shown how they can be used for the purpose of devising cost-optimal parallel algorithms for a number of interval-related problems including finding a largest subset of pairwise nonoverlapping intervals, a minimum dominating subset of intervals, along with algorithms to compute the shortest path between a pair of intervals and, based on the shortest path, a parallel algorithm to find the center of the family of intervals. More precisely, with an arbitrary family of n intervals as input, all algorithms run in O(log n) time using O(n) processors in the EREW-PRAM model of computation.
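
    For context, the best-known sequential counterpart of one of these problems is the greedy earliest-finishing-time rule. The sketch below finds a largest pairwise non-overlapping subset in O(n log n); the paper's contribution is solving such problems in O(log n) parallel time with O(n) processors.

    ```python
    # Classical greedy rule: sort by right endpoint, keep every interval
    # that starts after the last accepted one ends.
    def max_nonoverlapping(intervals):
        chosen, last_end = [], float("-inf")
        for start, end in sorted(intervals, key=lambda iv: iv[1]):
            if start >= last_end:
                chosen.append((start, end))
                last_end = end
        return chosen

    print(max_nonoverlapping([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (8, 10)]))
    # [(1, 4), (5, 7), (8, 10)]
    ```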

  8. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical basis for carrying out compressive sampling of image signals. In imaging procedures based on compressed sensing theory, both the required storage space and the demand for detector resolution are greatly reduced. Using the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and recovers edge information well. To verify the performance of the algorithm, we simulate and analyze the reconstruction results of the TV-based algorithm under different coding modes, verifying its stability, and we compare typical reconstruction algorithms under the same coding mode. On the basis of the minimum-total-variation algorithm, an augmented Lagrangian term is added and the optimum is found by the alternating direction method. Experimental results show that, compared with traditional classical algorithms, the TV-based reconstruction algorithm has great advantages: even at low measurement rates it recovers the target image quickly and accurately.
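
    To give a feel for the TV regularisation term, here is a deliberately stripped-down sketch: gradient descent on a smoothed TV denoising objective, with no measurement operator and no augmented Lagrangian/ADMM, so it is not the paper's reconstruction algorithm. The step size, weight, and smoothing constant are illustrative.

    ```python
    # Gradient descent on 0.5*||x - y||^2 + lam * sum sqrt(dx^2 + dy^2 + eps),
    # i.e. a smoothed total-variation denoising objective.
    import numpy as np

    def tv_grad(x, eps=1e-2):
        dx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences,
        dy = np.diff(x, axis=0, append=x[-1:, :])   # replicated border
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
        px, py = dx / mag, dy / mag
        # minus divergence of the normalised gradient field
        # (periodic border in the adjoint, acceptable for an illustration)
        return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

    def tv_denoise(y, lam=0.15, tau=0.1, iters=300):
        x = y.copy()
        for _ in range(iters):
            x -= tau * ((x - y) + lam * tv_grad(x))
        return x

    rng = np.random.default_rng(0)
    clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
    noisy = clean + 0.2 * rng.normal(size=clean.shape)
    # mean absolute error before and after denoising
    print(np.abs(noisy - clean).mean(), np.abs(tv_denoise(noisy) - clean).mean())
    ```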

  9. Channel coding for underwater acoustic single-carrier CDMA communication system

    Science.gov (United States)

    Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong

    2017-01-01

    CDMA is an effective multiple-access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on direct-sequence spread spectrum is proposed, and its channel coding scheme is studied for convolutional, RA, Turbo, and LDPC coding, respectively. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and minimum-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for Turbo coding, and the sum-product algorithm for LDPC coding are given. A UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that the UWA/SCCDMA systems based on RA, Turbo, and LDPC coding perform well: the communication BER stays below 10^-6 in an underwater acoustic channel at low signal-to-noise ratios (SNR) from -12 dB to -10 dB, about two orders of magnitude lower than that of convolutional coding. The system based on Turbo coding with the Log-MAP algorithm has the best performance.
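
    Of the decoders listed, the Viterbi algorithm for convolutional codes is the most compact to sketch. The toy below encodes with the standard rate-1/2, constraint-length-3 code (octal generators 7, 5) and decodes by hard-decision Viterbi, correcting a single injected bit error; the study's Matlab system and its soft-decision variants are of course more elaborate.

    ```python
    # Hard-decision Viterbi decoding of the (7, 5) rate-1/2 convolutional code.
    G = (0b111, 0b101)           # generator polynomials
    N_STATES = 4                 # 2^(K-1) with constraint length K = 3

    def conv_encode(bits):
        state, out = 0, []
        for b in bits:
            reg = (b << 2) | state               # [newest bit, s1, s0]
            out += [bin(reg & g).count("1") & 1 for g in G]
            state = reg >> 1                     # shift register update
        return out

    def viterbi_decode(received):
        INF = float("inf")
        metric = [0] + [INF] * (N_STATES - 1)    # start in the all-zero state
        paths = [[] for _ in range(N_STATES)]
        for i in range(0, len(received), 2):
            r = received[i:i + 2]
            new_metric = [INF] * N_STATES
            new_paths = [None] * N_STATES
            for s in range(N_STATES):
                if metric[s] == INF:
                    continue                     # unreachable state
                for b in (0, 1):
                    reg = (b << 2) | s
                    nxt = reg >> 1
                    exp = [bin(reg & g).count("1") & 1 for g in G]
                    m = metric[s] + sum(x != y for x, y in zip(exp, r))
                    if m < new_metric[nxt]:      # keep the survivor path
                        new_metric[nxt] = m
                        new_paths[nxt] = paths[s] + [b]
            metric, paths = new_metric, new_paths
        return paths[min(range(N_STATES), key=metric.__getitem__)]

    msg = [1, 0, 1, 1, 0, 0, 1]
    coded = conv_encode(msg)
    coded[3] ^= 1                                # inject one channel bit error
    print(viterbi_decode(coded) == msg)          # True: the error is corrected
    ```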

  10. Application of 2 mm thin-slice scanning with bone algorithm on conventional CT in diagnosis of the pulmonary diseases

    International Nuclear Information System (INIS)

    Zhang Xianheng; Li Xiuhua; Wang Fenghua

    2004-01-01

    Objective: To evaluate the value of 2 mm thin-slice conventional CT scanning with a bone algorithm in the diagnosis and differential diagnosis of pulmonary diseases. Methods: In total, 135 cases of pulmonary disease were routinely scanned by conventional CT at 10 mm per slice with a standard algorithm; a 2 mm thin-slice scan with a bone algorithm was then performed over the region of interest in the lungs. Results: Comparing the CT signs between the 10 mm standard-algorithm scan and the 2 mm thin-slice bone-algorithm scan, the latter was better at displaying the pulmonary axial interstitium, intralobular septa, subpleural lines, honeycombing, 2-5 mm nodules, and anomalies of the bronchial wall. Conclusion: In this study of 135 cases, the 2 mm thin-slice scan with a bone algorithm was superior to the 10 mm standard-algorithm scan in demonstrating pulmonary lesions. It has a value similar to high-resolution spiral CT in the diagnosis of solitary or diffuse pulmonary nodules, diffuse pulmonary interstitial lesions, and lesions of the airway, and it is practical and advisable in the community hospital.

  11. The Psychopharmacology Algorithm Project at the Harvard South Shore Program: An Algorithm for Generalized Anxiety Disorder.

    Science.gov (United States)

    Abejuela, Harmony Raylen; Osser, David N

    2016-01-01

    This revision of previous algorithms for the pharmacotherapy of generalized anxiety disorder was developed by the Psychopharmacology Algorithm Project at the Harvard South Shore Program. Algorithms from 1999 and 2010 and associated references were reevaluated. Newer studies and reviews published from 2008-14 were obtained from PubMed and analyzed with a focus on their potential to justify changes in the recommendations. Exceptions to the main algorithm for special patient populations, such as women of childbearing potential, pregnant women, the elderly, and those with common medical and psychiatric comorbidities, were considered. Selective serotonin reuptake inhibitors (SSRIs) are still the basic first-line medication. Early alternatives include duloxetine, buspirone, hydroxyzine, pregabalin, or bupropion, in that order. If response is inadequate, then the second recommendation is to try a different SSRI. Additional alternatives now include benzodiazepines, venlafaxine, kava, and agomelatine. If the response to the second SSRI is unsatisfactory, then the recommendation is to try a serotonin-norepinephrine reuptake inhibitor (SNRI). Other alternatives to SSRIs and SNRIs for treatment-resistant or treatment-intolerant patients include tricyclic antidepressants, second-generation antipsychotics, and valproate. This revision of the GAD algorithm responds to issues raised by new treatments under development (such as pregabalin) and organizes the evidence systematically for practical clinical application.

  12. Research on AHP decision algorithms based on BP algorithm

    Science.gov (United States)

    Ma, Ning; Guan, Jianhe

    2017-10-01

    Decision making is the thinking activity by which people choose or judge, and scientific decision making has always been a hot research issue. The Analytic Hierarchy Process (AHP) is a simple and practical multi-criteria, multi-objective decision-making method that combines quantitative and qualitative analysis and can express and calculate subjective judgments in numerical form. In decision analysis using the AHP method, the rationality of the pairwise comparison judgment matrix has a great influence on the decision result. However, in real problems, the judgment matrix produced by pairwise comparison is often inconsistent, that is, it does not meet the consistency requirement. The BP neural network algorithm is an adaptive nonlinear dynamic system with powerful collective computing and learning ability; it can refine the data by repeatedly modifying the weights and thresholds of the network to minimize the mean square error. In this paper, the BP algorithm is used to restore the consistency of the pairwise comparison judgment matrix in the AHP.
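
    For background, the consistency test that motivates this correction can be computed directly: with λ_max the principal eigenvalue of the n×n judgment matrix, the consistency index is CI = (λ_max − n)/(n − 1) and the consistency ratio CR = CI/RI uses Saaty's random index RI, with CR < 0.1 conventionally acceptable. A minimal sketch follows (the BP network itself is not reproduced; the example matrix is illustrative).

    ```python
    # Standard AHP consistency check for an n x n pairwise judgment matrix
    # (valid for n >= 3; for n <= 2 the matrix is always consistent).
    import numpy as np

    RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24,
          7: 1.32, 8: 1.41, 9: 1.45}              # Saaty's random indices

    def consistency_ratio(A):
        n = A.shape[0]
        lam_max = np.max(np.real(np.linalg.eigvals(A)))  # principal eigenvalue
        ci = (lam_max - n) / (n - 1)
        return ci / RI[n]

    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])               # nearly consistent judgments
    print(consistency_ratio(A) < 0.1)             # True for this matrix
    ```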

  13. Study on a low complexity adaptive modulation algorithm in OFDM-ROF system with sub-carrier grouping technology

    Science.gov (United States)

    Liu, Chong-xin; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Tian, Qing-hua; Tian, Feng; Wang, Yong-jun; Rao, Lan; Mao, Yaya; Li, Deng-ao

    2018-01-01

    During the last decade, orthogonal frequency division multiplexing radio-over-fiber (OFDM-ROF) systems with adaptive modulation technology have attracted great interest owing to their capability to raise spectral efficiency dramatically, reduce the effects of the fiber link or wireless channel, and improve communication quality. In this study, based on a theoretical analysis of nonlinear distortion and frequency-selective fading of the transmitted signal, a low-complexity adaptive modulation algorithm is proposed in combination with sub-carrier grouping. The algorithm achieves optimal system performance by calculating the average combined signal-to-noise ratio of each group and dynamically adjusting the modulation format according to preset thresholds and user requirements. Because it takes the sub-carrier group as the smallest unit in the initial bit allocation and subsequent bit adjustment, its complexity is only 1/M (where M is the number of sub-carriers per group) that of the Fischer algorithm, much smaller than many classic adaptive modulation algorithms such as the Hughes-Hartogs and Chow algorithms, in line with the development of green, high-speed communication. Simulation results show that the performance of the OFDM-ROF system with the improved algorithm is much better than without adaptive modulation, the BER of the former being 10 to 100 times lower as the SNR grows. This low-complexity adaptive modulation algorithm is therefore extremely useful for OFDM-ROF systems.

  14. Study on recognition algorithm for paper currency numbers based on neural network

    Science.gov (United States)

    Li, Xiuyan; Liu, Tiegen; Li, Yuanyao; Zhang, Zhongchuan; Deng, Shichao

    2008-12-01

    Because serial numbers are unique, paper currency numbers can be put on record, and automatic identification equipment for these numbers can be supplied to the currency circulation market, making it convenient for financial sectors to trace fiduciary circulation and supervise paper currency effectively. It is also useful for identifying forged notes, blacklisting forged note numbers, and addressing major crimes such as armored-carrier robbery and money laundering. For recognizing paper currency numbers, a recognition algorithm based on neural networks is presented in this paper. Number lines in original paper currency images are extracted through image processing steps such as de-noising, skew correction, segmentation, and normalization. According to the different characteristics of digits and letters in the serial number, two kinds of classifiers are designed. With its associative memory, optimizing computation, and rapid convergence, a Discrete Hopfield Neural Network (DHNN) is utilized to recognize the letters; with its simple structure, quick learning, and global optimality, a Radial Basis Function Neural Network (RBFNN) is adopted to identify the digits. The final recognition results are obtained by combining the two kinds of results in their regular sequence. Simulation tests confirm that the combined recognition algorithm achieves both a high recognition rate and fast recognition, making it worthy of broad application.

  15. Improved Wallis Dodging Algorithm for Large-Scale Super-Resolution Reconstruction Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Chong Fan

    2017-03-01

    Full Text Available A sub-block algorithm is usually applied in the super-resolution (SR) reconstruction of images because of limitations in computer memory. However, sub-block SR images can hardly achieve seamless mosaicking because of the uneven distribution of brightness and contrast among the sub-blocks. An improved weighted Wallis dodging algorithm is proposed, exploiting the fact that SR reconstructed images are gray images of the same size with overlapping regions. The algorithm achieves consistency of image brightness and contrast. Meanwhile, a weighted adjustment sequence is presented to avoid the spatial propagation and accumulation of errors and the loss of image information caused by excessive computation. A seam-line elimination method distributes the partial dislocation along the seam line over the entire overlapping region with a smooth transition effect. The improved method was then employed to remove uneven illumination from 900 SR reconstructed images of ZY-3, and the overlapping image mosaic method was adopted to accomplish a seamless mosaic based on the optimal seam line.

  16. Grounding Lines Detecting Using LANDSAT8 Oli and CRYOSAT-2 Data Fusion

    Science.gov (United States)

    Li, F.; Guo, Y.; Zhang, Y.; Zhang, S.

    2018-04-01

    The grounding zone is the region where ice transitions from grounded ice sheet to freely floating ice shelf; grounding lines are in fact more of a zone, typically extending over several kilometers. Mass loss from Antarctica is strongly linked to changes in the ice shelves and their grounding lines, since variation in the grounding line can result in very rapid changes in glacier and ice-shelf behavior. Based on remote sensing observations, five Antarctic-wide grounding line products have been released internationally, including MOA, ASAID, ICESat, MEaSUREs, and the Synthesized grounding lines. However, these products cannot provide annual grounding line coverage of the whole Antarctic, and some have stopped updating, which limits time-series analysis of the Antarctic mass balance to a certain extent. Moreover, the accuracy of grounding line products based on a single remote sensing data source is far from satisfactory. Therefore, we use algorithms to extract grounding lines from SAR and CryoSat-2 data respectively and fuse the two kinds of grounding lines into a new product, yielding a mature grounding line extraction workflow that will allow the Antarctic grounding line to be extracted each year in the future. Comparison between the fusion results and the MOA product indicates a maximum deviation of 188.67 meters between the two.

  17. Adaptive on-line prediction of the available power of lithium-ion batteries

    Science.gov (United States)

    Waag, Wladislaw; Fleischer, Christian; Sauer, Dirk Uwe

    2013-11-01

    In this paper a new approach for predicting the available power of a lithium-ion battery pack is presented. It is based on a nonlinear battery model that includes the current dependency of the battery resistance, resulting in an accurate power prediction not only at room temperature but also at lower temperatures, where the current dependency is substantial. The model parameters are fully adaptable on-line to the given state of the battery (state of charge, state of health, temperature). This on-line adaptation, combined with explicit consideration of the differences between the characteristics of individual cells in a battery pack, ensures accurate power prediction under all possible conditions. The proposed trade-off between the number of cell parameters used and the total accuracy, together with the optimized algorithm, makes the method real-time capable, as demonstrated on a low-cost 16-bit microcontroller. Verification tests performed on a software-in-the-loop test bench with four 40 Ah lithium-ion cells show promising results.

  18. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion.

    Science.gov (United States)

    Dou, Qingxu; Wei, Lijun; Magee, Derek R; Atkins, Phil R; Chapman, David N; Curioni, Giulio; Goddard, Kevin F; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R; Rustighi, Emiliano; Swingler, Steven G; Rogers, Christopher D F; Cohn, Anthony G

    2016-11-02

    We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed "multi-utility multi-sensor" system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation.

  19. A review on quantum search algorithms

    Science.gov (United States)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has a significant speed advantage over classical computation. This is evident from early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up database search, which is important in computer science because search appears as a subroutine in many important algorithms. Grover's quantum database search finds a target element in an unsorted database quadratically faster than a classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
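
    As a concrete anchor for the quadratic speedup, the success probability after r Grover iterations and the near-optimal iteration count for M marked items in a database of N items take the standard form below; classical search needs on the order of N/M queries, Grover on the order of √(N/M).

    ```latex
    % Success probability after r Grover iterations with M marked items out
    % of N; it is maximised near r* ~ (pi/4) sqrt(N/M).
    \[
      P_{\text{success}}(r) = \sin^2\!\bigl((2r+1)\,\theta\bigr),
      \qquad \sin\theta = \sqrt{M/N},
      \qquad r^{*} \approx \Bigl\lfloor \tfrac{\pi}{4}\sqrt{N/M} \Bigr\rfloor .
    \]
    ```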

  20. MVDR Algorithm Based on Estimated Diagonal Loading for Beamforming

    Directory of Open Access Journals (Sweden)

    Yuteng Xiao

    2017-01-01

    Full Text Available Beamforming algorithms are widely used in many signal processing fields. At present, the typical beamforming algorithm is MVDR (Minimum Variance Distortionless Response). However, the performance of the MVDR algorithm relies on an accurate covariance matrix and degrades dramatically when the covariance matrix is inaccurate. To solve this problem, after studying the beamforming array signal model and the MVDR algorithm, we improve the MVDR algorithm using estimated diagonal loading. An MVDR optimization model based on diagonal-loading compensation is established, and the interval of the diagonal-loading compensation value is deduced on the basis of matrix theory. The optimal diagonal-loading value within this interval is then determined experimentally. The experimental results show that, compared with existing algorithms, the proposed algorithm is practical and effective.
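
    The diagonal-loading idea itself fits in a few lines. The sketch below forms the loaded MVDR weights w = (R + λI)⁻¹a / (aᴴ(R + λI)⁻¹a) with a fixed, user-supplied λ (here heuristically tied to the average sensor power); estimating the loading level is precisely the paper's contribution and is not reproduced.

    ```python
    # MVDR beamformer weights with fixed diagonal loading (numpy).
    import numpy as np

    def mvdr_weights(R, steering, loading=0.0):
        """w = (R + loading*I)^-1 a / (a^H (R + loading*I)^-1 a)."""
        Rl = R + loading * np.eye(R.shape[0])
        Ria = np.linalg.solve(Rl, steering)
        return Ria / (steering.conj() @ Ria)

    M = 8                                     # uniform linear array, half-wavelength
    theta = np.deg2rad(20.0)                  # look direction
    a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))
    rng = np.random.default_rng(0)
    snaps = rng.normal(size=(M, 200)) + 1j * rng.normal(size=(M, 200))
    R = snaps @ snaps.conj().T / 200          # sample covariance (noise only)
    w = mvdr_weights(R, a, loading=np.trace(R).real / M)  # loading ~ avg power
    print(abs(w.conj() @ a))                  # distortionless response: ~1.0
    ```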

  1. Line outage contingency analysis including the system islanding scenario

    Energy Technology Data Exchange (ETDEWEB)

    Hazarika, D.; Bhuyan, S. [Assam Engineering College, Jalukbari, Guwahati 781013 (India); Chowdhury, S.P. [Jadavpur University, Jadavpur, Kolkata 700 032 (India)

    2006-05-15

    The paper describes an algorithm for determining the outage contingency of a line, taking into account the overload effect on the remaining lines and the subsequent tripping of overloaded line(s), which may lead to a system split or islanding of the power system. The optimally ordered sparse [B'] and [B''] matrices of the integrated system are used in load flow analysis to determine the modified voltage phase angles [δ] and bus voltages [V] and thereby the overloading effect on the remaining lines due to the selected line outage contingency. If remaining lines are overloaded, the overloaded lines are removed from the system and a topology processor is used to find the islands. A fast decoupled load flow (FDLF) analysis is carried out to find the system variables for the islanded (or single-island) system by incorporating appropriate modifications in the [B'] and [B''] matrices of the integrated system. Line outage indices based on line overload, loss of load, loss of generation, and static voltage stability are computed to indicate the severity of the outage of a selected line. (author)

  2. Optimized coincidence Doppler broadening spectroscopy using deconvolution algorithms

    International Nuclear Information System (INIS)

    Ho, K.F.; Ching, H.M.; Cheng, K.W.; Beling, C.D.; Fung, S.; Ng, K.P.

    2004-01-01

    In the last few years a number of excellent deconvolution algorithms have been developed for use in ''de-blurring'' 2D images. Here we report briefly on one such algorithm we have studied, which uses the non-negativity constraint to optimize the regularization and which is applied to the 2D image-like data produced in Coincidence Doppler Broadening Spectroscopy (CDBS). The instrumental resolution functions of the system are obtained using the 514 keV line from 85Sr. When applied to a series of well-annealed polycrystalline metals, the technique gives two-photon momentum data of a quality comparable to that obtainable using 1D Angular Correlation of Annihilation Radiation (ACAR). (orig.)

  3. R and D study on on-line criticality surveillance system (IV)

    International Nuclear Information System (INIS)

    Yamada, Sumasu

    2000-02-01

    Developing an inexpensive on-line criticality surveillance system is required for ensuring the safety of nuclear fuel reprocessing plants. Building on a preceding five-year series of studies, R&D on an on-line criticality surveillance system has been carried out since 1996. The concept of this surveillance system is based on applying Auto-Regressive Moving Average (ARMA) model identification algorithms to the time series of signal fluctuations from a neutron detector. We have proposed several modifications to the original design of the system and have reported results of numerical analysis of the DCA experiments. Meanwhile, DOS/V personal computers running Microsoft Windows came into wide use in Japan in place of MS-DOS machines; NEC, a major maker of MS-DOS computers, stopped producing them and shifted its production to DOS/V machines. Since our work had been developed on MS-DOS computers, porting all programs to an operating system less exposed to such commercial shifts became an urgent task if these important results were to remain usable. Because the design concept demands high reliability, immunity to electromagnetic disturbance, and high expandability, and because computers have become remarkably powerful and inexpensive, they should serve not merely as simple signal processing units but as a fully integrated signal analysis system built on conventional signal analysis software, instead of IC chips with embedded analysis software. This configuration makes it easy to introduce newly developed techniques and to provide supplementary information, enhancing the reliability of the criticality surveillance system without any special devices while keeping the system flexible.
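
    As context for the signal-processing core, the sketch below fits an AR(p) model to a simulated fluctuation signal by least squares and reports the residual variance; a surveillance scheme of this family tracks such identified parameters on-line and alarms on significant drift. The model order and data are illustrative assumptions (an AR model rather than the full ARMA identification used in the system).

    ```python
    # Least-squares AR(p) identification: x[t] = sum_k a[k] * x[t-k-1] + e[t].
    import numpy as np

    def fit_ar(x, p):
        X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
        y = x[p:]
        a, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ a
        return a, resid.var()          # coefficients and driving-noise variance

    rng = np.random.default_rng(1)
    true_a = [0.6, -0.3]
    x = np.zeros(5000)
    for t in range(2, 5000):           # simulate an AR(2) fluctuation signal
        x[t] = true_a[0] * x[t - 1] + true_a[1] * x[t - 2] + rng.normal()
    a_hat, var_hat = fit_ar(x, 2)
    print(np.round(a_hat, 2), round(var_hat, 2))   # ~[0.6, -0.3], ~1.0
    ```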

  4. An algorithm for online optimization of accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Xiaobiao [SLAC National Accelerator Lab., Menlo Park, CA (United States); Corbett, Jeff [SLAC National Accelerator Lab., Menlo Park, CA (United States); Safranek, James [SLAC National Accelerator Lab., Menlo Park, CA (United States); Wu, Juhao [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2013-10-01

    We developed a general algorithm for the online optimization of accelerator performance, i.e., online tuning, using the performance measure as the objective function. The method, named robust conjugate direction search (RCDS), combines the conjugate direction-set approach of Powell's method with a robust line optimizer that considers random noise when bracketing the minimum and uses a parabolic fit of data points that uniformly sample the bracketed zone. It is much more robust against noise than traditional algorithms and is therefore suitable for online application. Simulation and experimental studies have been carried out to demonstrate the strength of the new algorithm.

  5. Simple sorting algorithm test based on CUDA

    OpenAIRE

    Meng, Hongyu; Guo, Fangjin

    2015-01-01

    With the development of computing technology, CUDA has become a very important tool. In computer programming, sorting algorithms are widely used, and there are many simple ones such as enumeration sort, bubble sort, and merge sort. In this paper, we test some simple sorting algorithms based on CUDA and draw some useful conclusions.

  6. Self-Tuning Insulin Adjustment Algorithm for Type 1 Diabetic Patients based on Multi-Doses Regime

    Directory of Open Access Journals (Sweden)

    D. U. Campos-Delgado

    2005-01-01

    Full Text Available A self-tuning algorithm is presented for on-line insulin dosage adjustment in type 1 diabetic patients (chronic stage). The suggested algorithm needs no information about the patient's insulin-glucose dynamics (it is model-free). Three doses are programmed daily, where a combination of two types of insulin, rapid/short-acting and intermediate/long-acting, is injected into the patient through the subcutaneous route. The dose adaptation is performed by reducing the deviation of the blood glucose level from euglycemia. In this way, a total of five doses are tuned per day, three rapid/short and two intermediate/long, with a large penalty to avoid hypoglycemic scenarios. Closed-loop simulation results are illustrated using a detailed nonlinear model of the subcutaneous insulin-glucose dynamics in a type 1 diabetic patient with meal intake.

  7. Improving the Energy Market: Algorithms, Market Implications, and Transmission Switching

    Science.gov (United States)

    Lipka, Paula Ann

    This dissertation aims to improve ISO operations through a better real-time market solution algorithm that directly considers both real and reactive power, finds a feasible Alternating Current Optimal Power Flow solution, and allows for solving transmission switching problems in an AC setting. Most of the IEEE systems do not contain any thermal limits on lines, and the ones that do are often not binding. Chapter 3 modifies the thermal limits for the IEEE systems to create new, interesting test cases. Algorithms created to better solve the power flow problem often solve the IEEE cases without line limits. However, one of the factors that makes the power flow problem hard is thermal limits on the lines. The transmission networks in practice often have transmission lines that become congested, and it is unrealistic to ignore line limits. Modifying the IEEE test cases makes it possible for other researchers to test their algorithms on a setup that is closer to the actual ISO setup. This thesis also examines how to convert limits given on apparent power---as is the case in the Polish test systems---to limits on current. The main consideration in setting line limits is temperature, which relates linearly to current. Setting limits on real or apparent power is actually a proxy for using the limits on current. Therefore, Chapter 3 shows how to convert back to the best physical representation of line limits. A sequential linearization of the current-voltage formulation of the Alternating Current Optimal Power Flow (ACOPF) problem is used to find an AC-feasible generator dispatch. In this sequential linearization, there are parameters that are set to the previous optimal solution. Additionally, to improve the accuracy of the Taylor series approximations that are used, the movement of the voltage is restricted. The movement of the voltage is allowed to be very large at the first iteration and is restricted further on each subsequent iteration, with the restriction…

  8. DISCRETIZATION APPROACH USING RAY-TESTING MODEL IN PARTING LINE AND PARTING SURFACE GENERATION

    Institute of Scientific and Technical Information of China (English)

    HAN Jianwen; JIAN Bin; YAN Guangrong; LEI Yi

    2007-01-01

    Surface classification, 3D parting line and parting surface generation, and demoldability analysis, which help select the optimal parting direction and optimal parting line, are involved in automatic cavity design based on the ray-testing model. A new ray-testing approach is presented that classifies the part surfaces into core/cavity surfaces and undercut surfaces by automatically identifying the visibility of surfaces. A simple, direct, and efficient algorithm to identify surface visibility is developed. The algorithm is robust and adapts to rather complicated geometry, so it is valuable in computer-aided mold design systems. To validate the efficiency of the approach, an experimental program was implemented. Case studies show that the approach is practical and valuable for automatic parting line and parting surface generation.

  9. Generalized phase retrieval algorithm based on information measures

    OpenAIRE

    Shioya, Hiroyuki; Gohara, Kazutoshi

    2006-01-01

    An iterative phase retrieval algorithm based on the maximum entropy method (MEM) is presented. Introducing a new generalized information measure, we derive a novel class of algorithms that includes the conventionally used error-reduction algorithm and a MEM-type iterative algorithm presented here for the first time. These different phase retrieval methods are unified within the framework of information measures used in information theory.

  10. Evaluation of an on-line methodology for measuring volatile organic compounds (VOC) fluxes by eddy-covariance with a PTR-TOF-Qi-MS

    Science.gov (United States)

    Loubet, Benjamin; Buysse, Pauline; Lafouge, Florence; Ciuraru, Raluca; Decuq, Céline; Zurfluh, Olivier

    2017-04-01

    Field-scale flux measurements of volatile organic compounds (VOC) are essential for improving our knowledge of VOC emissions from ecosystems. Many VOCs are emitted from and deposited to ecosystems; especially little is known about crops, which represent more than 50% of French terrestrial surfaces. In this study, we evaluate a new on-line methodology for measuring VOC fluxes by eddy covariance with a PTR-Qi-TOF-MS. Measurements were performed at the ICOS FR-GRI site over a crop using a 30 m long, high-flow-rate sampling line and an ultrasonic anemometer. A Labview program was specially designed for acquisition and on-line covariance calculation: whole mass spectra (about 240000 channels) were acquired on-line at 10 Hz and stored in a temporary memory. Every 5 minutes, the spectra were mass-calibrated and normalized by the primary-ion peak integral at 10 Hz. The mass spectral peaks were then retrieved from the 5-min averaged spectra by subtracting the baseline, determining the resolution, and using a multiple-peak detection algorithm. In order to optimize the peak detection algorithm for the covariance, we determined the covariances as the integrals of the peaks of the vertical-air-velocity-fluctuation-weighted averaged spectra; in other terms, we calculate ⟨w′(t) Sp(t + lag)⟩, where w is the vertical component of the air velocity, Sp is the spectrum, t is time, lag is the decorrelation lag time, and ⟨ ⟩ denotes an average. The lag time was determined as the decorrelation time between w and the primary ion (at mass 21.022), which integrates the contribution of all reactions of VOCs and water with the primary ion. Our algorithm was evaluated by comparing the exchange velocity of water vapor measured by an open-path absorption spectroscopy instrument with that of the water cluster measured with the PTR-Qi-TOF-MS. The influence of the algorithm parameters and of the lag determination is discussed. This study was supported by the ADEME-CORTEA COV3ER project (http://www6.inra.fr/cov3er).

  11. A Compositional Sweep-Line State Space Exploration Method

    DEFF Research Database (Denmark)

    Kristensen, Lars Michael; Mailund, Thomas

    2002-01-01

    State space exploration is a main approach to verification of finite-state systems. The sweep-line method exploits a certain kind of progress present in many systems to reduce peak memory usage during state space exploration. We present a new sweep-line algorithm for a compositional setting where…

  12. Automatic control algorithm effects on energy production

    Science.gov (United States)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.

  13. Development and Validation of a Spike Detection and Classification Algorithm Aimed at Implementation on Hardware Devices

    Directory of Open Access Journals (Sweden)

    E. Biffi

    2010-01-01

    Full Text Available Neurons cultured in vitro on MicroElectrode Array (MEA) devices connect to each other, forming a network. To study electrophysiological activity and long-term plasticity effects, long recording periods and spike-sorting methods are needed. Therefore, on-line and real-time analysis, optimization of memory use, and improvement of the data transmission rate become necessary. We developed an algorithm for amplitude-threshold spike detection, whose performance was verified with (a) statistical analysis on both simulated and real signals and (b) Big O notation. Moreover, we developed a PCA-based hierarchical classifier, evaluated on simulated and real signals. Finally, we propose a spike detection hardware design on FPGA, whose feasibility was verified in terms of the number of CLBs, memory occupation, and timing requirements; once realized, it will be able to execute on-line detection and real-time waveform analysis, reducing data storage problems.
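
    The amplitude-threshold detection step itself can be stated in a few lines. The sketch below uses a robust noise estimate (median absolute deviation) to set the threshold and enforces a refractory period; the sampling rate, multiplier, and spike amplitudes are illustrative rather than the paper's values.

    ```python
    # Amplitude-threshold spike detection with a refractory period.
    import numpy as np

    def detect_spikes(signal, fs, k=5.0, refractory_ms=1.0):
        # robust noise estimate from the median absolute deviation
        sigma = np.median(np.abs(signal)) / 0.6745
        thresh = k * sigma
        refractory = int(refractory_ms * fs / 1000)
        spikes, last = [], -refractory
        for i, v in enumerate(signal):
            if abs(v) > thresh and i - last >= refractory:
                spikes.append(i)
                last = i
        return spikes

    fs = 25000
    rng = np.random.default_rng(0)
    trace = rng.normal(scale=10e-6, size=fs)   # 1 s of ~10 uV noise
    for pos in (5000, 12000, 20000):
        trace[pos] += 80e-6                    # three 80 uV test spikes
    print(detect_spikes(trace, fs))            # ~[5000, 12000, 20000]
    ```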

  14. A Compression Algorithm in Wireless Sensor Networks of Bearing Monitoring

    International Nuclear Information System (INIS)

    Zheng Bin; Meng Qingfeng; Wang Nan; Li Zhi

    2011-01-01

    The energy consumption of wireless sensor networks (WSNs) is always an important problem in their application. This paper proposes a data compression algorithm to reduce the amount of data and the energy consumed during data transmission in an on-line WSN-based bearing monitoring system. The proposed compression algorithm is based on lifting wavelets, zerotree coding, and Huffman coding. The 5/3 lifting wavelet is used to divide the data into different frequency bands and extract signal characteristics; zerotree coding is applied to calculate dynamic thresholds that retain the characteristic data; and the retained data are then encoded by Huffman coding to further enhance the compression ratio. The algorithm was validated by simulation in Matlab, and the results show that it is very suitable for the compression of bearing monitoring data. The algorithm has been successfully used in the on-line WSN-based bearing monitoring system, in which a TI DSP TMS320F2812 realizes the algorithm.
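
    The 5/3 lifting step is small enough to show in full. The sketch below performs one forward/inverse level of the reversible 5/3 lifting wavelet (integer arithmetic, as in JPEG 2000 reversible filtering, with a periodic border for brevity); zerotree thresholding and Huffman coding would then operate on the resulting sub-band coefficients.

    ```python
    # One level of the integer 5/3 lifting wavelet (predict + update steps).
    import numpy as np

    def lift53_forward(x):
        """Split x (even length) into lowpass s and highpass d coefficients."""
        x = np.asarray(x, dtype=int)
        s, d = x[0::2].copy(), x[1::2].copy()
        # predict: subtract the average of the two neighbouring even samples
        d -= (s + np.roll(s, -1)) >> 1
        # update: add a quarter of the neighbouring detail samples (rounded)
        s += (np.roll(d, 1) + d + 2) >> 2
        return s, d

    def lift53_inverse(s, d):
        s = s - ((np.roll(d, 1) + d + 2) >> 2)   # undo update
        d = d + ((s + np.roll(s, -1)) >> 1)      # undo predict
        x = np.empty(s.size + d.size, dtype=int)
        x[0::2], x[1::2] = s, d
        return x

    sig = np.array([10, 12, 14, 13, 11, 9, 8, 10])
    s, d = lift53_forward(sig)
    print(np.array_equal(lift53_inverse(s, d), sig))   # True: lossless
    ```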

  15. Effects of indirect lightning strokes on lossy power distribution lines in the presence of nonlinear loads

    International Nuclear Information System (INIS)

    Rahimian, M. S.; Sadeghi, S. H. H.; Moini, R.

    2003-01-01

    Cloud-to-ground lightning strokes can induce dangerous overvoltages on overhead power distribution lines. In this paper, a new algorithm is presented for analyzing how such induced overvoltages propagate within a distribution network that includes nonlinear apparatus. The coupling between the lightning channel and the overhead line is based on an antenna-theory model and employs the method of moments to solve the governing electric field integral equation. The computed induced overvoltage is then used in the electromagnetic transients program to analyze its propagation within the distribution network. The accuracy of the new coupling method is demonstrated by comparing the induced overvoltages calculated with the new method against those obtained using conventional methods. Simulation results are presented to show how the induced overvoltage penetrates a typical distribution network consisting of overhead lines and underground cables, a distribution transformer protected by surge arresters, and a three-phase resistive load.

  16. Seizure detection algorithms based on EMG signals

    DEFF Research Database (Denmark)

    Conradsen, Isa

    Background: the currently used non-invasive seizure detection methods are not reliable. Muscle fibers are directly connected to the nerves, whereby electric signals are generated during activity. Therefore, an alarm system based on electromyography (EMG) signals is a theoretical possibility. Objective...... on the amplitude of the signal. The other algorithm was based on information about the signal in the frequency domain, and it focused on synchronisation of the electrical activity in a single muscle during the seizure. Results: The amplitude-based algorithm reliably detected seizures in 2 of the patients, while...... the frequency-based algorithm was efficient for detecting the seizures in the third patient. Conclusion: Our results suggest that EMG signals could be used to develop an automatic seizure detection system. However, different patients might require different types of algorithms/approaches....

  17. Comparing Evolutionary Strategies on a Biobjective Cultural Algorithm

    Directory of Open Access Journals (Sweden)

    Carolina Lagos

    2014-01-01

    Full Text Available Evolutionary algorithms have been widely used to solve large and complex optimisation problems. Cultural algorithms (CAs) are evolutionary algorithms that have been used to solve both single-objective and, to a lesser extent, multiobjective optimisation problems. In order to solve these optimisation problems, CAs make use of different strategies such as normative knowledge, historical knowledge, and circumstantial knowledge, among others. In this paper we present a comparison among CAs that make use of different evolutionary strategies: the first one implements historical knowledge, the second one considers circumstantial knowledge, and the third one implements normative knowledge. These CAs are applied to a biobjective uncapacitated facility location problem (BOUFLP), the biobjective version of the well-known uncapacitated facility location problem. To the best of our knowledge, only a few articles have applied evolutionary multiobjective algorithms to the BOUFLP, and none of those has focused on the impact of the evolutionary strategy on algorithm performance. Our biobjective cultural algorithm, called BOCA, obtains important improvements when compared to other well-known evolutionary biobjective optimisation algorithms such as PAES and NSGA-II. The conflicting objective functions considered in this study are cost minimisation and coverage maximisation. Solutions obtained by each algorithm are compared using the hypervolume S metric.
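
    For the biobjective case, the hypervolume S metric reduces to a sum of rectangle areas once the front is sorted. Below is a sketch under the assumption that both objectives are minimised (a maximisation objective such as coverage can be negated first); the reference point is a choice of the experimenter, and the numbers in the example are hypothetical.

      def hypervolume_2d(front, ref):
          # Area dominated by a biobjective minimisation front and bounded
          # by the reference point `ref`, accumulated as rectangle strips.
          hv, prev_f2 = 0.0, ref[1]
          for f1, f2 in sorted(front):
              if f2 < prev_f2:                 # non-dominated point
                  hv += (ref[0] - f1) * (prev_f2 - f2)
                  prev_f2 = f2
          return hv

      # Three trade-off solutions (cost, -coverage), reference point (10, 10).
      print(hypervolume_2d([(1, 7), (3, 4), (6, 2)], (10, 10)))  # 56.0

    A larger hypervolume means the front dominates more of the objective space relative to the reference point, so the metric rewards both convergence and spread.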

  18. Metal artifact reduction algorithm based on model images and spatial information

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jay [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Shih, Cheng-Ting [Department of Biomedical Engineering and Environmental Sciences, National Tsing-Hua University, Hsinchu, Taiwan (China); Chang, Shu-Jun [Health Physics Division, Institute of Nuclear Energy Research, Taoyuan, Taiwan (China); Huang, Tzung-Chi [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan (China); Sun, Jing-Yi [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Wu, Tung-Hsin, E-mail: tung@ym.edu.tw [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, No.155, Sec. 2, Linong Street, Taipei 112, Taiwan (China)

    2011-10-01

    Computed tomography (CT) has become one of the most favorable choices for diagnosis of trauma. However, high-density metal implants can induce metal artifacts in CT images, compromising image quality. In this study, we proposed a model-based metal artifact reduction (MAR) algorithm. First, we built a model image using the k-means clustering technique with spatial information and calculated the difference between the original image and the model image. Then, the projection data of these two images were combined using an exponential weighting function. Finally, the corrected image was reconstructed using the filtered back-projection algorithm. Two metal-artifact contaminated images were studied. For the cylindrical water phantom image, the metal artifact was effectively removed. The mean CT number of water was improved from -28.95±97.97 to -4.76±4.28. For the clinical pelvic CT image, the dark band and the metal line were removed, and the continuity and uniformity of the soft tissue were recovered as well. These results indicate that the proposed MAR algorithm is useful for reducing metal artifacts and could improve the diagnostic value of metal-artifact contaminated CT images.

  19. X-ray differential phase-contrast tomographic reconstruction with a phase line integral retrieval filter

    International Nuclear Information System (INIS)

    Fu, Jian; Hu, Xinhua; Li, Chen

    2015-01-01

    We report an alternative reconstruction technique for x-ray differential phase-contrast computed tomography (DPC-CT). This approach is based on a new phase line integral retrieval filter, which is rooted in the derivative property of the Fourier transform and counteracts the differential nature of the DPC-CT projections. It first retrieves the phase line integrals from the DPC-CT projections. Then the standard filtered back-projection (FBP) algorithms popular in x-ray absorption-contrast CT are applied directly to the retrieved phase line integrals to reconstruct the DPC-CT images. Compared with conventional DPC-CT reconstruction algorithms, the proposed method removes the imaginary Hilbert filter and allows the direct use of absorption-contrast FBP algorithms. Consequently, FBP-oriented image processing techniques and reconstruction acceleration software that have already been used successfully in absorption-contrast CT can be adopted directly to improve DPC-CT image quality and speed up the reconstruction.
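
    The core of such a retrieval filter is integration in the Fourier domain: since F{dphi/dx}(f) = (i*2*pi*f) * F{phi}(f), dividing the transformed differential projection by i*2*pi*f recovers the phase line integral up to an integration constant. A one-row Python sketch; the function names and the zero-DC convention are assumptions for illustration.

      import numpy as np

      def integrate_projection(dphi, dx):
          # Divide the spectrum of the differential projection by i*2*pi*f;
          # the DC term (an integration constant) is set to zero.
          f = np.fft.fftfreq(len(dphi), d=dx)
          denom = 2j * np.pi * f
          denom[0] = 1.0
          Phi = np.fft.fft(dphi) / denom
          Phi[0] = 0.0
          return np.fft.ifft(Phi).real

      # Sanity check on a known profile.
      x = np.linspace(0.0, 1.0, 256, endpoint=False)
      phi = np.sin(2.0 * np.pi * 3.0 * x)
      rec = integrate_projection(np.gradient(phi, x), x[1] - x[0])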

  20. On line photochemically induced excitation-emission-kinetic four-way data

    International Nuclear Information System (INIS)

    Jimenez Giron, A.; Duran-Meras, I.; Espinosa-Mansilla, A.; Munoz de la Pena, A.; Canada Canada, F.; Olivieri, A.C.

    2008-01-01

    The determination of folic acid and its two main serum metabolites, 5-methyltetrahydrofolic acid and tetrahydrofolic acid, has been accomplished using four-way data modelled by the third-order multivariate calibration methods unfolded and N-dimensional partial least-squares (U-PLS and N-PLS), in combination with the separate procedure known as residual trilinearization (RTL). The four-way data were acquired by following the photochemical reaction of these compounds under on-line irradiation with a UV lamp. The excitation-emission matrices (EEMs) were recorded as a function of the irradiation time, using a fast-scanning spectrofluorimeter. The method achieves selectivity from the different rates at which the corresponding photoproducts of the folic acid derivatives are formed and degraded. Several N-dimensional chemometric algorithms were tested, and the method was applied to the determination of these compounds in serum samples. The best algorithms for the multivariate calibration were U-PLS and N-PLS in combination with the separate residual trilinearization procedure, achieving the second-order advantage. The approach minimizes or eliminates traditionally time-consuming sample pre-treatments and can facilitate quantifying an analyte in its native environment.

  1. High Precision Edge Detection Algorithm for Mechanical Parts

    Directory of Open Access Journals (Sweden)

    Duan Zhenyun

    2018-04-01

    Full Text Available High-precision and high-efficiency measurement is becoming an imperative requirement for many mechanical parts. In this study, a subpixel-level edge detection algorithm based on the Gaussian integral model is proposed. For this purpose, the step-edge normal section line Gaussian integral model of the backlight image is constructed, combining the point spread function and the single-step model. The gray values of discrete points on the normal section line of the pixel edge are then calculated by surface interpolation, and the coordinate and gray information affected by noise are fitted to the Gaussian integral model. A precise subpixel edge location is thereby determined by searching for the mean point. Finally, a gear tooth was measured by an M&M3525 gear measurement center to verify the proposed algorithm. The theoretical analysis and experimental results show that local edge fluctuation is reduced effectively by the proposed method in comparison with existing subpixel edge detection algorithms, and that the subpixel edge location accuracy and computation speed are improved. The maximum error of the gear tooth profile total deviation is 1.9 μm compared with the measurement result of the gear measurement center, indicating that the method is reliable enough to meet the requirements of high-precision measurement.
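
    The Gaussian integral (error-function) edge model can be fitted directly to the gray values sampled along the normal section line; the fitted mean point x0 is the subpixel edge location. A sketch with illustrative parameter names, not the paper's exact fitting procedure:

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erf

      def edge_model(x, a, b, x0, sigma):
          # Gaussian-integral (error-function) model of a blurred step edge.
          return a + b * erf((x - x0) / (np.sqrt(2.0) * sigma))

      def subpixel_edge(x, gray):
          # Fit the sampled profile and return the mean point x0.
          p0 = [gray.min(), (gray.max() - gray.min()) / 2.0,
                x[np.argmax(np.abs(np.gradient(gray)))], 1.0]
          popt, _ = curve_fit(edge_model, x, gray, p0=p0)
          return popt[2]

      # Synthetic demo: a blurred step sampled at pixel centres.
      x = np.arange(-10.0, 10.0)
      print(subpixel_edge(x, edge_model(x, 120.0, 50.0, 0.3, 1.5)))  # ~0.3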

  2. Flexible job-shop scheduling based on genetic algorithm and simulation validation

    Directory of Open Access Journals (Sweden)

    Zhou Erming

    2017-01-01

    Full Text Available This paper selects the flexible job-shop scheduling problem as its research object and constructs a mathematical model aimed at minimizing the maximum makespan. Taking the transmission reverse-gear production line of a transmission corporation as an example, a genetic algorithm is applied to the flexible job-shop scheduling problem to obtain the specific optimal scheduling results with MATLAB. DELMIA/QUEST, based on 3D discrete event simulation, is applied to construct the physical model of the production workshop. On the basis of the optimal scheduling results, the logical links of the physical model of the production workshop are established and the appropriate process parameters are imported to run a virtual simulation of the production workshop. Finally, analysis of the simulation results shows that the scheduling results are effective and reasonable.

  3. New baseline correction algorithm for text-line recognition with bidirectional recurrent neural networks

    Science.gov (United States)

    Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle

    2013-04-01

    Many preprocessing techniques have been proposed for isolated word recognition. However, recently, recognition systems have dealt with text blocks and their compound text lines. In this paper, we propose a new preprocessing approach to efficiently correct baseline skew and fluctuations. Our approach is based on a sliding window within which the vertical position of the baseline is estimated. Segmentation of text lines into subparts is, thus, avoided. Experiments conducted on a large publicly available database (Rimes), with a BLSTM (bidirectional long short-term memory) recurrent neural network recognition system, show that our baseline correction approach highly improves performance.

  4. Integrating robust timetabling in line plan optimization for railway systems

    DEFF Research Database (Denmark)

    Burggraeve, Sofie; Bull, Simon Henry; Vansteenwegen, Pieter

    2017-01-01

    We propose a heuristic algorithm to build a railway line plan from scratch that minimizes passenger travel time and operator cost and for which a feasible and robust timetable exists. A line planning module and a timetabling module work iteratively and interactively. The line planning module......, but is constrained by limited shunt capacity. While the operator and passenger cost remain close to those of the initially and (for these costs) optimally built line plan, the timetable corresponding to the finally developed robust line plan significantly improves the minimum buffer time, and thus the robustness...... creates an initial line plan. The timetabling module evaluates the line plan and identifies a critical line based on minimum buffer times between train pairs. The line planning module proposes a new line plan in which the time length of the critical line is modified in order to provide more flexibility...

  5. On-line thermal margin estimation of a PWR core using a neural network approach

    International Nuclear Information System (INIS)

    Park, Soon Ok; Kim, Hyun Koon; Lee, Seung Hynk; Chang, Soon Heung

    1992-01-01

    A new approach for on-line thermal margin monitoring of a PWR core is proposed in this paper, in which a neural network model is introduced to predict the DNBR values at given reactor operating conditions. The neural network is trained by the back-propagation algorithm with optimized random training data and is tested to investigate its generalization performance for the steady-state operating region as well as for transient situations where DNB is of primary concern. The test results show that a high level of accuracy in predicting the DNBR can be achieved by the neural network model compared to the detailed code results. An insight gained from this study is that the neural network model for estimating DNB performance can be a viable tool for on-line thermal margin monitoring of a nuclear power plant.

  6. Parameter Selection for Ant Colony Algorithm Based on Bacterial Foraging Algorithm

    Directory of Open Access Journals (Sweden)

    Peng Li

    2016-01-01

    Full Text Available The optimal performance of the ant colony algorithm (ACA) mainly depends on suitable parameters; therefore, parameter selection for the ACA is important. We propose a parameter selection method for the ACA based on the bacterial foraging algorithm (BFA), considering the effects of coupling between different parameters. Firstly, the ACA parameters are mapped into a multidimensional space, using a chemotactic operator to ensure that each parameter group approaches the optimal value, speeding up the convergence of each parameter set. Secondly, the optimization of the entire parameter set is accelerated using a reproduction operator. Finally, the elimination-dispersal operator is used to strengthen the global optimization of the parameters, which avoids falling into a local optimum. In order to validate the effectiveness of this method, the results were compared with those obtained using a genetic algorithm (GA) and particle swarm optimization (PSO), and simulations were conducted using different grid maps for robot path planning. The results indicated that parameter selection for the ACA based on the BFA was the superior method, able to determine the best parameter combination rapidly, accurately, and effectively.

  7. A Novel Method for Decoding Any High-Order Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Fei Ye

    2014-01-01

    Full Text Available This paper proposes a novel method for decoding any high-order hidden Markov model. First, the high-order hidden Markov model is transformed into an equivalent first-order hidden Markov model by Hadar’s transformation. Next, the optimal state sequence of the equivalent first-order hidden Markov model is recognized by the existing Viterbi algorithm of the first-order hidden Markov model. Finally, the optimal state sequence of the high-order hidden Markov model is inferred from the optimal state sequence of the equivalent first-order hidden Markov model. This method provides a unified algorithm framework for decoding hidden Markov models including the first-order hidden Markov model and any high-order hidden Markov model.
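
    The first-order decoding step that this reduction relies on is the classical Viterbi algorithm. The following Python sketch shows a log-space implementation; the variable names and the toy example are illustrative, and decoding a high-order model would amount to running it on the augmented (equivalent first-order) state space.

      import numpy as np

      def viterbi(pi, A, B, obs):
          # Log-space Viterbi: pi initial probabilities (N,), A transition
          # matrix (N, N), B emission matrix (N, M), obs symbol indices.
          logA, logB = np.log(A), np.log(B)
          N, T = len(pi), len(obs)
          delta = np.log(pi) + logB[:, obs[0]]
          psi = np.zeros((T, N), dtype=int)
          for t in range(1, T):
              scores = delta[:, None] + logA        # scores[i, j]: i -> j
              psi[t] = np.argmax(scores, axis=0)
              delta = scores[psi[t], np.arange(N)] + logB[:, obs[t]]
          path = [int(np.argmax(delta))]
          for t in range(T - 1, 0, -1):
              path.append(int(psi[t][path[-1]]))
          return path[::-1]

      # Example: 2 hidden states, 2 observation symbols.
      pi = np.array([0.6, 0.4])
      A = np.array([[0.7, 0.3], [0.4, 0.6]])
      B = np.array([[0.9, 0.1], [0.2, 0.8]])
      print(viterbi(pi, A, B, [0, 0, 1, 1, 1]))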

  8. Hidden Markov Model based Mobility Learning for Improving Indoor Tracking of Mobile Users

    DEFF Research Database (Denmark)

    Pedersen, Nikolaj Bisgaard; Laursen, Troels; Nielsen, Jimmy Jessen

    2012-01-01

    Indoors, a user's movements are typically confined by walls, corridors, and doorways, and further he is typically repeating the same movements, such as walking between certain points in the building. Conventional indoor localization systems do not usually take these properties of the user...... likely trajectory is then calculated using an extended version of the Viterbi algorithm. The results show significant improvements of the proposed algorithm compared to a simpler moving-average smoothing.

  9. Car painting process scheduling with harmony search algorithm

    Science.gov (United States)

    Syahputra, M. F.; Maiyasya, A.; Purnamawati, S.; Abdullah, D.; Albra, W.; Heikal, M.; Abdurrahman, A.; Khaddafi, M.

    2018-02-01

    Automotive painting programs paint the car body using robotic power, bringing efficiency to the production system. The production system becomes more efficient still when attention is paid to the scheduling of car orders, which is done by considering the body shape of each car to be painted. Flow-shop scheduling is a scheduling model in which all jobs to be processed flow in the same product direction/path. Scheduling problems arise when there are n jobs to be processed on the machines: it must be specified which job is done first and how the jobs are allocated to the machines in order to obtain a scheduled production process. The Harmony Search Algorithm is a metaheuristic optimization algorithm inspired by music, specifically by the observation that musicians search for perfect harmony; this search for musical harmony is analogous to searching for the optimum in an optimization process. Based on the tests performed, the optimal car sequence with the minimum makespan value was obtained.

  10. Zero velocity interval detection based on a continuous hidden Markov model in micro inertial pedestrian navigation

    Science.gov (United States)

    Sun, Wei; Ding, Wei; Yan, Huifang; Duan, Shunli

    2018-06-01

    Shoe-mounted pedestrian navigation systems based on micro inertial sensors rely on zero velocity updates to correct their positioning errors in time, which makes determining the zero velocity interval play a key role during normal walking. However, as walking gaits are complicated and vary from person to person, it is difficult to detect walking gaits with a fixed-threshold method. This paper proposes a pedestrian gait classification method based on a hidden Markov model. Pedestrian gait data are collected with a micro inertial measurement unit installed at the instep. On the basis of an analysis of the characteristics of the pedestrian walk, a single-axis angular rate gyro output is used to classify gait features. The angular rate data are modeled as a univariate Gaussian mixture model with three components, and a four-state left–right continuous hidden Markov model (CHMM) is designed to classify the normal walking gait. The model parameters are trained and optimized using the Baum–Welch algorithm, and the sliding window Viterbi algorithm is then used to decode the gait. Walking data were collected from eight subjects walking along the same route at three different speeds; the leave-one-subject-out cross-validation method was used to test the model. Experimental results show that the proposed algorithm can accurately detect the zero velocity intervals of different walking gaits. The location experiment shows that the precision of CHMM-based pedestrian navigation improved by 40% when compared to the angular rate threshold method.
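
    A sliding-window variant of Viterbi decoding, as used here, bounds latency and memory for on-line use by decoding only the most recent observations and committing the oldest state of each window. The sketch below reuses the viterbi() function from the sketch under record 7 above; the window length and the reuse of the initial distribution at each window start are simplifying assumptions.

      def sliding_window_viterbi(pi, A, B, obs, window=50):
          # Decode a window of recent observations and commit only the
          # oldest in-window state, bounding latency and memory for
          # on-line use. Reuses viterbi() from the earlier sketch;
          # restarting each window from pi is a simplifying assumption.
          committed = []
          for end in range(window, len(obs) + 1):
              path = viterbi(pi, A, B, obs[end - window:end])
              committed.append(path[0])  # state at time end - window
          return committed

    The trade-off is a possible loss of optimality near the window edges in exchange for constant per-step cost, which is what makes the decoder usable in real time.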

  11. Alternative solution algorithm for coupled thermal-hydraulic problems

    International Nuclear Information System (INIS)

    Farnsworth, D.A.; Rice, J.G.

    1986-01-01

    A thermal-hydraulic system involves flow of a fluid for which a combined solution of the continuity, momentum, and energy equations is required. When the solutions of the energy and momentum fields depend on each other, the system is said to be thermally coupled. A common problem encountered in the numerical solution of strongly coupled thermal-hydraulic problems is a very slow rate of convergence or a complete lack of convergence. Often this degradation in convergence is due to the lack of true coupling between the energy and momentum fields during the iteration process. In the most widely used solution algorithms - such as the SIMPLE algorithm and its many variants - a sequential solution technique is required. That is, the solution process alternates between the flow and energy fields until a converged solution is obtained. This approach allows only implicit energy-momentum coupling. To improve the convergence rate for strongly coupled problems, a practical solution algorithm that can accommodate true energy-momentum coupling terms was developed. A complete simultaneous (rather than sequential) solution of the governing conservation equations utilizing a line-by-line solution was developed, and direct coupling terms between the momentum and energy fields were added using a modified Newton-Raphson technique.

  12. Assistance algorithm of nursing for amiodarone intravenous infusion

    Directory of Open Access Journals (Sweden)

    Francimar Tinoco de Oliveira

    2014-12-01

    Full Text Available This study aimed at identifying scientific publications on phlebitis caused by amiodarone and proposes a nursing care algorithm for interventions in intravenous amiodarone administration, grounded in the Infusion Nursing Society and the Centers for Disease Control and Prevention. It is a descriptive study based on an integrative review of the MedLine, LILACS, IBECS, BDENF, Cochrane Library, and Scielo databases, covering publications from 2006 to 2013. The sample consisted of nine articles. The evidence pointed to the incidence of phlebitis due to the infusion of amiodarone and the need to control this event. The algorithm proposed shows the materials to be used and the procedure of drug administration in order to minimize injury. Besides subsidizing the development of future studies, this algorithm also promotes the incorporation of the best recommendations into interventionist clinical practice.

  13. Star point centroid algorithm based on background forecast

    Science.gov (United States)

    Wang, Jin; Zhao, Rujin; Zhu, Nan

    2014-09-01

    The calculation of the star point centroid is a key step in reducing star tracker measurement error. A star map photographed by an APS detector contains several noise sources that strongly affect the accuracy of the centroid calculation. Through analysis of the characteristics of star map noise, an algorithm for calculating the star point centroid based on background forecasting is presented in this paper. The experiment proves the validity of the algorithm. Compared with the classic algorithm, this algorithm not only improves the accuracy of the star point centroid calculation, but also requires no calibration data memory. This algorithm has been applied successfully in a certain star tracker.
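
    A compact Python sketch of the centroid step: here the background level is forecast simply from the patch border (a stand-in for the paper's background-forecast model) and subtracted before the intensity-weighted centroid is taken. The function and variable names are illustrative.

      import numpy as np

      def centroid_with_background(patch):
          # Estimate the background from the patch border, subtract it,
          # and compute the intensity-weighted centroid of what remains.
          border = np.concatenate([patch[0, :], patch[-1, :],
                                   patch[1:-1, 0], patch[1:-1, -1]])
          w = np.clip(patch - np.median(border), 0.0, None)
          total = w.sum()
          if total == 0.0:
              raise ValueError("no star signal above the background level")
          ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
          return (xs * w).sum() / total, (ys * w).sum() / total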

  14. EXTRACTION OF BUILDING BOUNDARY LINES FROM AIRBORNE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    Y.-H. Tseng

    2016-10-01

    Full Text Available Building boundary lines are important spatial features that characterize topographic maps and three-dimensional (3D) city models. Airborne LiDAR point clouds provide adequate 3D spatial information for building boundary mapping. However, the boundary feature information contained in point clouds is implicit. This study focuses on developing an automatic algorithm for building boundary line extraction from airborne LiDAR data. In an airborne LiDAR dataset, top surfaces of buildings, such as roofs, tend to have densely distributed points, but vertical surfaces, such as walls, usually have sparsely distributed points or even no points. The intersection lines of roof and wall planes are, therefore, not clearly defined in point clouds. This paper proposes a novel method to extract these boundary lines of building edges. The extracted line features can be used as fundamental data to generate topographic maps or 3D city models of an urban area. The proposed method includes two major processing steps. The first step is to extract building boundary points from the point clouds. The second step then forms building boundary line features based on the extracted boundary points; in this step, a line fitting algorithm is developed to improve the edge extraction from LiDAR data. Eight test objects, including 4 simple low buildings and 4 complicated tall buildings, were selected from the buildings on the NCKU campus. The test results demonstrate the feasibility of the proposed method in extracting complicated building boundary lines. Some results that are not as good as expected suggest the need for further improvement of the method.

  15. Algorithms for orbit control on SPEAR

    International Nuclear Information System (INIS)

    Corbett, J.; Keeley, D.; Hettel, R.; Linscott, I.; Sebek, J.

    1994-06-01

    A global orbit feedback system has been installed on SPEAR to help stabilize the position of the photon beams. The orbit control algorithms depend on either harmonic reconstruction of the orbit or eigenvector decomposition. The orbit motion is corrected by dipole corrector kicks determined from the inverse corrector-to-bpm response matrix. This paper outlines features of these control algorithms as applied to SPEAR

  16. AUTOMATIC RAILWAY POWER LINE EXTRACTION USING MOBILE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    S. Zhang

    2016-06-01

    Full Text Available Research on power line extraction technology using mobile laser point clouds has important practical significance for railway power line patrol work. In this paper, we present a new method for automatically extracting railway power lines from MLS (Mobile Laser Scanning) data. Firstly, according to the spatial structure characteristics of the power lines and the trajectory, the significant data are segmented piecewise. Then, a self-adaptive spatial region-growing method is used to extract the power lines running parallel to the rails. Finally, PCA (Principal Components Analysis) combined with information entropy theory is used to judge whether a section of the power line is a junction and, if so, which type of junction it belongs to. The least squares fitting algorithm is introduced to model the power line. An evaluation of the proposed method on a complicated railway point cloud acquired by a RIEGL VMX450 MLS system shows that the proposed method is promising.

  17. A real-time and closed-loop control algorithm for cascaded multilevel inverter based on artificial neural network.

    Science.gov (United States)

    Wang, Libing; Mao, Chengxiong; Wang, Dan; Lu, Jiming; Zhang, Junfeng; Chen, Xun

    2014-01-01

    In order to control the cascaded H-bridges (CHB) converter with a staircase modulation strategy in a real-time manner, a real-time, closed-loop control algorithm based on an artificial neural network (ANN) for the three-phase CHB converter is proposed in this paper. It costs little computation time and memory, and it has two steps. In the first step, the hierarchical particle swarm optimizer with time-varying acceleration coefficient (HPSO-TVAC) algorithm is employed to minimize the total harmonic distortion (THD) and generate the optimal switching angles offline. In the second step, part of the optimal switching angles are used to train an ANN, and the well-designed ANN can generate optimal switching angles in a real-time manner. Compared with previous real-time algorithms, the proposed algorithm is suitable for a wider range of modulation indices and results in a smaller THD and a lower calculation time. Furthermore, the well-designed ANN is embedded into a closed-loop control algorithm for the CHB converter with variable direct voltage (DC) sources. Simulation results demonstrate that the proposed closed-loop control algorithm is able to quickly stabilize the load voltage and minimize the line current's THD (<5%) under DC source or load disturbances. In the real design stage, a switching angle pulse generation scheme is proposed, and experimental results verify its correctness.
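
    For a staircase waveform with equal DC sources, the harmonic content follows directly from the switching angles, so the THD objective minimized in the first step can be evaluated in closed form. A sketch (the angle values are hypothetical; triplen harmonics are retained here, which makes the estimate conservative for line-to-line voltages):

      import numpy as np

      def staircase_thd(theta, n_max=49):
          # Odd harmonics of an s-cell staircase with equal DC sources:
          # V_n is proportional to (1/n) * sum(cos(n * theta_k)); the
          # common amplitude factor cancels in the THD ratio.
          theta = np.asarray(theta, dtype=float)
          v1 = np.sum(np.cos(theta))
          harm = sum((np.sum(np.cos(n * theta)) / n) ** 2
                     for n in range(3, n_max + 1, 2))
          return np.sqrt(harm) / v1

      print(staircase_thd([0.23, 0.54, 0.93]))  # hypothetical 3-cell angles

    An optimizer such as the HPSO-TVAC stage described above would search over the angle vector subject to 0 < theta_1 < ... < theta_s < pi/2 and a modulation-index constraint, using a function like this as the objective.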

  18. A Real-Time and Closed-Loop Control Algorithm for Cascaded Multilevel Inverter Based on Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Libing Wang

    2014-01-01

    Full Text Available In order to control the cascaded H-bridges (CHB) converter with a staircase modulation strategy in a real-time manner, a real-time, closed-loop control algorithm based on an artificial neural network (ANN) for the three-phase CHB converter is proposed in this paper. It costs little computation time and memory, and it has two steps. In the first step, the hierarchical particle swarm optimizer with time-varying acceleration coefficient (HPSO-TVAC) algorithm is employed to minimize the total harmonic distortion (THD) and generate the optimal switching angles offline. In the second step, part of the optimal switching angles are used to train an ANN, and the well-designed ANN can generate optimal switching angles in a real-time manner. Compared with previous real-time algorithms, the proposed algorithm is suitable for a wider range of modulation indices and results in a smaller THD and a lower calculation time. Furthermore, the well-designed ANN is embedded into a closed-loop control algorithm for the CHB converter with variable direct voltage (DC) sources. Simulation results demonstrate that the proposed closed-loop control algorithm is able to quickly stabilize the load voltage and minimize the line current's THD (<5%) under DC source or load disturbances. In the real design stage, a switching angle pulse generation scheme is proposed, and experimental results verify its correctness.

  19. Optimizing graph algorithms on pregel-like systems

    KAUST Repository

    Salihoglu, Semih

    2014-03-01

    We study the problem of implementing graph algorithms efficiently on Pregel-like systems, which can be surprisingly challenging. Standard graph algorithms in this setting can incur unnecessary inefficiencies such as slow convergence or high communication or computation cost, typically due to structural properties of the input graphs such as large diameters or skew in component sizes. We describe several optimization techniques to address these inefficiencies. Our most general technique is based on the idea of performing some serial computation on a tiny fraction of the input graph, complementing Pregel's vertex-centric parallelism. We base our study on thorough implementations of several fundamental graph algorithms, some of which have, to the best of our knowledge, not been implemented on Pregel-like systems before. The algorithms and optimizations we describe are fully implemented in our open-source Pregel implementation. We present detailed experiments showing that our optimization techniques improve runtime significantly on a variety of very large graph datasets.

  20. Testing block subdivision algorithms on block designs

    Science.gov (United States)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  1. The MIGHTI Wind Retrieval Algorithm: Description and Verification

    Science.gov (United States)

    Harding, Brian J.; Makela, Jonathan J.; Englert, Christoph R.; Marr, Kenneth D.; Harlander, John M.; England, Scott L.; Immel, Thomas J.

    2017-10-01

    We present an algorithm to retrieve thermospheric wind profiles from measurements by the Michelson Interferometer for Global High-resolution Thermospheric Imaging (MIGHTI) instrument on NASA's Ionospheric Connection Explorer (ICON) mission. MIGHTI measures interferometric limb images of the green and red atomic oxygen emissions at 557.7 nm and 630.0 nm, spanning 90-300 km. The Doppler shift of these emissions represents a remote measurement of the wind at the tangent point of the line of sight. Here we describe the algorithm which uses these images to retrieve altitude profiles of the line-of-sight wind. By combining the measurements from two MIGHTI sensors with perpendicular lines of sight, both components of the vector horizontal wind are retrieved. A comprehensive truth model simulation that is based on TIME-GCM winds and various airglow models is used to determine the accuracy and precision of the MIGHTI data product. Accuracy is limited primarily by spherical asymmetry of the atmosphere over the spatial scale of the limb observation, a fundamental limitation of space-based wind measurements. For 80% of the retrieved wind samples, the accuracy is found to be better than 5.8 m/s (green) and 3.5 m/s (red). As expected, significant errors are found near the day/night boundary and occasionally near the equatorial ionization anomaly, due to significant variations of wind and emission rate along the line of sight. The precision calculation includes pointing uncertainty and shot, read, and dark noise. For average solar minimum conditions, the expected precision meets requirements, ranging from 1.2 to 4.7 m/s.

  2. Near-minimum bit-error rate equalizer adaptation for PRML systems

    NARCIS (Netherlands)

    Riani, J.; Beneden, van S.J.L; Bergmans, J.W.M.; Immink, A.H.J.

    2007-01-01

    Receivers for partial response maximum-likelihood systems typically use a linear equalizer followed by a Viterbi detector. The equalizer tries to confine the channel intersymbol interference to a short span in order to limit the implementation complexity of the Viterbi detector. Equalization is

  3. Research on machine learning framework based on random forest algorithm

    Science.gov (United States)

    Ren, Qiong; Cheng, Hui; Han, Hai

    2017-03-01

    With the continuous development of machine learning, industry and academia have released many machine learning frameworks based on distributed computing platforms, and these have been widely used. However, existing machine learning frameworks are constrained by the limitations of the machine learning algorithms themselves, such as sensitivity to parameter choice, interference from noise, and a high barrier to use. This paper introduces the research background of machine learning frameworks and, in combination with the random forest algorithm commonly used for classification in machine learning, puts forward the research objectives and content, proposes an improved adaptive random forest algorithm (referred to as ARF), and, on the basis of ARF, designs and implements a machine learning framework.

  4. An adaptive occlusion culling algorithm for use in large VEs

    DEFF Research Database (Denmark)

    Bormann, Karsten

    2000-01-01

    The Hierarchical Occlusion Map algorithm is combined with Frustum Slicing to give a simpler occlusion-culling algorithm that more adequately caters to large, open VEs. The algorithm adapts to the level of visual congestion and is well suited for use with large, complex models with long mean free ...... line of sight ('the great outdoors'), models for which it is not feasible to construct, or search, a database of occluders to be rendered each frame....

  5. Sparse spectral deconvolution algorithm for noncartesian MR spectroscopic imaging.

    Science.gov (United States)

    Bhave, Sampada; Eslami, Ramin; Jacob, Mathews

    2014-02-01

    To minimize line shape distortions and spectral leakage artifacts in MR spectroscopic imaging (MRSI), a spatially and spectrally regularized non-Cartesian MRSI algorithm is introduced that uses line shape distortion priors, estimated from water reference data, to deconvolve the spectra. Sparse spectral regularization is used to minimize the noise amplification associated with deconvolution. A spiral MRSI sequence that heavily oversamples the central k-space regions is used to acquire the MRSI data. The spatial regularization term uses the spatial supports of the brain and extracranial fat regions to recover the metabolite spectra and nuisance signals at two different resolutions. Specifically, the nuisance signals are recovered at the maximum resolution to minimize spectral leakage, while the point spread functions of the metabolites are controlled to obtain an acceptable signal-to-noise ratio. Comparisons of the algorithm against Tikhonov-regularized reconstructions demonstrate considerably reduced line-shape distortions and improved metabolite maps. The proposed sparsity-constrained spectral deconvolution scheme is effective in minimizing line-shape distortions, and the dual-resolution reconstruction scheme is capable of minimizing spectral leakage artifacts.

  6. Evaluation of software sensors for on-line estimation of culture conditions in an Escherichia coli cultivation expressing a recombinant protein.

    Science.gov (United States)

    Warth, Benedikt; Rajkai, György; Mandenius, Carl-Fredrik

    2010-05-03

    Software sensors for monitoring and on-line estimation of critical bioprocess variables have mainly been used with standard bioreactor sensors, such as electrodes and gas analyzers, where algorithms in the software model have generated the desired state variables. In this article we propose that other on-line instruments, such as NIR probes and on-line HPLC, should be used to make more reliable and flexible software sensors. Five software sensor architectures were compared and evaluated: (1) biomass concentration from an on-line NIR probe, (2) biomass concentration from titrant addition, (3) specific growth rate from titrant addition, (4) specific growth rate from the NIR probe, and (5) specific substrate uptake rate and by-product rate from on-line HPLC and NIR probe signals. The software sensors were demonstrated on an Escherichia coli cultivation expressing a recombinant protein, green fluorescent protein (GFP), but the results could be extrapolated to other production organisms and product proteins. We conclude that well-maintained on-line instrumentation (hardware sensors) can increase the potential of software sensors. This would also strongly support the intentions with process analytical technology and quality-by-design concepts.

  7. a New Graduation Algorithm for Color Balance of Remote Sensing Image

    Science.gov (United States)

    Zhou, G.; Liu, X.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Pan, Q.

    2018-05-01

    In order to expand the field of view to obtain more data and information when doing research on remote sensing image, workers always need to mosaicking images together. However, the image after mosaic always has the large color differences and produces the gap line. This paper based on the graduation algorithm of tarigonometric function proposed a new algorithm of Two Quarter-rounds Curves (TQC). The paper uses the Gaussian filter to solve the program about the image color noise and the gap line. The paper used one of Greenland compiled data acquired in 1963 from Declassified Intelligence Photography Project (DISP) by ARGON KH-5 satellite, and used the photography of North Gulf, China, by Landsat satellite to experiment. The experimental results show that the proposed method has improved the accuracy of the results in two parts: on the one hand, for the large color differences remote sensing image will become more balanced. On the other hands, the remote sensing image will achieve more smooth transition.

  8. Quantum walk on a line with two entangled particles

    International Nuclear Information System (INIS)

    Omar, Y.; Paunkovic, N.; Sheridan, L.; Bose, S.; Mateus, P.

    2005-01-01

    Full text: We introduce the concept of a quantum walk with two particles and study it for the case of a discrete time walk on a line. A quantum walk with more than one particle may contain entanglement, thus offering a resource unavailable in the classical scenario and which can present interesting advantages. In this work, we show how the entanglement and the relative phase between the states describing the coin degree of freedom of each particle will influence the evolution of the quantum walk. In particular, the probability to find at least one particle in a certain position after N steps of the walk, as well as the average distance between the two particles, can be larger or smaller than the case of two unentangled particles, depending on the initial conditions we choose. This resource can then be tuned according to our needs, in particular to enhance a given application (algorithmic or other) based on a quantum walk. Experimental implementations are briefly discussed. (author)

  9. Optimizing graph algorithms on pregel-like systems

    KAUST Repository

    Salihoglu, Semih; Widom, Jennifer

    2014-01-01

    We study the problem of implementing graph algorithms efficiently on Pregel-like systems, which can be surprisingly challenging. Standard graph algorithms in this setting can incur unnecessary inefficiencies such as slow convergence or high

  10. A recursive economic dispatch algorithm for assessing the cost of thermal generator schedules

    International Nuclear Information System (INIS)

    Wong, K.P.; Doan, K.

    1992-01-01

    This paper develops an efficient, recursive algorithm for determining the economic power dispatch of thermal generators within the unit commitment environment. A method for incorporating the operating limits of all on-line generators and the limits due to ramping generators is developed in the paper. The developed algorithm is amenable to computer implementation in the artificial intelligence programming language Prolog. The performance of the developed algorithm is demonstrated through its application to evaluating the costs of dispatching 13 thermal generators within a generator schedule over a 24-hour scheduling horizon.

  11. Algorithm Research of Individualized Travelling Route Recommendation Based on Similarity

    Directory of Open Access Journals (Sweden)

    Xue Shan

    2015-01-01

    Full Text Available Although commercial recommendation systems have made certain achievements in travel route development, they face a series of challenges because of people's increasing interest in travelling. The core of a recommendation system is its recommendation algorithm, and a strong algorithm greatly improves the system as a whole. On this basis, this paper analyzes the traditional collaborative filtering algorithm, illustrates its deficiencies, such as rating unicity and rating matrix sparsity, and proposes an improved algorithm combining a user-based multi-similarity algorithm with a user-based element similarity algorithm, so as to compensate for the deficiencies of the traditional algorithm within a controllable range. Experimental results show that the improved algorithm has obvious advantages in comparison with the traditional one, and is particularly effective in remedying rating matrix sparsity and rating unicity.
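
    For reference, the traditional user-based collaborative filtering baseline that the paper improves on can be sketched as follows; the rating convention (0 = unrated), the cosine similarity over co-rated items, and the neighbourhood size k are illustrative assumptions, not the paper's improved algorithm.

      import numpy as np

      def cosine_sim(a, b):
          # Cosine similarity over co-rated items only (0 means unrated).
          mask = (a > 0) & (b > 0)
          if not mask.any():
              return 0.0
          va, vb = a[mask], b[mask]
          return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

      def predict(ratings, user, item, k=2):
          # Similarity-weighted average over the k most similar users
          # who have rated the item.
          sims = np.array([cosine_sim(ratings[user], ratings[u])
                           if u != user and ratings[u, item] > 0 else 0.0
                           for u in range(ratings.shape[0])])
          top = np.argsort(sims)[::-1][:k]
          if sims[top].sum() == 0.0:
              return 0.0                      # no usable neighbours
          return float(sims[top] @ ratings[top, item] / sims[top].sum())

    The sparsity problem the paper targets is visible here: when few items are co-rated, the similarity estimates become unreliable, which is what combining multiple similarity measures is meant to mitigate.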

  12. Optimization of X-ray phase-contrast imaging based on in-line holography

    International Nuclear Information System (INIS)

    Wu Xizeng; Liu Hong; Yan Aimin

    2005-01-01

    This paper introduces a newly conceived formalism for clinical in-line phase-contrast X-ray imaging. The new formalism applies not only to the ideal 'thin' objects analyzed in previous studies, but also to the real-world tissues encountered in actual clinical practice. Moreover, we have identified the four clinically important factors that affect phase-contrast characteristics: (1) body part attenuation; (2) the spatial coherence of incident X-rays from an X-ray tube; (3) the polychromatic nature of the X-ray source; and (4) the radiation dose to patients in clinical applications. Techniques of phase-image reconstruction based on the new X-ray in-line holography theory are discussed. Numerical simulations are described which were used to validate the theory. The design parameters of an optimal clinical phase-contrast mammographic imaging system, determined from the new theory and validated in the simulations, are presented. The theory, image reconstruction algorithms, and numerical simulation techniques presented in this paper can be applied widely to clinical diagnostic X-ray imaging applications.

  13. A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm.

    Science.gov (United States)

    Wang, Yun-Ting; Peng, Chao-Chung; Ravankar, Ankit A; Ravankar, Abhijeet

    2018-04-23

    In past years, there has been significant progress in the field of indoor robot localization. To precisely recover position, robots usually rely on multiple on-board sensors. Nevertheless, this raises the overall system cost and increases computation. In this research work, we consider a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To reduce the computational effort and preserve localization robustness, a weighted parallel iterative closest point (WP-ICP) method with interpolation is presented. As compared to traditional ICP, the point cloud is first processed to extract corner and line features before applying point registration. Later, points labeled as corners are only matched with the corner candidates, and points labeled as lines are only matched with the line candidates. Moreover, their ICP confidence levels are fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the number of ICP iterations. Finally, based on well-constructed indoor layouts, experimental comparisons are carried out under both clean and perturbed environments. It is shown that the proposed method significantly reduces computational effort while preserving localization precision.
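
    For orientation, one iteration of plain point-to-point ICP in 2D looks as follows; the paper's WP-ICP additionally separates corner and line features and weights them by confidence, which this sketch omits.

      import numpy as np

      def icp_step(src, dst):
          # Match each source point to its nearest destination point,
          # then solve the optimal rigid transform in closed form (Kabsch).
          d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
          matched = dst[np.argmin(d2, axis=1)]
          mu_s, mu_m = src.mean(0), matched.mean(0)
          H = (src - mu_s).T @ (matched - mu_m)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:            # guard against reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          t = mu_m - R @ mu_s
          return (R @ src.T).T + t, R, t      # iterate until convergence

    Restricting the correspondence search to feature-compatible points, as WP-ICP does, shrinks the distance matrix computed in the first step and reduces the chance of a wrong nearest-neighbour match.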

  14. A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm

    Directory of Open Access Journals (Sweden)

    Yun-Ting Wang

    2018-04-01

    Full Text Available In past years, there has been significant progress in the field of indoor robot localization. To precisely recover position, robots usually rely on multiple on-board sensors. Nevertheless, this raises the overall system cost and increases computation. In this research work, we consider a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To reduce the computational effort and preserve localization robustness, a weighted parallel iterative closest point (WP-ICP) method with interpolation is presented. As compared to traditional ICP, the point cloud is first processed to extract corner and line features before applying point registration. Later, points labeled as corners are only matched with the corner candidates, and points labeled as lines are only matched with the line candidates. Moreover, their ICP confidence levels are fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the number of ICP iterations. Finally, based on well-constructed indoor layouts, experimental comparisons are carried out under both clean and perturbed environments. It is shown that the proposed method significantly reduces computational effort while preserving localization precision.

  15. Fast algorithms for transport models. Final report

    International Nuclear Information System (INIS)

    Manteuffel, T.A.

    1994-01-01

    This project has developed a multigrid-in-space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space, which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously; it takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid-in-space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi-type iteration was more efficient than the block red/black iteration. Given sufficient processors, all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state of the art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel-in-angle algorithm was developed, along with a parallel version. At first glance, the shifted transport sweep has limited parallelism; however, once the right-hand side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial-value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel-in-angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M)).

  16. Algorithm for detection of the broken phase conductor in the radial networks

    Directory of Open Access Journals (Sweden)

    Ostojić Mladen M.

    2016-01-01

    Full Text Available The paper presents an algorithm for a directional relay to be used for the detection of a broken phase conductor in radial networks. The algorithm uses synchronized voltages, measured at the beginning and at the end of the line, as input signals. During the process, the measured voltages are phase-compared. On the basis of the normalized energy, the direction to the break point on the phase conductor is detected. A radial network model that simulates the broken phase conductor was developed with the Matlab/Simulink software package. The simulations generated the required input signals with which the algorithm was tested. The development of the algorithm, the formation of the simulation model, and the test results of the proposed algorithm are presented in this paper.
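
    A loose Python sketch of the phase-comparison idea: synchronized voltage vectors from the two line ends are correlated, and a low normalized-energy index flags a break between the measurement points. The index definition and the threshold are assumptions made for illustration, not the relay algorithm itself.

      import numpy as np

      def broken_line_index(v_begin, v_end, threshold=0.5):
          # Normalized correlation of synchronized voltage vectors from
          # the two line ends; a low index flags a break between them.
          den = np.linalg.norm(v_begin) * np.linalg.norm(v_end)
          index = abs(np.vdot(v_begin, v_end)) / den if den > 0 else 0.0
          return index, index < threshold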

  17. Mathematical model and metaheuristics for simultaneous balancing and sequencing of a robotic mixed-model assembly line

    Science.gov (United States)

    Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter

    2018-05-01

    This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using CPLEX solver, small-size problems are solved for optimality. Two metaheuristics, the restarted simulated annealing algorithm and co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original editions and the benchmarked methods. The proposed algorithms are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.

  18. Transmission network expansion planning based on hybridization model of neural networks and harmony search algorithm

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Ameli

    2012-01-01

    Full Text Available Transmission Network Expansion Planning (TNEP) is a basic part of power network planning that determines where, when, and how many new transmission lines should be added to the network. The TNEP is thus an optimization problem in which the expansion objectives are optimized. Artificial Intelligence (AI) tools such as Genetic Algorithms (GA), Simulated Annealing (SA), Tabu Search (TS), and Artificial Neural Networks (ANNs) are methods used for solving the TNEP problem. Today, by using hybridization models of AI tools, the TNEP problem can be solved for large-scale systems, which shows the effectiveness of such models. In this paper, a new approach hybridizing Probabilistic Neural Networks (PNNs) and the Harmony Search Algorithm (HSA) was used to solve the TNEP problem. Finally, considering the uncertain role of the load via a scenario technique, the proposed model was tested on Garver's 6-bus network.

  19. On-line structural damage localization and quantification using wireless sensors

    International Nuclear Information System (INIS)

    Hsu, Ting-Yu; Huang, Shieh-Kung; Lu, Kung-Chung; Loh, Chin-Hsiung; Wang, Yang; Lynch, Jerome Peter

    2011-01-01

In this paper, a wireless sensing system is designed to realize on-line damage localization and quantification of a structure using a frequency response function change method (FRFCM). Data interrogation algorithms are embedded in the computational core of the wireless sensing units to extract the necessary structural features, i.e. the frequency spectrum segments around the eigenfrequencies, automatically from the measured structural response for the FRFCM. Instead of the raw time history of the structural response, the extracted compact structural features are transmitted to the host computer. As a result, with less data transmitted from the wireless sensors, the energy consumed by wireless transmission is reduced. To validate the performance of the proposed wireless sensing system, a six-story steel building with replaceable bracings in each story was instrumented with the wireless sensors for on-line damage detection during shaking table tests. The accuracy of the damage detection results using the wireless sensing system is verified through comparison with the results calculated from data recorded by a traditional wired monitoring system. The results demonstrate that, by taking advantage of collocated computing resources in wireless sensors, the proposed wireless sensing system can locate and quantify damage with acceptable accuracy and moderate energy efficiency.
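
    The kind of on-board feature extraction described, keeping only narrow spectrum segments around known eigenfrequencies, can be illustrated as follows; the sampling rate, band width and eigenfrequencies are made-up values, and the paper's embedded implementation will differ.

        import numpy as np

        def extract_spectrum_segments(accel, fs, eigenfreqs, half_band=1.0):
            """Compact features for the FRFCM: narrow FFT segments only.

            accel: measured response time history; fs: sampling rate in Hz;
            eigenfreqs: eigenfrequencies (Hz) to monitor; half_band: half-width
            of each retained segment in Hz.  Returns a list of (frequencies,
            complex spectrum values) pairs, i.e. the compact data a wireless
            node would transmit instead of the raw time history.
            """
            spectrum = np.fft.rfft(accel)
            freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
            segments = []
            for f0 in eigenfreqs:
                mask = (freqs >= f0 - half_band) & (freqs <= f0 + half_band)
                segments.append((freqs[mask], spectrum[mask]))
            return segments

        fs = 200.0
        t = np.arange(0, 10, 1 / fs)
        accel = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 4.8 * t)
        features = extract_spectrum_segments(accel, fs, eigenfreqs=[1.5, 4.8])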

  20. Effects of visualization on algorithm comprehension

    Science.gov (United States)

    Mulvey, Matthew

Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.
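
    For reference, the rules the tool lets students discover are those of the textbook algorithm; a compact implementation (ours, not the thesis author's) is:

        import heapq

        def dijkstra(graph, source):
            """Shortest-path distances from source; graph maps each node to a
            list of (neighbor, non-negative weight) pairs."""
            dist = {source: 0}
            heap = [(0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue                      # stale heap entry, skip
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd              # relax edge (u, v)
                        heapq.heappush(heap, (nd, v))
            return dist

        g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
        assert dijkstra(g, "a") == {"a": 0, "b": 2, "c": 3}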

  1. An assembly sequence planning method based on composite algorithm

    Directory of Open Access Journals (Sweden)

    Enfu LIU

    2016-02-01

Full Text Available To solve the combinatorial explosion problem and the blind searching problem in assembly sequence planning of complex products, an assembly sequence planning method based on a composite algorithm is proposed. In the composite algorithm, a sufficient number of feasible assembly sequences are generated by a formalized reasoning algorithm to serve as the initial population of a genetic algorithm. Fuzzy assembly knowledge is then integrated into the planning process of the genetic algorithm and the ant algorithm to obtain an accurate solution. Finally, an example is presented to verify the feasibility of the composite algorithm.

  2. Airport Traffic Conflict Detection and Resolution Algorithm Evaluation

    Science.gov (United States)

    Jones, Denise R.; Chartrand, Ryan C.; Wilson, Sara R.; Commo, Sean A.; Ballard, Kathryn M.; Otero, Sharon D.; Barker, Glover D.

    2016-01-01

    Two conflict detection and resolution (CD&R) algorithms for the terminal maneuvering area (TMA) were evaluated in a fast-time batch simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center. One CD&R algorithm, developed at NASA, was designed to enhance surface situation awareness and provide cockpit alerts of potential conflicts during runway, taxi, and low altitude air-to-air operations. The second algorithm, Enhanced Traffic Situation Awareness on the Airport Surface with Indications and Alerts (SURF IA), was designed to increase flight crew awareness of the runway environment and facilitate an appropriate and timely response to potential conflict situations. The purpose of the study was to evaluate the performance of the aircraft-based CD&R algorithms during various runway, taxiway, and low altitude scenarios, multiple levels of CD&R system equipage, and various levels of horizontal position accuracy. Algorithm performance was assessed through various metrics including the collision rate, nuisance and missed alert rate, and alert toggling rate. The data suggests that, in general, alert toggling, nuisance and missed alerts, and unnecessary maneuvering occurred more frequently as the position accuracy was reduced. Collision avoidance was more effective when all of the aircraft were equipped with CD&R and maneuvered to avoid a collision after an alert was issued. In order to reduce the number of unwanted (nuisance) alerts when taxiing across a runway, a buffer is needed between the hold line and the alerting zone so alerts are not generated when an aircraft is behind the hold line. All of the results support RTCA horizontal position accuracy requirements for performing a CD&R function to reduce the likelihood and severity of runway incursions and collisions.

  3. Rational reflection coefficient and inverse scattering on the line

    International Nuclear Information System (INIS)

    Sabatier, P.C.

    1983-01-01

Inverse scattering for the Schroedinger equation on the line is studied for reflection and transmission coefficients that satisfy the usual regularity conditions and are rational functions of k. The origin is still a particular point, but the potentials do not need to be cut at this point as in previous studies. Giving up this restriction corresponds to the existence of poles for both reflection coefficients in both the upper and lower half k-planes. It is shown that the problem reduces to solving a linear algebraic system. A different algorithm, made of a sequence of Darboux-Backlund transforms, also gives the solution in closed form and makes it possible to study separately the modifications of both sides of the potential due to the introduction of poles. It thus paves the way for approximation studies. Generalizations and particular problems will be studied in forthcoming papers.

  4. Extension algorithm for generic low-voltage networks

    Science.gov (United States)

    Marwitz, S.; Olk, C.

    2018-02-01

Distributed energy resources (DERs) are increasingly penetrating the energy system, driven by climate and sustainability goals. These technologies are mostly connected to low-voltage electrical networks and change the demand and supply situation in these networks. This can cause critical network states. Network topologies vary significantly and depend on several conditions including geography, historical development, network design and number of network connections. In the past, only some of these aspects were taken into account when estimating the network investment needs for Germany at the low-voltage level. Typically, fixed network topologies are examined or a Monte Carlo approach is used to quantify the investment needs at this voltage level. Recent research has revealed that DERs differ substantially between rural, suburban and urban regions. The low-voltage network topologies have different design concepts in these regions, so that different network topologies have to be considered when assessing the need for network extensions and investments due to DERs. An extension algorithm is needed to calculate network extensions and investment needs for the different typologies of generic low-voltage networks. We therefore present a new algorithm, which is capable of calculating the extension for generic low-voltage networks of any given topology based on voltage range deviations and thermal overloads. The algorithm requires information about line and cable lengths, their topology and the network state only. We test the algorithm on a radial, a loop, and a heavily meshed network, and show that it functions for electrical networks with these topologies. We found that the algorithm is able to extend different networks efficiently by placing cables between network nodes. The main value of the algorithm is that it does not require any information about routes for additional cables or positions for additional substations when estimating network extensions and investment needs.
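
    A toy version of such a reinforcement loop is sketched below; the violation check is a stand-in callback, whereas the published algorithm evaluates voltage-range deviations and thermal overloads from a power-flow calculation.

        def extend_network(lines, violated_lines, max_iter=50):
            """Iteratively reinforce a generic low-voltage network.

            lines: dict line_id -> {"length_km": float, "parallel": int}
            violated_lines: callback mapping the current network to the ids of
            lines with thermal overloads or voltage-range violations on the
            path they feed.  Returns added cable length (km) as a cost proxy.
            """
            added_km = 0.0
            for _ in range(max_iter):
                bad = violated_lines(lines)
                if not bad:
                    break
                for lid in bad:
                    lines[lid]["parallel"] += 1        # lay a parallel cable
                    added_km += lines[lid]["length_km"]
            return added_km

        net = {"L1": {"length_km": 0.4, "parallel": 1},
               "L2": {"length_km": 0.2, "parallel": 1}}
        # Toy rule: a line is "violated" until two cables run in parallel.
        cost = extend_network(net, lambda ls: [k for k, v in ls.items()
                                               if v["parallel"] < 2])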

  5. Biased Monte Carlo algorithms on unitary groups

    International Nuclear Information System (INIS)

    Creutz, M.; Gausterer, H.; Sanielevici, S.

    1989-01-01

    We introduce a general updating scheme for the simulation of physical systems defined on unitary groups, which eliminates the systematic errors due to inexact exponentiation of algebra elements. The essence is to work directly with group elements for the stochastic noise. Particular cases of the scheme include the algorithm of Metropolis et al., overrelaxation algorithms, and globally corrected Langevin and hybrid algorithms. The latter are studied numerically for the case of SU(3) theory
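
    The essence of the scheme, drawing the noise as a group element so the proposal never leaves the group manifold, is easiest to show for U(1); the paper of course treats larger unitary groups such as SU(3), and the single-link action below is a toy choice.

        import cmath, math, random

        def metropolis_u1_update(link, action, eps=0.3):
            """One Metropolis update of a U(1) link variable.

            The stochastic noise exp(i*theta) is itself a group element, so the
            proposal stays exactly unitary; no inexact exponentiation of
            algebra elements is involved.
            """
            noise = cmath.exp(1j * random.uniform(-eps, eps))  # group element
            proposal = noise * link                            # still in U(1)
            d_s = action(proposal) - action(link)
            if d_s <= 0 or random.random() < math.exp(-d_s):
                return proposal
            return link

        beta = 2.0
        u = 1.0 + 0.0j
        for _ in range(1000):                 # toy action S(U) = -beta * Re(U)
            u = metropolis_u1_update(u, lambda link: -beta * link.real)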

  6. A New Track Reconstruction Algorithm suitable for Parallel Processing based on Hit Triplets and Broken Lines

    Directory of Open Access Journals (Sweden)

    Schöning André

    2016-01-01

Full Text Available Track reconstruction in high track multiplicity environments at current and future high rate particle physics experiments is a big challenge and very time consuming. The search for track seeds and the fitting of track candidates are usually the most time consuming steps in the track reconstruction. Here, a new and fast track reconstruction method based on hit triplets is proposed which exploits a three-dimensional fit model including multiple scattering and hit uncertainties from the very start, including the search for track seeds. The hit triplet based reconstruction method assumes a homogeneous magnetic field, which allows an analytical solution for the triplet fit result. This method is highly parallelizable, needs fewer operations than other standard track reconstruction methods and is therefore ideal for implementation on parallel computing architectures. The proposed track reconstruction algorithm has been studied in the context of the Mu3e experiment and a typical LHC experiment.

  7. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    Science.gov (United States)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

Diverse image fusion methods perform differently; each has advantages and disadvantages compared with the others. One notion is that the advantages of different fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is therefore proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index-vector-based feature similarity is proposed to define the degree of complementarity and synergy; this index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the degrees of the various features and the infrared intensity images are used as the initial weights for the nonnegative matrix factorization (NMF), which avoids the randomness of the NMF initialization parameters. Finally, the fused images of the different algorithms are integrated using the NMF because of its excellent data-fusing performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation indices of the fused images obtained using the proposed method are better than those obtained using traditional methods, and the proposed method retains the advantages of the individual fusion algorithms.
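
    The key implementation trick, seeding the NMF with feature-derived weights instead of random ones, can be reproduced with scikit-learn's custom initialization; the data and initial weights below are stand-ins, not the paper's difference-feature degrees.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        # Two flattened non-negative source images (e.g. IR intensity and
        # polarization) as rows of the data matrix.
        X = np.abs(rng.normal(size=(2, 64 * 64)))

        k = 2
        W0 = np.array([[0.7, 0.3], [0.4, 0.6]])    # feature-derived weights
        H0 = np.abs(rng.normal(size=(k, X.shape[1])))

        model = NMF(n_components=k, init="custom", max_iter=500)
        W = model.fit_transform(X, W=W0, H=H0)     # custom init removes the
        H = model.components_                      # randomness of NMF seeding
        # One simple way to combine the learned parts into a fused image.
        fused = (W.mean(axis=0) @ H).reshape(64, 64)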

  8. Factored Facade Acquisition using Symmetric Line Arrangements

    KAUST Repository

    Ceylan, Duygu

    2012-05-01

    We introduce a novel framework for image-based 3D reconstruction of urban buildings based on symmetry priors. Starting from image-level edges, we generate a sparse and approximate set of consistent 3D lines. These lines are then used to simultaneously detect symmetric line arrangements while refining the estimated 3D model. Operating both on 2D image data and intermediate 3D feature representations, we perform iterative feature consolidation and effective outlier pruning, thus eliminating reconstruction artifacts arising from ambiguous or wrong stereo matches. We exploit non-local coherence of symmetric elements to generate precise model reconstructions, even in the presence of a significant amount of outlier image-edges arising from reflections, shadows, outlier objects, etc. We evaluate our algorithm on several challenging test scenarios, both synthetic and real. Beyond reconstruction, the extracted symmetry patterns are useful towards interactive and intuitive model manipulations.

9. Hardware Implementation Of Line Clipping Algorithm By Using FPGA

    Directory of Open Access Journals (Sweden)

    Amar Dawod

    2013-04-01

Full Text Available The performance of computer graphics systems is increasing faster than that of any other computing application. Algorithms for clipping lines against convex polygons and lines have been studied for a long time and many research papers have been published so far. In spite of the latest graphical hardware developments and significant increases in performance, clipping is still a bottleneck of any graphical system, so its implementation in hardware is essential for real-time applications. In this paper the clipping operation is discussed and a hardware implementation of the line clipping algorithm is presented, formulated and tested using Field Programmable Gate Arrays (FPGA). The designed hardware unit consists of two parts: the first is the positional code generator unit and the second is the clipping unit. Finally, it is worth mentioning that the designed unit is capable of clipping 232524 line segments per second.
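
    The "positional codes" of the first unit behave like the classic Cohen-Sutherland outcodes; a software model of both units, before any FPGA mapping, could look like this:

        INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

        def outcode(x, y, xmin, ymin, xmax, ymax):
            """Positional code of a point relative to the clip rectangle."""
            code = INSIDE
            if x < xmin:
                code |= LEFT
            elif x > xmax:
                code |= RIGHT
            if y < ymin:
                code |= BOTTOM
            elif y > ymax:
                code |= TOP
            return code

        def clip_line(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
            """Cohen-Sutherland clipping; returns the clipped segment or None."""
            c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
            c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
            while True:
                if not (c0 | c1):          # trivially accepted
                    return x0, y0, x1, y1
                if c0 & c1:                # trivially rejected
                    return None
                c = c0 or c1               # an endpoint that lies outside
                if c & TOP:
                    x, y = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0), ymax
                elif c & BOTTOM:
                    x, y = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0), ymin
                elif c & RIGHT:
                    x, y = xmax, y0 + (y1 - y0) * (xmax - x0) / (x1 - x0)
                else:
                    x, y = xmin, y0 + (y1 - y0) * (xmin - x0) / (x1 - x0)
                if c == c0:
                    x0, y0 = x, y
                    c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
                else:
                    x1, y1 = x, y
                    c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)

        assert clip_line(-1, 0.5, 2, 0.5, 0, 0, 1, 1) == (0, 0.5, 1, 0.5)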

  10. A parallel algorithm for filtering gravitational waves from coalescing binaries

    International Nuclear Information System (INIS)

    Sathyaprakash, B.S.; Dhurandhar, S.V.

    1992-10-01

    Coalescing binary stars are perhaps the most promising sources for the observation of gravitational waves with laser interferometric gravity wave detectors. The waveform from these sources can be predicted with sufficient accuracy for matched filtering techniques to be applied. In this paper we present a parallel algorithm for detecting signals from coalescing compact binaries by the method of matched filtering. We also report the details of its implementation on a 256-node connection machine consisting of a network of transputers. The results of our analysis indicate that parallel processing is a promising approach to on-line analysis of data from gravitational wave detectors to filter out coalescing binary signals. The algorithm described is quite general in that the kernel of the algorithm is applicable to any set of matched filters. (author). 15 refs, 4 figs
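
    The kernel being parallelized is an ordinary matched filter applied over a template bank; since each filter is independent, the work distributes naturally across processors. A serial NumPy sketch of that kernel, with synthetic data and templates:

        import numpy as np

        def matched_filter_snr(data, template):
            """Correlate data against one template via the FFT (circular)."""
            n = len(data)
            d_f = np.fft.rfft(data)
            t_f = np.fft.rfft(template, n)             # zero-padded template
            out = np.fft.irfft(d_f * np.conj(t_f), n)
            return out / np.sqrt(np.sum(template ** 2))

        rng = np.random.default_rng(0)
        data = rng.normal(size=4096)                   # detector noise stand-in
        bank = [np.sin(2 * np.pi * (5 + k) * np.linspace(0, 1, 256) ** 2)
                for k in range(8)]                     # chirp-like templates
        peaks = [np.max(matched_filter_snr(data, t)) for t in bank]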

  11. Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects

    Science.gov (United States)

    Gordon, Howard R.; Castano, Diego J.

    1987-01-01

    Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.

  12. SOL: A Library for Scalable Online Learning Algorithms

    OpenAIRE

    Wu, Yue; Hoi, Steven C. H.; Liu, Chenghao; Lu, Jing; Sahoo, Doyen; Yu, Nenghai

    2016-01-01

SOL is an open-source library for scalable online learning algorithms, and is particularly suitable for learning with high-dimensional data. The library provides a family of regular and sparse online learning algorithms for large-scale binary and multi-class classification tasks with high efficiency, scalability, portability, and extensibility. SOL was implemented in C++, and provided with a collection of easy-to-use command-line tools, python wrappers and library calls for users and developers.

  13. An efficient quantum algorithm for spectral estimation

    Science.gov (United States)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
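
    The quantum algorithm accelerates a classical signal-processing routine; for orientation, here is a plain NumPy sketch of the classical matrix pencil method it speeds up (not the quantum algorithm itself):

        import numpy as np

        def matrix_pencil(signal, n_modes, pencil=None):
            """Estimate poles of a sum of exponentially damped sinusoids.

            Returns the n_modes complex poles z; for sample interval dt the
            frequencies are angle(z)/(2*pi*dt) and the damping factors
            log|z|/dt.
            """
            n = len(signal)
            L = pencil or n // 3                       # pencil parameter
            Y = np.array([signal[i:i + L] for i in range(n - L)])
            Y0, Y1 = Y[:-1], Y[1:]                     # shifted Hankel blocks
            # Nonzero eigenvalues of pinv(Y0) @ Y1 are the signal poles.
            vals = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
            return sorted(vals, key=np.abs)[-n_modes:]

        dt = 0.01
        t = np.arange(200) * dt
        x = (np.exp(-0.5 * t) * np.cos(2 * np.pi * 5 * t)
             + 0.7 * np.exp(-1.2 * t) * np.cos(2 * np.pi * 11 * t))
        poles = matrix_pencil(x, n_modes=4)            # two conjugate pairs
        freqs = np.abs(np.angle(np.array(poles))) / (2 * np.pi * dt)  # ~5, ~11 Hz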

  14. An opposition-based harmony search algorithm for engineering optimization problems

    Directory of Open Access Journals (Sweden)

    Abhik Banerjee

    2014-03-01

Full Text Available Harmony search (HS) is a derivative-free real-parameter optimization algorithm. It draws inspiration from the musical improvisation process of searching for a perfect state of harmony. The proposed opposition-based HS (OHS) of the present work employs opposition-based learning for harmony memory initialization and also for generation jumping. The concept of opposite numbers is utilized in OHS to improve the convergence rate of the HS algorithm. The potential of the proposed algorithm is assessed by means of an extensive comparative study of the numerical results on sixteen benchmark test functions. Additionally, the effectiveness of the proposed algorithm is tested for reactive power compensation of an autonomous power system. For real-time reactive power compensation of the studied model, Takagi-Sugeno fuzzy logic (TSFL) is employed. Time-domain simulation reveals that the proposed OHS-TSFL yields on-line, off-nominal model parameters, resulting in a real-time incremental change in the terminal voltage response profile.
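
    A minimal sketch of the opposition-based ingredients, opposite points at initialization and occasional generation jumping, grafted onto a standard harmony search loop; the jumping rate, pitch bandwidth and other constants below are stand-in values.

        import random

        def ohs(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                jump_rate=0.3, iters=2000):
            """Opposition-based harmony search (minimization sketch)."""
            opposite = lambda x: [lo + hi - v for (lo, hi), v in zip(bounds, x)]
            rand_vec = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
            # Opposition-based initialization: keep the better of each pair.
            memory = [min(x, opposite(x), key=objective)
                      for x in (rand_vec() for _ in range(hms))]
            for _ in range(iters):
                new = []
                for d, (lo, hi) in enumerate(bounds):
                    if random.random() < hmcr:             # memory consideration
                        v = random.choice(memory)[d]
                        if random.random() < par:          # pitch adjustment
                            v += random.uniform(-1, 1) * 0.01 * (hi - lo)
                            v = min(hi, max(lo, v))
                    else:
                        v = random.uniform(lo, hi)
                    new.append(v)
                if random.random() < jump_rate:            # generation jumping
                    new = min(new, opposite(new), key=objective)
                worst = max(memory, key=objective)
                if objective(new) < objective(worst):
                    memory[memory.index(worst)] = new
            return min(memory, key=objective)

        best = ohs(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 4)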

  15. Solving SAT Problem Based on Hybrid Differential Evolution Algorithm

    Science.gov (United States)

    Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan

The satisfiability (SAT) problem is NP-complete. Based on an analysis of the problem, SAT is translated equivalently into an optimization problem: the minimization of an objective function. A hybrid differential evolution algorithm is proposed to solve the satisfiability problem. It makes full use of the strong local search capacity of the hill-climbing algorithm and the strong global search capability of the differential evolution algorithm, compensating for their respective disadvantages, improving the efficiency of the algorithm and avoiding the stagnation phenomenon. The experimental results show that the hybrid algorithm is efficient in solving the SAT problem.
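
    The division of labor, differential evolution for global exploration and hill climbing for local refinement, can be sketched for MAX-SAT as follows; the population size, F and CR are arbitrary demo constants.

        import random

        def sat_count(clauses, assign):
            """Satisfied-clause count; literal k means variable |k| (True if k > 0)."""
            return sum(any((lit > 0) == assign[abs(lit) - 1] for lit in c)
                       for c in clauses)

        def hill_climb(clauses, assign):
            """Flip single bits greedily until no flip improves the count."""
            best = sat_count(clauses, assign)
            improved = True
            while improved:
                improved = False
                for i in range(len(assign)):
                    assign[i] = not assign[i]
                    s = sat_count(clauses, assign)
                    if s > best:
                        best, improved = s, True
                    else:
                        assign[i] = not assign[i]      # undo the flip
            return assign, best

        def hybrid_de_sat(clauses, n_vars, pop=20, f=0.8, cr=0.9, gens=100):
            decode = lambda v: [x > 0.5 for x in v]
            score = lambda v: sat_count(clauses, decode(v))
            P = [[random.random() for _ in range(n_vars)] for _ in range(pop)]
            for _ in range(gens):
                for i in range(pop):
                    a, b, c = random.sample([p for j, p in enumerate(P) if j != i], 3)
                    trial = [P[i][d] if random.random() > cr
                             else min(1.0, max(0.0, a[d] + f * (b[d] - c[d])))
                             for d in range(n_vars)]
                    assign, s = hill_climb(clauses, decode(trial))
                    if s == len(clauses):
                        return assign                  # satisfying assignment
                    if s > score(P[i]):
                        P[i] = [1.0 if bit else 0.0 for bit in assign]
            return decode(max(P, key=score))

        clauses = [[1, -2], [2, 3], [-1, 3]]           # tiny 3-variable instance
        print(hybrid_de_sat(clauses, 3))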

  16. 2nd International Conference on Harmony Search Algorithm

    CERN Document Server

    Geem, Zong

    2016-01-01

    The Harmony Search Algorithm (HSA) is one of the most well-known techniques in the field of soft computing, an important paradigm in the science and engineering community.  This volume, the proceedings of the 2nd International Conference on Harmony Search Algorithm 2015 (ICHSA 2015), brings together contributions describing the latest developments in the field of soft computing with a special focus on HSA techniques. It includes coverage of new methods that have potentially immense application in various fields. Contributed articles cover aspects of the following topics related to the Harmony Search Algorithm: analytical studies; improved, hybrid and multi-objective variants; parameter tuning; and large-scale applications.  The book also contains papers discussing recent advances on the following topics: genetic algorithms; evolutionary strategies; the firefly algorithm and cuckoo search; particle swarm optimization and ant colony optimization; simulated annealing; and local search techniques.   This book ...

  17. Second-order accurate volume-of-fluid algorithms for tracking material interfaces

    International Nuclear Information System (INIS)

    Pilliod, James Edward; Puckett, Elbridge Gerry

    2004-01-01

We introduce two new volume-of-fluid interface reconstruction algorithms and compare the accuracy of these algorithms to four other widely used volume-of-fluid interface reconstruction algorithms. We find that when the interface is smooth (e.g., continuous with two continuous derivatives) the new methods are second-order accurate and the other algorithms are first-order accurate. We propose a design criterion for a volume-of-fluid interface reconstruction algorithm to be second-order accurate: namely, that it reproduce lines in two space dimensions or planes in three space dimensions exactly. We also introduce a second-order, unsplit, volume-of-fluid advection algorithm that is based on a second-order, finite difference method for scalar conservation laws due to Bell, Dawson and Shubin. We test this advection algorithm by modeling several different interface shapes propagating in two simple incompressible flows and compare the results with the standard second-order, operator-split advection algorithm. Although both methods are second-order accurate when the interface is smooth, we find that the unsplit algorithm exhibits noticeably better resolution in regions where the interface has discontinuous derivatives, such as at corners.

  18. The Optimization of Passengers’ Travel Time under Express-Slow Mode Based on Suburban Line

    Directory of Open Access Journals (Sweden)

    Xiaobing Ding

    2016-01-01

Full Text Available The suburban line connects the suburbs and the city centre, so it is highly advantageous to attempt the express-slow mode. Passengers' average travel time is a key factor reflecting the level of rail transport service, especially under the express-slow mode, so it is important to study passengers' average travel time under this mode, which benefits the optimization of the operation scheme. First, the main factors that affect passengers' travel time are analyzed, and the dynamic interactions among these factors are mined. Second, a new passengers' travel time evolution algorithm is proposed after studying the stop schedule and the proportion of express and slow trains, and membrane computing theory is then introduced to solve the model. Finally, Shanghai Metro Line 22 is taken as an example to apply the optimization model to calculate the total passengers' travel time; the result shows that the express-slow mode can save an average of 1 minute and 38 seconds of travel time, a saving of considerable social influence and value. The proposed calculation model is of great help for the decision on the stop schedule, provides theoretical and methodological support for determining the proportion of express and slow trains, improves the service level, and enriches and complements the rail transit operation scheme optimization theory.

  19. Development of a portable computed tomographic scanner for on-line imaging of industrial piping systems

    International Nuclear Information System (INIS)

Jaafar Abdullah; Mohd Arif Hamzah; Mohd Soyapi Mohd Yusof; Mohd Fitri Abdul Rahman; Fadil Ismail; Rasif Mohd Zain

    2003-01-01

Computed tomography (CT) technology is being increasingly developed for industrial application. This paper presents the development of a portable computed tomographic scanner for on-line imaging of industrial piping systems. The theoretical approach, the system hardware, the data acquisition system and the adopted algorithm for image reconstruction are discussed. The scanner has large potential to be used to determine the extent of corrosion under insulation (CUI), to detect blockages, to measure the thickness of deposit/materials built-up on the walls and to improve understanding of material flow in pipelines. (Author)

  20. Implementation and analysis of list mode algorithm using tubes of response on a dedicated brain and breast PET

    Science.gov (United States)

    Moliner, L.; Correcher, C.; González, A. J.; Conde, P.; Hernández, L.; Orero, A.; Rodríguez-Álvarez, M. J.; Sánchez, F.; Soriano, A.; Vidal, L. F.; Benlloch, J. M.

    2013-02-01

In this work we present an innovative algorithm for the reconstruction of PET images based on the List-Mode (LM) technique which improves their spatial resolution compared to results obtained with current MLEM algorithms. This study is part of a large project aiming to improve diagnosis in early Alzheimer's disease stages by means of a newly developed hybrid PET-MR insert. At present, Alzheimer's is the most relevant neurodegenerative disease, and the best way to apply an effective treatment is early diagnosis. The PET device will consist of several monolithic LYSO crystals coupled to SiPM detectors. Monolithic crystals can reduce scanner costs with the advantage of enabling the implementation of very small virtual pixels in their geometry. This is especially useful for LM reconstruction algorithms, since they do not need a pre-calculated system matrix. We have developed an LM algorithm which has been initially tested with a large aperture (186 mm) breast PET system. Instead of using the common lines of response, the algorithm incorporates a novel calculation of tubes of response. The new approach improves the volumetric spatial resolution by about a factor of 2 at the border of the field of view when compared with the traditionally used MLEM algorithm. Moreover, it has also been shown to decrease the image noise, thus increasing the image quality.

  2. Performance Comparison of Different System Identification Algorithms for FACET and ATF2

    CERN Document Server

    Pfingstner, J; Schulte, D

    2013-01-01

Good system knowledge is an essential ingredient for the operation of modern accelerator facilities. For example, beam-based alignment algorithms and orbit feedbacks rely strongly on a precise measurement of the orbit response matrix. The quality of the measurement of this matrix can be improved over time by statistically combining the effects of small system excitations with the help of system identification algorithms. These small excitations can be applied in a parasitic mode without stopping the accelerator operation (on-line). In this work, different system identification algorithms are used in simulation studies for the response matrix measurement at ATF2. The results for ATF2 are finally compared with the results for FACET, the latter originating from earlier work.

  3. Research on Palmprint Identification Method Based on Quantum Algorithms

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-01-01

Full Text Available Quantum image recognition is a technique that uses quantum algorithms to process image information; it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that it achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation thanks to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.

  4. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé

    2013-02-01

Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric searching technique. As a direct application, our result yields the first O(n log n)-time algorithm for computing a k-center of a set of n weighted points on the real line. © 2012 Elsevier B.V.
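
    The parametric search in the paper is intricate, but the underlying feasibility test is simple; below it is combined with a numeric binary search on the error as an approximate stand-in for the exact O(n log n) algorithm.

        import math

        def feasible(pts, k, eps):
            """Can some k-step function stay within weighted vertical distance
            eps of every point?  Greedy left-to-right band intersection."""
            steps, lo, hi = 1, -math.inf, math.inf
            for x, y, w in pts:                        # pts sorted by x
                l, h = y - eps / w, y + eps / w
                lo, hi = max(lo, l), min(hi, h)
                if lo > hi:                            # band empty: new step
                    steps += 1
                    lo, hi = l, h
            return steps <= k

        def fit_step_function(pts, k, tol=1e-9):
            pts = sorted(pts)
            ys = [y for _, y, _ in pts]
            lo, hi = 0.0, (max(ys) - min(ys)) * max(w for _, _, w in pts)
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if feasible(pts, k, mid):
                    hi = mid
                else:
                    lo = mid
            return hi

        pts = [(0, 1.0, 1.0), (1, 1.2, 2.0), (2, 5.0, 1.0), (3, 5.1, 1.0)]
        print(fit_step_function(pts, k=2))             # ~0.1333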

  5. Evaluation of the influence of dominance rules for the assembly line design problem under consideration of product design alternatives

    Science.gov (United States)

    Oesterle, Jonathan; Lionel, Amodeo

    2018-06-01

The current competitive situation increases the importance of realistically estimating product costs during the early phases of product and assembly line planning projects. In this article, several multi-objective algorithms using different dominance rules are proposed to solve the problem associated with the selection of the most effective combination of products and assembly lines. The list of developed algorithms includes variants of ant colony algorithms, evolutionary algorithms and imperialist competitive algorithms. The performance of each algorithm and dominance rule is analysed using five multi-objective quality indicators and fifty problem instances. The algorithms and dominance rules are ranked using a non-parametric statistical test.

  6. A difference tracking algorithm based on discrete sine transform

    Science.gov (United States)

    Liu, HaoPeng; Yao, Yong; Lei, HeBing; Wu, HaoKun

    2018-04-01

Target tracking is an important field of computer vision. Template matching tracking algorithms based on the sum of squared differences (SSD) and the normalized correlation coefficient (NCC) are very sensitive to changes in image gray level. When the brightness or gray level changes, the tracking algorithm is affected by high-frequency information, tracking accuracy is reduced, and the tracked target can be lost. In this paper, a difference tracking algorithm based on the discrete sine transform is proposed to reduce the influence of changes in image gray level or brightness. The algorithm, which combines the discrete sine transform with a difference operation, maps the target image into a digital sequence. A Kalman filter predicts the target position, the Hamming distance determines the degree of similarity between the target and the template, and the window closest to the template is selected as the target to be tracked. The tracked target then updates the template, and target tracking is achieved on this basis. The algorithm is tested in this paper: compared with the SSD and NCC template matching algorithms, it tracks the target stably when the image gray level or brightness changes, and its tracking speed meets the real-time requirement.
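
    To make the gray-level insensitivity concrete: the DST is linear, so a positive gain change scales all coefficients equally and leaves the signs of their differences untouched. The sketch below is our reading of the scheme, with made-up patch sizes; the paper's exact mapping and its Kalman stage are only indicated.

        import numpy as np
        from scipy.fft import dst

        def dst_signature(patch, n_coeffs=64):
            """Binary signature: signs of differences of low-order DST
            coefficients of the flattened patch."""
            c = dst(patch.astype(float).ravel(), type=2)[:n_coeffs]
            return np.diff(c) > 0

        def hamming(a, b):
            return int(np.count_nonzero(a != b))

        # The candidate window nearest to the template (Hamming distance)
        # becomes the target; a Kalman filter would first predict where to
        # place the candidate windows.
        rng = np.random.default_rng(1)
        template = rng.random((16, 16))
        candidates = [1.8 * template,          # gain (contrast) change
                      rng.random((16, 16))]    # unrelated window
        t_sig = dst_signature(template)
        print([hamming(t_sig, dst_signature(w)) for w in candidates])  # e.g. [0, ~30]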

  7. Analog Circuit Design Optimization Based on Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Mansour Barari

    2014-01-01

Full Text Available This paper investigates an evolutionary designing system for automated sizing of analog integrated circuits (ICs). Two evolutionary algorithms, the genetic algorithm and particle swarm optimization (PSO), are proposed to design analog ICs with practical user-defined specifications. On the basis of the combination of HSPICE and MATLAB, the system links circuit performances, evaluated through specific electrical simulation, to the optimization system in the MATLAB environment, for the selected topology. The system has been tested on typical and hard-to-design cases, such as complex analog blocks with stringent design requirements. The results show that the design specifications are closely met. Comparisons with available methods like genetic algorithms show that the proposed algorithm offers important advantages in terms of optimization quality and robustness. Moreover, the algorithm is shown to be efficient.

  8. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  10. A scalable and practical one-pass clustering algorithm for recommender system

    Science.gov (United States)

    Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali

    2015-12-01

KMeans clustering-based recommendation algorithms have been proposed claiming to increase the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates with the arrival of new data, making them unsuitable for dynamic environments. From this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
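
    The abstract does not give the algorithm's pseudocode, but a classic single-scan ("leader") clustering scheme illustrates how one pass with incremental centroid updates can absorb newly arriving data without offline retraining; the distance threshold is a tunable stand-in.

        import numpy as np

        def one_pass_cluster(vectors, threshold):
            """Assign each vector on arrival; no offline retraining needed."""
            centroids, counts, labels = [], [], []
            for v in vectors:
                if centroids:
                    d = [np.linalg.norm(v - c) for c in centroids]
                    j = int(np.argmin(d))
                    if d[j] <= threshold:
                        counts[j] += 1
                        centroids[j] += (v - centroids[j]) / counts[j]  # running mean
                        labels.append(j)
                        continue
                centroids.append(np.asarray(v, dtype=float).copy())    # new cluster
                counts.append(1)
                labels.append(len(centroids) - 1)
            return labels, centroids

        rng = np.random.default_rng(0)
        data = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
        labels, cents = one_pass_cluster(data, threshold=1.0)           # 2 clusters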

  11. Multi-step lining-up correction of the CLIC trajectory

    CERN Document Server

    D'Amico, T E

    1999-01-01

    In the CLIC main linac it is very important to minimise the trajectory excursion and consequently the emittance dilution in order to obtain the required luminosity. Several algorithms have been proposed and lately the ballistic method has proved to be very effective. The trajectory correction method described hereafter retains the main advantages of the latter while adding some interesting features. It is based on the separation of the unknown variables like the quadrupole misalignments, the offset and slope of the injection straight line and the misalignments of the beam position monitors (BPM). This is achieved by referring the trajectory relatively to the injection line and not to the average pre-alignment line and by using two trajectories each corresponding to slightly different quadrupole strengths. A reference straight line is then derived onto which the beam is bent by a kick obtained by moving the first quadrupole. The other quadrupoles are then aligned on that line. The quality of the correction dep...

  12. Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary.

    Science.gov (United States)

    Sevenster, Merlijn; van Ommering, Rob; Qian, Yuechen

    2012-02-01

Autocompletion supports human-computer interaction in software applications that let users enter textual data. We are inspired by the use case in which medical professionals enter ontology concepts, catering to the ongoing demand for structured and standardized data in medicine. Our goal is to give an algorithmic analysis of one particular autocompletion algorithm, called the multi-prefix matching algorithm, which suggests terms whose words' prefixes contain all words in the string typed by the user; in this sense, opt ner me matches optic nerve meningioma. Second, we aim to investigate how well it supports users entering concepts from a large and comprehensive medical vocabulary (SNOMED CT). We give a concise description of the multi-prefix algorithm and sketch how it can be optimized to meet the required response time. Performance is compared to a baseline algorithm, which gives suggestions that extend the string typed by the user to the right; e.g. optic nerve m gives optic nerve meningioma, but opt ner me does not. We conducted a user experiment in which 12 participants were invited to complete 40 SNOMED CT terms with the baseline algorithm and another set of 40 SNOMED CT terms with the multi-prefix algorithm. Our results show that users need significantly fewer keystrokes when supported by the multi-prefix algorithm than when supported by the baseline algorithm. The proposed algorithm is a competitive candidate for searching and retrieving terms from a large medical ontology. Copyright © 2011 Elsevier Inc. All rights reserved.
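
    The matching rule itself is compact; a naive sketch (greedy word consumption, without the paper's response-time optimizations):

        def multi_prefix_match(query, term):
            """True if every query word is a prefix of some distinct term word,
            e.g. 'opt ner me' matches 'optic nerve meningioma'."""
            words = term.lower().split()
            for q in query.lower().split():
                hit = next((i for i, w in enumerate(words) if w.startswith(q)), None)
                if hit is None:
                    return False
                del words[hit]                 # consume each term word once
            return True

        vocabulary = ["optic nerve meningioma", "optic neuritis",
                      "nerve sheath tumor"]
        hits = [t for t in vocabulary if multi_prefix_match("opt ner me", t)]
        assert hits == ["optic nerve meningioma"]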

  13. System engineering approach to GPM retrieval algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rose, C. R. (Chris R.); Chandrasekar, V.

    2004-01-01

calculated at each bin, the rain rate can then be calculated based on a suitable rain-rate model. This paper develops a system engineering interface to the retrieval algorithms while remaining cognizant of system engineering issues so that it can be used to bridge the divide between algorithm physics and overall mission requirements. Additionally, in line with the systems approach, a methodology is developed such that the measurement requirements pass through the retrieval model and other subsystems and manifest themselves as measurement and other system constraints. A systems model has been developed for the retrieval algorithm that can be evaluated through system-analysis tools such as MATLAB/Simulink.

  14. Purity, adulteration and price of drugs bought on-line versus off-line in the Netherlands.

    Science.gov (United States)

    van der Gouwe, Daan; Brunt, Tibor M; van Laar, Margriet; van der Pol, Peggy

    2017-04-01

On-line drug markets flourish and consumers have high expectations of on-line quality and drug value. The aim of this study was to (i) describe on-line drug purchases and (ii) compare on-line with off-line purchased drugs regarding purity, adulteration and price. Comparison of laboratory analyses of 32 663 drug consumer samples (stimulants and hallucinogens) purchased between January 2013 and January 2016, 928 of which were bought on-line. The Netherlands. Primary outcome measures were (i) the percentage of samples purchased on-line and (ii) the chemical purity of powders (or dosage per tablet); adulteration; and the price per gram, blotter or tablet of drugs bought on-line compared with drugs bought off-line. The proportion of drug samples purchased on-line increased from 1.4% in 2013 to 4.1% in 2015. The frequency varied widely, from a maximum of 6% for controlled, traditional substances [ecstasy tablets, 3,4-methylenedioxy-methamphetamine (MDMA) powder, amphetamine powder, cocaine powder, 4-bromo-2,5-dimethoxyphenethylamine (2C-B) and lysergic acid diethylamide (LSD)] to more than a third for new psychoactive substances (NPS) [4-fluoroamphetamine (4-FA), 5/6-(2-aminopropyl)benzofuran (5/6-APB) and methoxetamine (MXE)]. There were no large differences in drug purity, yet small but statistically significant differences were found for 4-FA (on-line 59% versus off-line 52% purity for 4-FA on average, P = 0.001), MDMA powders (45 versus 61% purity for MDMA, P = 0.02), 2C-B tablets (21 versus 10 mg 2C-B/tablet dosage, P = 0.49) and ecstasy tablets (131 versus 121 mg MDMA/tablet dosage, P = 0.05). The proportion of adulterated samples purchased on-line and off-line did not differ, except for 4-FA powder, being less adulterated on-line (χ² = 8.3; P < 0.02). Drug prices were mainly higher on-line, ranging for various drugs from 10 to 23% higher than that of drugs purchased off-line (six of 10 substances: P < 0.05). Dutch drug users increasingly

  15. Text Clustering Algorithm Based on Random Cluster Core

    Directory of Open Access Journals (Sweden)

    Huang Long-Jun

    2016-01-01

Full Text Available Clustering has become a popular text mining algorithm, but huge data volumes place higher demands on the accuracy and performance of text mining. In view of the performance bottleneck of traditional text clustering algorithms, this paper proposes a text clustering algorithm with random features. It is a clustering algorithm based on text density that also uses neighborhood heuristic rules; the concept of the random cluster core is introduced, which effectively reduces the complexity of the distance calculation.

  16. Modular production line optimization: The exPLORE architecture

    Directory of Open Access Journals (Sweden)

    Spinellis Diomidis D.

    2000-01-01

    Full Text Available The general design problem in serial production lines concerns the allocation of resources such as the number of servers, their service rates, and buffers given production-specific constraints, associated costs, and revenue projections. We describe the design of exPLOre: a modular, object-oriented, production line optimization software architecture. An abstract optimization module can be instantiated using a variety of stochastic optimization methods such as simulated annealing and genetic algorithms. Its search space is constrained by a constraint checker while its search direction is guided by a cost analyser which combines the output of a throughput evaluator with the business model. The throughput evaluator can be instantiated using Markovian, generalised queueing network methods, a decomposition, or an expansion method algorithm.

  17. VHDL Implementation of Feature-Extraction Algorithm for the PANDA Electromagnetic Calorimeter

    NARCIS (Netherlands)

    Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Löhner, H.; Tambave, G.

    2010-01-01

    The feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The use of modified firmware with the running on-line

  18. An algorithm for gluinos on the lattice

    International Nuclear Information System (INIS)

    Montvay, I.

    1995-10-01

Luescher's local bosonic algorithm for Monte Carlo simulations of quantum field theories with fermions is applied to the simulation of a possibly supersymmetric Yang-Mills theory with a Majorana fermion in the adjoint representation. Combined with a correction step in a two-step polynomial approximation scheme, the obtained algorithm seems to be promising and could be competitive with more conventional algorithms based on discretized classical ("molecular dynamics") equations of motion. The application of the considered polynomial approximation scheme to optimized hopping parameter expansions is also discussed. (orig.)

  19. Hidden Markov Model Application to Transfer The Trader Online Forex Brokers

    Directory of Open Access Journals (Sweden)

    Farida Suharleni

    2012-05-01

Full Text Available The Hidden Markov Model is an elaboration of the Markov chain that is applicable to cases in which the states cannot be observed directly. In this research, a Hidden Markov Model is used to study traders' transitions between online forex brokers. The observed states form the observable part of the model and the hidden states the hidden part; the Hidden Markov Model thus allows modeling of systems that contain interrelated observed and hidden states. The observed states of a trader's transition between online forex brokers are categories 1 through 5, determined by the condition of each broker, whereas the hidden states are the online forex brokers Marketiva, Masterforex, Instaforex, FBS and Others. The first step in applying the Hidden Markov Model in this research is constructing the model by building the transition probability matrix (A) over the brokers. The next step is building the observation probability matrix (B) from the conditional probabilities of the five categories given each broker, and determining the initial state probabilities (π) for the brokers. The last step is using the Viterbi algorithm to find the hidden state sequence, i.e. the sequence of brokers that is most probable given the model and the observed sequence of the five categories. The Hidden Markov Model was applied in a program implementing the Viterbi algorithm, written in Delphi 7.0, with observed states based on simulated data. Example: for T = 5 observations and the observed state sequence O = (2,4,3,5,1), the most probable hidden state sequence is X1 = FBS, X2 = Masterforex, X3 = Marketiva, X4 = Others, and X5 = Instaforex.
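
    Since the Viterbi step is the heart of the method, here is a compact log-domain implementation; the matrices below are random stand-ins, not the estimated broker statistics from the paper.

        import numpy as np

        def viterbi(A, B, pi, obs):
            """Most probable hidden-state sequence given observations.

            A: n x n transition matrix, B: n x m observation matrix,
            pi: initial distribution; log-domain to avoid underflow.
            """
            n, T = len(pi), len(obs)
            logA, logB = np.log(A), np.log(B)
            delta = np.log(pi) + logB[:, obs[0]]
            psi = np.zeros((T, n), dtype=int)
            for t in range(1, T):
                scores = delta[:, None] + logA      # scores[i, j]: i -> j
                psi[t] = np.argmax(scores, axis=0)  # best predecessor of j
                delta = scores[psi[t], range(n)] + logB[:, obs[t]]
            path = [int(np.argmax(delta))]
            for t in range(T - 1, 0, -1):           # backtrack
                path.append(int(psi[t][path[-1]]))
            return path[::-1]

        rng = np.random.default_rng(42)
        A = rng.dirichlet(np.ones(5), size=5)       # 5 brokers (hidden states)
        B = rng.dirichlet(np.ones(5), size=5)       # 5 observation categories
        pi = np.full(5, 0.2)
        obs = [c - 1 for c in (2, 4, 3, 5, 1)]      # zero-based categories
        print(viterbi(A, B, pi, obs))               # broker index sequence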

  20. Feature Selection for Audio Surveillance in Urban Environment

    Directory of Open Access Journals (Sweden)

    KIKTOVA Eva

    2014-05-01

Full Text Available This paper presents the work leading to an acoustic event detection system designed to recognize two types of acoustic events (shot and breaking glass) in an urban environment. For this purpose, extensive front-end processing was performed to obtain an effective parametric representation of the input sound. MFCC features and features computed during their extraction (MELSPEC and FBANK), MPEG-7 audio descriptors, and other temporal and spectral characteristics were extracted. High-dimensional feature sets were created and subsequently reduced by mutual-information-based selection algorithms. A Hidden Markov Model based classifier was applied and evaluated using the Viterbi decoding algorithm. In this way, very effective feature sets were identified and the less important features were also revealed.
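
    The mutual-information-based reduction step can be reproduced with standard tooling; the data below is synthetic, and the feature count is a stand-in for the high-dimensional MFCC/MELSPEC/FBANK/MPEG-7 sets used in the paper.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, mutual_info_classif

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 40))        # frames x acoustic features
        y = rng.integers(0, 3, size=300)      # event classes (e.g. shot/glass/other)
        X[:, 5] += 2.0 * y                    # plant one informative feature

        selector = SelectKBest(mutual_info_classif, k=10).fit(X, y)
        X_reduced = selector.transform(X)     # compact set for the HMM classifier
        kept = np.flatnonzero(selector.get_support())   # indices of kept features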