WorldWideScience

Sample records for huffman coding method

  1. Bounds on Generalized Huffman Codes

    CERN Document Server

    Baer, Michael B

    2007-01-01

    New lower and upper bounds are obtained for the compression of optimal binary prefix codes according to various nonlinear codeword length objectives. Like the coding bounds for Huffman coding (which concern the traditional linear objective of minimizing average codeword length), these are in terms of a form of entropy and the probability of the most probable input symbol. As in Huffman coding, some upper bounds can be found using sufficient conditions for the codeword corresponding to the most probable symbol being one bit long. Whereas a probability of no less than 0.4 is a tight sufficient condition for this to be the case in Huffman coding, other penalties differ, some having a tighter condition, some a looser condition, and others having no such sufficient condition. The objectives explored here are ones for which optimal codes can be found using a generalized form of Huffman coding. These objectives include one related to queueing (an increasing exponential average), one related to single-shot c...

  2. Estimating the size of Huffman code preambles

    Science.gov (United States)

    Mceliece, R. J.; Palmatier, T. H.

    1993-01-01

    Data compression via block-adaptive Huffman coding is considered. The compressor consecutively processes blocks of N data symbols, estimates source statistics by computing the relative frequencies of each source symbol in the block, and then synthesizes a Huffman code based on these estimates. In order to let the decompressor know which Huffman code is being used, the compressor must begin the transmission of each compressed block with a short preamble or header file. This file is an encoding of the list n = (n_1, n_2, ..., n_m), where n_i is the length of the Huffman codeword associated with the ith source symbol. A simple method of doing this encoding is to individually encode each n_i into a fixed-length binary word of length log_2(l), where l is an a priori upper bound on the codeword length. This method produces a maximum preamble length of m log_2(l) bits. The object is to show that, in most cases, no substantially shorter header of any kind is possible.
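
    The preamble scheme is easy to make concrete. Below is a minimal Python sketch (the sample block and the bound l = 16 are invented for illustration): it derives the Huffman codeword lengths n_i for one block, then packs each length into a fixed-width word of log_2(l) bits.

        import heapq, math
        from collections import Counter

        def huffman_lengths(freq):
            # Min-heap of (weight, tiebreak, symbols-in-subtree); merging two
            # subtrees deepens every symbol they contain by one bit.
            heap = [(f, i, [s]) for i, (s, f) in enumerate(sorted(freq.items()))]
            heapq.heapify(heap)
            depth = dict.fromkeys(freq, 0)
            tie = len(heap)
            while len(heap) > 1:
                f1, _, g1 = heapq.heappop(heap)
                f2, _, g2 = heapq.heappop(heap)
                for s in g1 + g2:
                    depth[s] += 1
                heapq.heappush(heap, (f1 + f2, tie, g1 + g2))
                tie += 1
            return depth

        block = "ABRACADABRA"                      # one block of N data symbols
        lengths = huffman_lengths(Counter(block))  # the list n = (n_1, ..., n_m)
        l = 16                                     # assumed a priori length bound
        width = math.ceil(math.log2(l))            # fixed word width per n_i
        preamble = "".join(format(lengths[s], f"0{width}b") for s in sorted(lengths))
        print(f"{len(lengths)} symbols x {width} bits = {len(preamble)} preamble bits")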

  3. Short Huffman Codes Producing 1s Half of the Time

    CERN Document Server

    Altenbach, Fabian; Mathar, Rudolf

    2011-01-01

    The design of the channel part of a digital communication system (e.g., error correction, modulation) is heavily based on the assumption that the data to be transmitted forms a fair bit stream. However, simple source encoders such as short Huffman codes generate bit streams that poorly match this assumption. As a result, the channel input distribution does not match the original design criteria. In this work, a simple method called half Huffman coding (halfHc) is developed. halfHc transforms a Huffman code into a source code whose output is more similar to a fair bit stream. This is achieved by permuting the codewords such that the frequency of 1s at the output is close to 0.5. The permutations are such that the optimality in terms of achieved compression ratio is preserved. halfHc is applied in a practical example, and the resulting overall system performs better than when conventional Huffman coding is used.
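
    The following toy sketch illustrates the idea, though not the paper's exact algorithm: instead of permuting codewords directly, it exploits the equivalent freedom of swapping the 0/1 child labels at internal nodes of the Huffman tree, which preserves every codeword length (and hence optimality) while changing the bit statistics. The source probabilities are invented and the search is brute force.

        import heapq
        from itertools import count, product

        def huffman_tree(p):
            # Leaves are symbols; internal nodes are (left, right) tuples.
            tick = count()
            heap = [(w, next(tick), s) for s, w in p.items()]
            heapq.heapify(heap)
            while len(heap) > 1:
                w1, _, a = heapq.heappop(heap)
                w2, _, b = heapq.heappop(heap)
                heapq.heappush(heap, (w1 + w2, next(tick), (a, b)))
            return heap[0][2]

        def internals(node, acc):
            if isinstance(node, tuple):
                acc.append(node)
                internals(node[0], acc)
                internals(node[1], acc)
            return acc

        def codes(node, flip, prefix=""):
            if not isinstance(node, tuple):
                return {node: prefix}
            left, right = (node[1], node[0]) if flip[id(node)] else node
            out = codes(left, flip, prefix + "0")
            out.update(codes(right, flip, prefix + "1"))
            return out

        p = {"a": 0.45, "b": 0.25, "c": 0.20, "d": 0.10}   # made-up source
        root, best = huffman_tree(p), None
        nodes = internals(root, [])
        for mask in product([False, True], repeat=len(nodes)):
            cw = codes(root, {id(n): m for n, m in zip(nodes, mask)})
            ones = sum(p[s] * c.count("1") for s, c in cw.items())
            bits = sum(p[s] * len(c) for s, c in cw.items())
            if best is None or abs(ones / bits - 0.5) < best[0]:
                best = (abs(ones / bits - 0.5), cw)
        print(best[1], "| ones fraction deviates from 0.5 by", round(best[0], 4))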

  4. Maximal codeword lengths in Huffman codes

    Science.gov (United States)

    Abu-Mostafa, Y. S.; Mceliece, R. J.

    1992-01-01

    The following question about Huffman coding, which is an important technique for compressing data from a discrete source, is considered. If p is the smallest source probability, how long, in terms of p, can the longest Huffman codeword be? It is shown that if p is in the range 0 < p <= 1/2, and if K is the unique index such that 1/F_{K+3} < p <= 1/F_{K+2}, where F_K denotes the Kth Fibonacci number, then the longest Huffman codeword for a source whose least probability is p is at most K, and no better bound is possible. Asymptotically, this implies the surprising fact that for small values of p, a Huffman code's longest codeword can be as much as 44 percent longer than that of the corresponding Shannon code.
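
    The bound is straightforward to evaluate. A small sketch under the stated Fibonacci convention (F_1 = F_2 = 1):

        def max_huffman_length(p):
            # Smallest probability p -> the paper's bound K, the unique index
            # with 1/F(K+3) < p <= 1/F(K+2), where F(1) = F(2) = 1.
            assert 0 < p <= 0.5
            F = [0, 1, 1]
            while 1 / F[-1] >= p / 2:      # extend the table far enough to cover p
                F.append(F[-1] + F[-2])
            K = 1
            while not (1 / F[K + 3] < p <= 1 / F[K + 2]):
                K += 1
            return K

        # Asymptotically K ~ log_phi(1/p) ~ 1.44 * log2(1/p), i.e. up to about
        # 44% longer than the Shannon code length ceil(log2(1/p)).
        for p in (0.4, 0.1, 0.01):
            print(p, "->", max_huffman_length(p))    # 1, 4, 9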

  5. Difference-Huffman Coding of Multidimensional Databases

    CERN Document Server

    Szépkúti, István

    2011-01-01

    A new compression method called difference-Huffman coding (DHC) is introduced in this paper. It is verified empirically that DHC results in a smaller multidimensional physical representation than those of other previously published techniques (single count header compression, logical position compression, base-offset compression and difference sequence compression). The article examines how caching influences the expected retrieval time of the multidimensional and table representations of relations. A model is proposed for this, which is then verified with empirical data. Conclusions are drawn, based on the model and the experiment, about when one physical representation outperforms another in terms of retrieval time. Over the tested range of available memory, retrieval from the multidimensional representation was always much faster than from the table representation.

  6. A quantum analog of Huffman coding

    CERN Document Server

    Braunstein, Samuel L.; Fuchs, Christopher A.; Gottesman, Daniel; Lo, Hoi-Kwong

    1998-01-01

    We analyse a generalization of Huffman coding to the quantum case. In particular, we notice various difficulties in using instantaneous codes for quantum communication. However, for the storage of quantum information, we have succeeded in constructing a Huffman-coding inspired quantum scheme. The number of computational steps in the encoding and decoding processes of N quantum signals can be made to be polynomial in log N by a massively parallel implementation of a quantum gate array. This is to be compared with the N^3 computational steps required in the sequential implementation by Cleve and DiVincenzo of the well-known quantum noiseless block coding scheme by Schumacher. The powers and limitations in using this scheme in communication are also discussed.

  7. Lossless DNA Compression Using Huffman and Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Lakshmi Mythri Dasari

    2014-07-01

    Full Text Available The DNA sequences making up any organism comprise the blueprint of that organism, so understanding and analyzing the genes within these sequences has become an exceptionally significant task. Biologists produce huge volumes of DNA sequences every day, which makes genome sequence databases grow exponentially. Databases such as GenBank contain millions of DNA sequences filling many thousands of gigabytes of computer storage capacity. Compression of genomic sequences can reduce the storage requirements and increase transmission speed. In this paper we compare two lossless compression algorithms, Huffman coding and arithmetic coding. In Huffman coding, individual bases are coded and assigned specific binary codewords; in arithmetic coding, the entire DNA sequence is coded into a single fractional number, to which a binary word is assigned. The compression ratio is compared for both methods, and we conclude that arithmetic coding is the better of the two.
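
    A minimal sketch of the Huffman half of such a comparison, with an invented stand-in sequence (real inputs would come from GenBank): it builds a code over the four bases and reports the bits per base against the 2-bit fixed-length baseline.

        import heapq
        from collections import Counter
        from itertools import count

        def huffman_code(freq):
            # Standard binary Huffman construction over a dict of symbol weights.
            tick = count()
            heap = [(w, next(tick), {s: ""}) for s, w in freq.items()]
            heapq.heapify(heap)
            while len(heap) > 1:
                w1, _, a = heapq.heappop(heap)
                w2, _, b = heapq.heappop(heap)
                merged = {s: "0" + c for s, c in a.items()}
                merged.update({s: "1" + c for s, c in b.items()})
                heapq.heappush(heap, (w1 + w2, next(tick), merged))
            return heap[0][2]

        seq = "AAAAAATTTCCG" * 20          # invented stand-in for a real sequence
        code = huffman_code(Counter(seq))
        encoded = "".join(code[b] for b in seq)
        print(code)
        print("bits/base:", len(encoded) / len(seq), "vs 2.0 for fixed 2-bit coding")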

  8. Sequential adaptive compressed sampling via Huffman codes

    CERN Document Server

    Aldroubi, Akram; Zarringhalam, Kourosh

    2008-01-01

    There are two main approaches in compressed sensing: the geometric approach and the combinatorial approach. In this paper we introduce an information theoretic approach and use results from the theory of Huffman codes to construct a sequence of binary sampling vectors to determine a sparse signal. Unlike other approaches, our approach is adaptive in the sense that each sampling vector depends on the previous sample. The number of measurements we need for a k-sparse vector in n-dimensional space is no more than O(k log n) and the reconstruction is O(k).

  9. Ternary Tree and Clustering Based Huffman Coding Algorithm

    Directory of Open Access Journals (Sweden)

    Pushpa R. Suri

    2010-09-01

    Full Text Available In this study, the focus was on the use of a ternary tree instead of a binary tree. A new two-pass algorithm for encoding Huffman ternary tree codes was implemented, the aim being to determine the codeword length of each symbol. Huffman encoding is a two-pass problem: the first pass collects the letter frequencies, which are then used to create the Huffman tree. Note that char values range from -128 to 127, so they must be cast; storing the data as unsigned chars solves this problem, giving a range of 0 to 255. The encoder writes the frequency table to the output file, then reads characters from the input file, obtains their codes, and writes the encoding to the output file. Once a Huffman code has been generated, data may be encoded simply by replacing each symbol with its code. To reduce the memory size and speed up the process of finding the codeword length for a symbol in a Huffman tree, a memory-efficient data structure is proposed to represent the codeword lengths of the Huffman ternary tree.
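
    A minimal ternary Huffman sketch along these lines (symbol weights are invented): the only twist relative to the binary case is padding with zero-weight dummy symbols so that every merge can take exactly three subtrees.

        import heapq
        from itertools import count

        def ternary_huffman(freq):
            # Pad with zero-weight dummies so that (n - 1) % (t - 1) == 0 for
            # t = 3; otherwise the last merge would be short and suboptimal.
            items = list(freq.items())
            while (len(items) - 1) % 2 != 0:
                items.append((f"_dummy{len(items)}", 0))
            tick = count()
            heap = [(w, next(tick), {s: ""}) for s, w in items]
            heapq.heapify(heap)
            while len(heap) > 1:
                merged, total = {}, 0
                for digit in range(3):          # take the three lightest subtrees
                    w, _, g = heapq.heappop(heap)
                    total += w
                    merged.update({s: str(digit) + c for s, c in g.items()})
                heapq.heappush(heap, (total, next(tick), merged))
            return {s: c for s, c in heap[0][2].items() if not s.startswith("_dummy")}

        print(ternary_huffman({"a": 40, "b": 25, "c": 15, "d": 10, "e": 6, "f": 4}))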

  10. Entropy-Based Bounds On Redundancies Of Huffman Codes

    Science.gov (United States)

    Smyth, Padhraic J.

    1992-01-01

    This report presents an extension of the theory of the redundancy of binary prefix codes of Huffman type, including the derivation of a variety of bounds expressed in terms of the entropy of the source and the size of the alphabet. Recent developments yielded bounds on the redundancy of a Huffman code in terms of the probabilities of various components in the source alphabet. In practice, the redundancies of optimal prefix codes are often closer to 0 than to 1.

  11. A Dynamic Programming Approach To Length-Limited Huffman Coding

    CERN Document Server

    Golin, Mordecai

    2008-01-01

    The "state-of-the-art" in length-limited Huffman coding algorithms is the $\Theta(ND)$-time, $\Theta(N)$-space one of Hirschberg and Larmore, where $D \le N$ is the length restriction on the code. This is a very clever, very problem-specific technique. In this note we show that there is a simple dynamic-programming (DP) method that solves the problem with the same time and space bounds. The fact that there was a $\Theta(ND)$-time DP algorithm was previously known; it is a straightforward DP with the Monge property (which permits an order of magnitude speedup). It was not interesting, though, because it also required $\Theta(ND)$ space. The main result of this paper is the technique developed for reducing the space. It is quite simple and applicable to many other problems modeled by DPs with the Monge property. We illustrate this with examples from web-proxy design and wireless mobile paging.

  12. An Upper Limit of AC Huffman Code Length in JPEG Compression

    OpenAIRE

    Horie, Kenichi

    2009-01-01

    A strategy for computing upper code-length limits of AC Huffman codes for an 8x8 block in JPEG Baseline coding is developed. The method is based on a geometric interpretation of the DCT, and the calculated limits are as close as 14% to the maximum code-lengths. The proposed strategy can be adapted to other transform coding methods, e.g., MPEG 2 and 4 video compressions, to calculate close upper code length limits for the respective processing blocks.

  13. Analysis of LAPAN-IPB image lossless compression using differential pulse code modulation and Huffman coding

    Science.gov (United States)

    Hakim, P. R.; Permala, R.

    2017-01-01

    LAPAN-A3/IPB satellite is the latest Indonesian experimental microsatellite with remote sensing and earth surveillance missions. The satellite has three optical payloads: a multispectral push-broom imager, a digital matrix camera and a video camera. To increase data transmission efficiency, the multispectral imager data can be compressed using either a lossy or a lossless compression method. This paper aims to analyze the Differential Pulse Code Modulation (DPCM) method and the Huffman coding used in LAPAN-IPB satellite image lossless compression. Based on several simulations and analyses, the current LAPAN-IPB lossless compression algorithm has moderate performance. Several aspects of the current configuration can be improved: the type of DPCM code used, the type of Huffman entropy-coding scheme, and the use of a sub-image compression method. The key result of this research shows that at least two neighboring pixels should be used for DPCM calculation to increase compression performance. Meanwhile, varying Huffman tables with a sub-image approach could also increase performance if the on-board computer can support a more complicated algorithm. These results can be used as references in designing the Payload Data Handling System (PDHS) for the upcoming LAPAN-A4 satellite.
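
    The paper's key recommendation, predicting from at least two neighboring pixels, can be sketched as follows; the averaging predictor, border handling, and toy image are assumptions for illustration. The residuals cluster near zero, which is what the subsequent Huffman stage exploits.

        from collections import Counter

        def dpcm_residuals(img):
            # Predict each pixel from two neighbours (left and top, averaged);
            # borders fall back to the single available neighbour, the origin to 0.
            res = []
            for y in range(len(img)):
                for x in range(len(img[0])):
                    if x and y:
                        pred = (img[y][x - 1] + img[y - 1][x]) // 2
                    elif x:
                        pred = img[y][x - 1]
                    elif y:
                        pred = img[y - 1][x]
                    else:
                        pred = 0
                    res.append(img[y][x] - pred)
            return res

        img = [[10, 12, 13, 13],           # toy 4x4 "image"
               [11, 12, 14, 15],
               [11, 13, 14, 16],
               [12, 13, 15, 17]]
        print(Counter(dpcm_residuals(img)))  # residuals concentrate near 0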

  14. Canonical Huffman code based full-text index

    Institute of Scientific and Technical Information of China (English)

    Yi Zhang; Zhili Pei; Jinhui Yang; Yanchun Liang

    2008-01-01

    Full-text indices are data structures that can be used to find any substring of a given string. Many full-text indices require more space than the original string. In this paper, we introduce the canonical Huffman code to the wavelet tree of a string T[1...n]. Compared with a Huffman-code-based wavelet tree, the memory space used to represent the shape of the wavelet tree is not needed; in the case of a large alphabet, this part of the memory is not negligible. The operations on the wavelet tree are also simpler and more efficient due to the canonical Huffman code. Based on the resulting structure, the multi-key rank and select functions can be performed using at most nH_0 + |X|(lg lg n + lg n - lg|Σ|) + O(nH_0) bits and in O(H_0) time in the average case, where H_0 is the zeroth-order empirical entropy of T. Finally, we present an efficient construction algorithm for this index, which is online and linear.
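
    The defining property of a canonical Huffman code, that the codewords are recoverable from the codeword lengths alone so no tree shape needs to be stored, can be shown in a few lines; the example lengths are invented.

        def canonical_codes(lengths):
            # Sort symbols by (length, symbol); each codeword is the previous
            # one plus 1, left-shifted whenever the length increases. Only the
            # lengths need to be stored; the tree shape is implicit.
            code, prev, out = 0, 0, {}
            for length, sym in sorted((l, s) for s, l in lengths.items()):
                code <<= length - prev
                out[sym] = format(code, f"0{length}b")
                code += 1
                prev = length
            return out

        print(canonical_codes({"a": 1, "b": 3, "c": 3, "d": 3, "e": 3}))
        # {'a': '0', 'b': '100', 'c': '101', 'd': '110', 'e': '111'}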

  15. M-ary Anti-Uniform Huffman Codes for Infinite Sources With Geometric Distribution

    OpenAIRE

    Tarniceriu, Daniela; Munteanu, Valeriu; Zaharia, Gheorghe

    2013-01-01

    In this paper we consider the class of generalized anti-uniform Huffman (AUH) codes for sources with infinite alphabet and geometric distribution. This distribution leads to infinite anti-uniform sources for some ranges of its parameters. Huffman coding of these sources results in AUH codes. We perform a generalization of binary Huffman encoding, using an M-letter code alphabet, and prove that as a result of this encoding, sources with memory are obtained. For these sour...

  16. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal

    2010-09-01

    The size of embedded software is increasing at a rapid pace. It is often challenging and time consuming to fit an amount of required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique that is based on Huffman coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique, which improves the final compression ratio by taking advantage of both previous techniques. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of re-encoding unused bits (we call them re-encodable bits) in the instruction format for a specific application to improve the compression ratio. Re-encoding those bits can reduce the size of decoding tables by up to 40%. Using this technique, we improve the final compression ratios in comparison to the first technique to 46% and 45% for ARM and MIPS, respectively (including all incurred overhead). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications, and we have applied each technique to two major embedded processor architectures.

  17. Research of Data Compression Method Based on the Improved Huffman Code Algorithm

    Institute of Scientific and Technical Information of China (English)

    张红军; 徐超

    2014-01-01

    As a lossless compression method, Huffman coding has important applications in data compression. The classic algorithm derives the Huffman code bottom-up, on the basis of the Huffman tree. By analyzing the idea behind the Huffman algorithm, this paper gives an improved Huffman data compression algorithm that uses a queue structure to proceed from the root node of the Huffman tree toward the leaf nodes. In the coding process, every leaf node is scanned only once to obtain its Huffman code. Experiments show that the improved algorithm not only achieves a higher compression ratio than the classic algorithm, but also ensures the security and confidentiality of the resulting compressed file.
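
    A sketch of the queue-driven, root-to-leaf pass the abstract describes, on a hand-built tree for the weights of "abracadabra" (the tree construction itself is omitted and the node layout is an assumption):

        from collections import deque, namedtuple

        Node = namedtuple("Node", "symbol left right")   # symbol is None internally

        def assign_codes_bfs(root):
            # Walk the tree top-down with a queue; each leaf is dequeued exactly
            # once, and its code is complete the moment it is reached.
            codes, queue = {}, deque([(root, "")])
            while queue:
                node, prefix = queue.popleft()
                if node.symbol is not None:
                    codes[node.symbol] = prefix or "0"
                else:
                    queue.append((node.left, prefix + "0"))
                    queue.append((node.right, prefix + "1"))
            return codes

        leaf = lambda s: Node(s, None, None)
        # Hand-built Huffman tree for the weights a:5, b:2, r:2, c:1, d:1.
        root = Node(None, leaf("a"),
                    Node(None, Node(None, leaf("b"), leaf("r")),
                         Node(None, leaf("c"), leaf("d"))))
        print(assign_codes_bfs(root))   # {'a': '0', 'b': '100', 'r': '101', ...}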

  18. On constructing symmetrical reversible variable-length codes independent of the Huffman code

    Institute of Scientific and Technical Information of China (English)

    HUO Jun-yan; CHANG Yi-lin; MA Lin-hua; LUO Zhong

    2006-01-01

    Reversible variable length codes (RVLCs) have received much attention due to their excellent error-resilience capabilities. In this paper, a novel construction algorithm for symmetrical RVLCs is proposed which is independent of the Huffman code: codeword assignment is based only on symbol occurrence probability. It has many advantages over existing symmetrical construction algorithms, including easier realization and better code performance. In addition, the proposed algorithm dramatically simplifies the codeword selection mechanism.

  19. JOINT SOURCE-CHANNEL DECODING OF HUFFMAN CODES WITH LDPC CODES

    Institute of Scientific and Technical Information of China (English)

    Mei Zhonghui; Wu Lenan

    2006-01-01

    In this paper, we present a Joint Source-Channel Decoding algorithm (JSCD) for Low-Density Parity Check (LDPC) codes by modifying the Sum-Product Algorithm (SPA) to account for the source redundancy, which results from the neighbouring Huffman coded bits. Simulations demonstrate that in the presence of source redundancy, the proposed algorithm gives better performance than the Separate Source and Channel Decoding algorithm (SSCD).

  20. A novel DNA sequence similarity calculation based on simplified pulse-coupled neural network and Huffman coding

    Science.gov (United States)

    Jin, Xin; Nie, Rencan; Zhou, Dongming; Yao, Shaowen; Chen, Yanyan; Yu, Jiefu; Wang, Quan

    2016-11-01

    A novel method for the calculation of DNA sequence similarity is proposed based on a simplified pulse-coupled neural network (S-PCNN) and Huffman coding. We propose a coding method based on Huffman coding, in which the triplet code is used as a code unit to transform a DNA sequence into a numerical sequence. The proposed method uses the firing characteristics of S-PCNN neurons to extract features from the DNA sequence, and it can deal with DNA sequences of different lengths. First, according to the characteristics of S-PCNN and the DNA primary sequence, the latter is encoded using the Huffman coding method, and then the oscillation time sequence (OTS) of the encoded DNA sequence is extracted using S-PCNN. Simultaneously, relevant features are obtained, and finally the similarities or dissimilarities of the DNA sequences are determined by Euclidean distance. In order to verify the accuracy of this method, different data sets were used for testing. The experimental results show that the proposed method is effective.

  1. Load Balancing Scheme on the Basis of Huffman Coding for P2P Information Retrieval

    Science.gov (United States)

    Kurasawa, Hisashi; Takasu, Atsuhiro; Adachi, Jun

    Although a distributed index on a distributed hash table (DHT) enables efficient document query processing in Peer-to-Peer information retrieval (P2P IR), the index costs a lot to construct, and its management tends to be unfair because of the unbalanced term frequency distribution. We devised a new distributed index, named Huffman-DHT, for P2P IR. The new index uses an algorithm similar to Huffman coding, with a modification to the DHT structure based on the term distribution. In a Huffman-DHT, a frequent term is assigned a short ID and allocated a large space in the node ID space of the DHT. Through this ID management, the Huffman-DHT balances the index registration accesses among peers and reduces load concentrations. Huffman-DHT is the first approach to adapt concepts from coding theory and term frequency distribution to load balancing. We evaluated this approach in experiments using a document collection and assessed its load balancing capabilities in P2P IR. The experimental results indicated that it is most effective when the P2P system consists of about 30,000 nodes and contains many documents. Moreover, we showed that a Huffman-DHT can be constructed easily by estimating the probability distribution of term occurrence from a small number of sample documents.

  2. AN APPLICATION OF PLANAR BINARY BITREES TO PREFIX AND HUFFMAN PREFIX CODE

    OpenAIRE

    Erjavec, Zlatko

    2004-01-01

    In this paper we construct a prefix code in which the use of planar binary trees is replaced by the use of planar binary bitrees. In addition, we apply the planar binary bitrees to the Huffman prefix code. Finally, we code the English alphabet in such a way that characters have codewords different from already established ones.

  3. Huffman Coding with Letter Costs: A Linear-Time Approximation Scheme

    OpenAIRE

    Golin, Mordecai; Mathieu, Claire; Young, Neal E.

    2002-01-01

    We give a polynomial-time approximation scheme for the generalization of Huffman Coding in which codeword letters have non-uniform costs (as in Morse code, where the dash is twice as long as the dot). The algorithm computes a (1+epsilon)-approximate solution in time O(n + f(epsilon) log^3 n), where n is the input size.

  4. Grassmannian Beamforming for MIMO-OFDM Systems with Frequency and Spatially Correlated Channels Using Huffman Coding

    CERN Document Server

    Gutman, Igor; Wulich, Dov

    2009-01-01

    Multiple input multiple output (MIMO) precoding is an efficient scheme that may significantly enhance the communication link. However, this enhancement comes with a cost. Many precoding schemes require channel knowledge at the transmitter that is obtained through feedback from the receiver. Focusing on the natural common fusion of orthogonal frequency division multiplexing (OFDM) and MIMO, we exploit the channel correlation in the frequency and spatial domain to reduce the required feedback rate in a frequency division duplex (FDD) system. The proposed feedback method is based on Huffman coding and is employed here for the single stream case. The method leads to a significant reduction in the required feedback rate, without any loss in performance. The proposed method may be extended to the multi-stream case.

  5. Design and performance of Huffman sequences in medical ultrasound coded excitation.

    Science.gov (United States)

    Polpetta, Alessandro; Banelli, Paolo

    2012-04-01

    This paper deals with coded-excitation techniques for ultrasound medical echography. Specifically, linear Huffman coding is proposed as an alternative approach to other widely established techniques, such as complementary Golay coding and linear frequency modulation. The code design is guided by an optimization procedure that boosts the signal-to-noise ratio gain (GSNR) and, interestingly, also makes the code robust in pulsed-Doppler applications. The paper capitalizes on a thorough analytical model that can be used to design any linear coded-excitation system. This model highlights that the performance in frequency-dependent attenuating media mostly depends on the pulse-shaping waveform when the codes are characterized by almost ideal (i.e., Kronecker delta) autocorrelation. In this framework, different pulse shapers and different code lengths are considered to identify coded signals that optimize the contrast resolution at the output of the receiver pulse compression. Computer simulations confirm that the proposed Huffman codes are particularly effective, and that there are scenarios in which they may be preferable to the other established approaches, both in attenuating and non-attenuating media. Specifically, for a single scatterer at 150 mm in a 0.7-dB/(MHz·cm) attenuating medium, the proposed Huffman design achieves a main-to-side lobe ratio (MSR) equal to 65 dB, whereas tapered linear frequency modulation and classical complementary Golay codes achieve 35 and 45 dB, respectively.

  6. Huffman Coding and Applications in Compression for Vector Maps

    Institute of Scientific and Technical Information of China (English)

    刘兴科; 陈轲; 于晓光

    2014-01-01

    Huffman coding is a statistical coding method widely used in lossless compression. The principle and implementation of Huffman coding are studied in this paper, and the compression of vector map data is implemented with Huffman coding. Considering the characteristics of vector maps, a detailed Huffman coding algorithm and the steps of compression and decompression are proposed, and the properties of the algorithm for vector map compression are discussed. The principle and process of Huffman coding are demonstrated with an experiment. Tests on a set of real vector maps show that the proposed algorithm compresses vector map data effectively and is lossless, efficient, and general, with a high compression ratio.

  7. Analysis and Research on Adaptive Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    彭文艺

    2012-01-01

    Huffman coding, as an efficient and simple variable-length code, is widely used in source coding. However, existing Huffman coding algorithms are not very efficient and their applications are somewhat limited. This paper therefore proposes an adaptive Huffman coding algorithm that is more efficient than other Huffman coding schemes and has a wider range of applications.

  8. Tight Bounds on the Average Length, Entropy, and Redundancy of Anti-Uniform Huffman Codes

    CERN Document Server

    Mohajer, Soheil

    2007-01-01

    In this paper we consider the class of anti-uniform Huffman codes and derive tight lower and upper bounds on the average length, entropy, and redundancy of such codes in terms of the alphabet size of the source. The Fibonacci distributions are introduced which play a fundamental role in AUH codes. It is shown that such distributions maximize the average length and the entropy of the code for a given alphabet size. Another previously known bound on the entropy for given average length follows immediately from our results.
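
    The role of the Fibonacci distribution is easy to check numerically: with weights proportional to Fibonacci numbers, every Huffman merge combines the running subtree with one fresh leaf, yielding the fully skewed, anti-uniform tree. A small sketch (generic Huffman-length routine; the truncation at n = 8 is an invented example):

        import heapq
        from itertools import count

        def huffman_lengths(weights):
            tick = count()
            heap = [(w, next(tick), [i]) for i, w in enumerate(weights)]
            heapq.heapify(heap)
            depth = [0] * len(weights)
            while len(heap) > 1:
                w1, _, a = heapq.heappop(heap)
                w2, _, b = heapq.heappop(heap)
                for i in a + b:
                    depth[i] += 1
                heapq.heappush(heap, (w1 + w2, next(tick), a + b))
            return depth

        fib = [1, 1, 2, 3, 5, 8, 13, 21]        # weights proportional to Fibonacci
        print(sorted(huffman_lengths(fib)))     # [1, 2, 3, 4, 5, 6, 7, 7]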

  9. The number of Huffman codes, compact trees, and sums of unit fractions

    CERN Document Server

    Elsholtz, Christian; Prodinger, Helmut

    2011-01-01

    The number of "nonequivalent" Huffman codes of length r over an alphabet of size t has been studied frequently. Equivalently, the number of "nonequivalent" complete t-ary trees has been examined. We first survey the literature, unifying several independent approaches to the problem. Then, improving on earlier work we prove a very precise asymptotic result on the counting function, consisting of two main terms and an error term.

  10. Wavelet transform and Huffman coding based electrocardiogram compression algorithm: Application to telecardiology

    Science.gov (United States)

    Chouakri, S. A.; Djaafri, O.; Taleb-Ahmed, A.

    2013-08-01

    We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed to its transmission via telecommunication channel. Basically, the proposed ECG compression algorithm is articulated on the use of wavelet transform, leading to low/high frequency components separation, high order statistics based thresholding, using level adjusted kurtosis value, to denoise the ECG signal, and next a linear predictive coding filter is applied to the wavelet coefficients producing a lower variance signal. This latter one will be coded using the Huffman encoding yielding an optimal coding length in terms of average value of bits per sample. At the receiver end point, with the assumption of an ideal communication channel, the inverse processes are carried out namely the Huffman decoding, inverse linear predictive coding filter and inverse discrete wavelet transform leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested upon a set of ECG records extracted from the MIT-BIH Arrhythmia Data Base including different cardiac anomalies as well as the normal ECG signal. The obtained results are evaluated in terms of compression ratio and mean square error which are, respectively, around 1:8 and 7%. Besides the numerical evaluation, the visual perception demonstrates the high quality of ECG signal restitution where the different ECG waves are recovered correctly.

  11. Writing on the Facade of RWTH ICT Cubes: Cost Constrained Geometric Huffman Coding

    CERN Document Server

    Böcherer, Georg; Malsbender, Martina; Mathar, Rudolf

    2011-01-01

    In this work, a coding technique called cost constrained Geometric Huffman coding (ccGhc) is developed. ccGhc minimizes the Kullback-Leibler distance between a dyadic probability mass function (pmf) and a target pmf subject to an affine inequality constraint. An analytical proof is given that when ccGhc is applied to blocks of symbols, the optimum is asymptotically achieved when the blocklength goes to infinity. The derivation of ccGhc is motivated by the problem of encoding a text to a sequence of slats subject to architectural design criteria. For the considered architectural problem, for a blocklength of 3, the codes found by ccGhc match the design criteria. For communications channels with average cost constraints, ccGhc can be used to efficiently find prefix-free modulation codes that are provably capacity achieving.

  12. Improved Huffman coding-based data transmission and compression method for agricultural machinery operation

    Institute of Scientific and Technical Information of China (English)

    杨敬锋; 张南峰; 李勇; 薛月菊; 吕伟; 何堃

    2014-01-01

    In order to solve the problem of transmitting agricultural machinery operation state data in a poor communication environment, caused by the unbalanced coverage of mobile communication base stations, this paper proposes a data filtering and compression method based on an improved Huffman coding technique for data selection, compression, transmission, parsing, and extraction. First, the agricultural machinery operation data types, exchange mode, and compression mode were defined. Then, data collection and exchange were realized with a Compass/GPS dual-mode state data collection terminal. Compression and decompression tests show that, with a data collection period of 5 s and a data length of 918.38 kb, the improved Huffman algorithm compressed the data to 412.56 kb, which is 86 kb smaller than the 498.56 kb produced by the traditional Huffman algorithm under the same conditions; the compression ratio rose from 45.71% for the traditional algorithm to 55.08% for the improved one. The data transmission error rate and packet loss rate were 2.47% and 4.18% for the traditional Huffman algorithm, and dropped to 2.06% and 0.78% under the same transmission requirements with filtered compressed transmission. The method meets the requirements for compressing and transmitting agricultural machinery operation state data, achieves low error and packet loss rates when individual packets are small and transmission times are short, requires little computation, and offers high compression efficiency, making it suitable for data transmission in agricultural machinery operation areas.

  13. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    Science.gov (United States)

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-08-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channels, the arrival of new participants and the departure of existing participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.

  14. Efficient Data Compression Scheme using Dynamic Huffman Code Applied on Arabic Language

    Directory of Open Access Journals (Sweden)

    Sameh Ghwanmeh

    2006-01-01

    Full Text Available The development of an efficient compression scheme to process the Arabic language represents a difficult task. This paper applies dynamic Huffman coding with variable-length bit coding to data compression for the Arabic language. Experimental tests have been performed on both Arabic and English text. A comparison was made to measure the efficiency of compressing data on both Arabic and English text, and another comparison was made between the compression rate and the size of the file to be compressed. It has been found that as the file size increases, the compression ratio decreases for both Arabic and English text. The experimental results show that the average message length and the efficiency of compression on Arabic text were better than on English text. The results also show that the main factor significantly affecting the compression ratio and average message length was the frequency of the symbols in the text.

  15. Compression and Encryption of ECG Signal Using Wavelet and Chaotically Huffman Code in Telemedicine Application.

    Science.gov (United States)

    Raeiatibanadkooki, Mahsa; Quchani, Saeed Rahati; KhalilZade, MohammadMahdi; Bahaadinbeigy, Kambiz

    2016-03-01

    In mobile health care monitoring, compression is an essential tool for solving storage and transmission problems. The important issue is being able to recover the original signal from the compressed signal. The main purpose of this paper is to compress the ECG signal with no loss of essential data and also to encrypt the signal to keep it confidential from everyone except physicians. In this paper, mobile processors are used, with no need for any computers to serve this purpose. After initial preprocessing such as removal of baseline noise, Gaussian noise, peak detection and determination of heart rate, the ECG signal is compressed. In the compression stage, after three steps of wavelet transform (db04), thresholding techniques are used. Then, Huffman coding with chaos is used for compression and encryption of the ECG signal. The compression rate of the proposed algorithm is 97.72%. The ECG signals are then sent to a telemedicine center over the TCP/IP protocol to acquire a specialist diagnosis.

  16. Design and application of an XML data compression algorithm based on Huffman coding

    Institute of Scientific and Technical Information of China (English)

    施鹏; 李敏; 于涛; 赵利强; 王建林

    2013-01-01

    An XML data compression method based on Huffman coding is proposed for the problem that a production process report system cannot access a large data source quickly within a given bandwidth. A data processing class is constructed to extract frequently repeated node units from the XML document; Huffman coding is used to code these unit words, and the coded document is then compressed with the LZMA compression algorithm. The resulting Huffman-LZMA algorithm avoids the traditional XML compression algorithms' need for a document type definition and an XML parser, and achieves a good compression effect. The compression algorithm was applied to the design of a production process report system, where the compression ratio of the report data reached about 88%. Bandwidth and storage space are saved effectively, and the report access rate is improved.
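
    A toy version of the two-stage pipeline, using Python's standard lzma module for the second stage. Short byte tokens stand in for the per-unit-word Huffman codes to keep the sketch brief, and the tag names and document are invented.

        import lzma

        # Invented toy document and unit-word table.
        xml = ("<record><name>sensor</name><value>42</value></record>" * 200).encode()
        table = {b"<record>": b"\x01", b"</record>": b"\x02",
                 b"<name>": b"\x03", b"</name>": b"\x04",
                 b"<value>": b"\x05", b"</value>": b"\x06"}
        pre = xml
        for tag, token in table.items():
            pre = pre.replace(tag, token)          # stage 1: substitute unit words
        packed = lzma.compress(pre)                # stage 2: LZMA
        print(len(xml), "->", len(packed), "bytes;",
              "plain LZMA:", len(lzma.compress(xml)), "bytes")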

  17. Using the Improved Huffman Code to Realize Compression and Decompression of the Document

    Institute of Scientific and Technical Information of China (English)

    卢冰; 刘兴海

    2013-01-01

    By analyzing the idea behind the Huffman algorithm, an improved Huffman data compression algorithm is proposed. Addressing shortcomings of the classic algorithm, heap sort is used to build the Huffman tree and obtain the Huffman codes; this reduces the number of memory reads and writes and improves the system response time. Through a second mapping, every 8 bits of the encoded file are converted into one corresponding character, which improves the compression ratio and ensures the security and confidentiality of the resulting compressed file. Finally, the improved Huffman algorithm was tested by compressing three text files; the experiments show that the improved algorithm achieves a slightly better compression ratio than the classic algorithm.
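
    The "second mapping" step, turning each 8 bits of the Huffman bit string into one stored byte, can be sketched directly; the sample bit string is invented, and the bit count is kept alongside so padding can be stripped on decompression.

        def pack_bits(bitstring):
            # Pad to a byte boundary, then store each 8-bit group as one byte.
            padded = bitstring + "0" * (-len(bitstring) % 8)
            data = bytes(int(padded[i:i + 8], 2) for i in range(0, len(padded), 8))
            return len(bitstring), data

        def unpack_bits(nbits, data):
            return "".join(format(b, "08b") for b in data)[:nbits]

        bits = "0110100111010001101"
        n, packed = pack_bits(bits)
        assert unpack_bits(n, packed) == bits
        print(n, packed)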

  18. An Improved Construction Algorithm for Huffman Trees and Huffman Codes

    Institute of Scientific and Technical Information of China (English)

    刘帮涛; 罗敏

    2008-01-01

    By sorting the input data with the quicksort algorithm before construction, the time complexity of the Huffman algorithm is reduced from O(n^2) to O(n log2 n). When the number of nodes used to construct the Huffman tree is large, this considerably reduces the program's running time.
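
    The complexity claim rests on a classical fact: once the weights are sorted (O(n log n), e.g., by quicksort), the remaining n - 1 Huffman merges can be done in O(n) overall with two FIFO queues, because the sequence of merged weights is itself nondecreasing. A sketch of the standard two-queue construction, which may differ from the authors' exact procedure:

        from collections import deque

        def huffman_cost_sorted(weights):
            # Leaves arrive sorted; merged subtree weights are appended to a
            # second queue and are automatically in nondecreasing order, so the
            # next smallest item is always at the head of one of the two queues.
            leaves = deque(sorted(weights))
            merged = deque()
            cost = 0
            def pop_smallest():
                if not merged or (leaves and leaves[0] <= merged[0]):
                    return leaves.popleft()
                return merged.popleft()
            while len(leaves) + len(merged) > 1:
                total = pop_smallest() + pop_smallest()
                merged.append(total)
                cost += total          # total weighted codeword length
            return cost

        print(huffman_cost_sorted([5, 2, 2, 1, 1]))   # "abracadabra" weights -> 23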

  19. Ternary Tree and Memory-Efficient Huffman Decoding Algorithm

    Directory of Open Access Journals (Sweden)

    Pushpa R. Suri

    2011-01-01

    Full Text Available In this study, the focus was on the use of a ternary tree instead of a binary tree. A new one-pass algorithm for decoding adaptive Huffman ternary tree codes was implemented. To reduce the memory size and speed up the search for a symbol in a Huffman tree, we exploited the properties of the encoded symbols and proposed a memory-efficient data structure to represent the codeword lengths of the Huffman ternary tree. The first algorithm finds the starting and ending address of each code to determine its length; the second algorithm then decodes the ternary tree code using a binary search method.

  20. A high capacity MP3 steganography based on Huffman coding

    Institute of Scientific and Technical Information of China (English)

    严迪群; 王让定; 张力光

    2011-01-01

    A high-capacity steganography method for MP3 audio is proposed in this paper. According to the characteristics of Huffman coding, the codewords in the Huffman tables are first classified to ensure that the embedding operation does not change the bitstream structure defined by the MP3 standard. Secret data are then embedded by replacing the corresponding codewords, with an embedding strategy based on a mixed-radix (multiple-base) notation system. The structure of the bit stream and the size of the cover audio are unchanged after embedding. The results show that the proposed method obtains higher hiding capacity and better efficiency than the corresponding binary method, while imperceptibility is also well maintained.

  1. Research on Packet Marking Algorithm Based on Huffman Code

    Institute of Scientific and Technical Information of China (English)

    李明珍; 覃运初; 唐凤仙

    2015-01-01

    The key to preventing DDoS attacks is locating the attack source, and packet marking is a hot spot of attack-source locating technology. Aiming at the problems of traditional probabilistic packet marking, an improved algorithm is proposed. The improved algorithm chooses the option field of the IPv4 datagram header as the marking area and uses Huffman coding to compress the marking information, reducing the number of marked packets required for path reconstruction. Using the IPv6 tunnel mode, a copy operation is added when packets pass from an IPv4 network to an IPv6 network, copying the marking information to the hop-by-hop extension header of IPv6, which widens the applicable scope of the improved algorithm. The experimental results show that the improved algorithm is rapid, accurate and efficient: it can complete path reconstruction with only one datagram and can be applied to both IPv4 and IPv6 networks.

  2. Channel Efficiency with Security Enhancement for Remote Condition Monitoring of Multi Machine System Using Hybrid Huffman Coding

    Science.gov (United States)

    Datta, Jinia; Chowdhuri, Sumana; Bera, Jitendranath

    2016-12-01

    This paper presents a novel scheme for remote condition monitoring of a multi-machine system, in which secured and coded data of induction machines with different parameters are communicated between state-of-the-art dedicated hardware units (DHUs) installed at the machine terminals and centralized PC-based machine data management (MDM) software. The DHUs are built for the acquisition of different parameters from their respective machines, and hence are placed at nearby panels in order to acquire the parameters cost-effectively during running conditions. The MDM software collects these data through a communication channel in which all the DHUs are networked using the RS485 protocol. Before transmission, the parameter data are modified with the adoption of differential pulse coded modulation (DPCM) and a Huffman coding technique, and further encrypted with a private key, with different keys used for different DHUs. In this way a data security scheme is adopted during passage through the communication channel in order to avoid any third-party attack. The hybrid mode of DPCM and Huffman coding is chosen to reduce the data packet length. A MATLAB-based simulation and its practical implementation using DHUs at three machine terminals (one healthy three-phase, one healthy single-phase and one faulty three-phase machine) prove its efficacy and usefulness for condition-based maintenance of a multi-machine system. The data at the central control room are decrypted and decoded using the MDM software. In this work it is observed that channel efficiency with respect to the different parameter measurements is increased considerably.

  3. Applications of Dynamic Huffman Code Algorithms in the Data Compression of Power-Line Computer Network

    Institute of Scientific and Technical Information of China (English)

    黄荣辉; 周明天; 曾家智

    2000-01-01

    This thesis analyzes the characteristics of data packets in power-line computer networks and discusses a data compression method currently studied abroad. Briefly describing the different Huffman code algorithms, it presents data compression results from testing data packets in a power-line computer network. The results show that the Advanced Dynamic Huffman Code method is the better choice for power-line computer networks. Finally, methods for improving its operation in engineering practice are proposed.

  4. An Empirical Evaluation of Coding Methods for Multi-Symbol Alphabets.

    Science.gov (United States)

    Moffat, Alistair; And Others

    1994-01-01

    Evaluates the performance of different methods of data compression coding in several situations. Huffman's code, arithmetic coding, fixed codes, fast approximations to arithmetic coding, and splay coding are discussed in terms of their speed, memory requirements, and proximity to optimal performance. Recommendations for the best methods of…

  5. On adaptive Huffman coding based on Look-up table

    Institute of Scientific and Technical Information of China (English)

    雒莎; 葛海波

    2011-01-01

    Considering that existing Huffman coding algorithms are not efficient, an adaptive Huffman coding algorithm based on a look-up table is proposed, which encodes the data according to dynamically changing tables. With this algorithm, a character appearing for the first time is encoded with the codeword of a special "KEY" symbol, after which "KEY" is moved down in the table to await the next first-occurrence character. Compared with other algorithms, the proposed algorithm makes Huffman coding run more efficiently.

  6. Soft Decoding Scheme of Convolution Code Combined with Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    郭东亮; 陈小蔷; 吴乐南

    2002-01-01

    This paper proposes a modification of the soft output Viterbi decoding algorithm (SOVA) that combines convolutional coding with Huffman coding. The idea is to extract bit probability information from the Huffman coding and use it to compute a priori source information, which can be exploited when the channel environment is bad. The suggested scheme does not require changes on the transmitter side. Compared with separate decoding systems, the gain in signal-to-noise ratio is about 0.5-1.0 dB, with limited added complexity. Simulation results show that the suggested algorithm is effective.

  7. An Improved Huffman Coding to Increase Information Capacity in QR Code

    Institute of Scientific and Technical Information of China (English)

    邹敏; 张瑞林; 吴桐树; 王啸

    2015-01-01

    QR codes are used to store information but are easily restricted by their storage capacity. To address the relatively low storage capacity of QR codes, this paper presents a way of expanding their information capacity using an improved Huffman code. First, the data to be encoded are sorted using Shellsort and a Huffman tree is constructed to generate the Huffman code; the compressed data are then encoded as a QR code. When the QR code is scanned and decoded, the properties of the Huffman tree are used to decode the compressed data and recover the original data. The experimental results show that the algorithm can increase the information storage capacity of QR codes.

  8. Modified 8×8 quantization table and Huffman encoding steganography

    Science.gov (United States)

    Guo, Yongning; Sun, Shuliang

    2014-10-01

    A new secure steganography, based on Huffman encoding and modified quantized discrete cosine transform (DCT) coefficients, is provided in this paper. Firstly, the cover image is segmented into 8×8 blocks and a modified DCT transformation is applied to each block. Huffman encoding is applied to code the secret image before embedding. The DCT coefficients are quantized by a modified quantization table. An inverse DCT (IDCT) is conducted on each block, all the blocks are combined together, and the stego image is finally obtained. The experiment shows that the proposed method is better than DCT and Mahender Singh's method in PSNR and capacity.

  9. Modified adaptive Huffman coding algorithm for wireless sensor network

    Institute of Scientific and Technical Information of China (English)

    许磊; 李千目; 朱保平

    2013-01-01

    To reduce the amount of transmitted data, a modified adaptive Huffman coding algorithm is proposed for wireless sensor network (WSN) nodes with limited computational resources. Two groups of test data from Porcupines, as provided with the tailoring adaptive Huffman coding algorithm, were selected as the experimental data. Simulation tests of the two groups of data were performed on TOSSIM, provided by TinyOS, with the algorithm implemented in C++. The results show that, compared with the tailoring adaptive Huffman coding algorithm, both use the same amount of memory, but the compression ratios of the proposed algorithm on the two data sets are higher by 8% and 12%, respectively.

  10. Implementation of Huffman Decoder on FPGA

    Directory of Open Access Journals (Sweden)

    Safia Amir Dahri

    2016-01-01

    Full Text Available Lossless data compression algorithms are the most widely used algorithms in data transmission, reception and storage systems, increasing data rates and saving considerable space on storage devices. Nowadays, different algorithms are implemented in hardware to obtain the benefits of hardware realization. Hardware implementation of algorithms, digital signal processing algorithms and filter realization is done on programmable devices, i.e., FPGAs. Among lossless data compression algorithms, the Huffman algorithm is the most widely used because of its variable-length coding and many other benefits. Huffman algorithms are used in many software applications, e.g., zip and unzip, and in communication. In this paper, a Huffman decoder is implemented on a Xilinx Spartan 3E board. The FPGA is programmed with the Xilinx tool Xilinx ISE 8.2i. The program is written in VHDL, and text data previously encoded by the Huffman algorithm is decoded by the Huffman decoder on the hardware board. In order to visualize the output clearly in waveforms, the same code is simulated in ModelSim v6.4. The Huffman decoder is also implemented in MATLAB for verification of its operation. The FPGA is a configurable device which is efficient in all aspects. Huffman algorithms are also implemented in text applications, image processing, video streaming and many other applications.

  11. Research and Application of Huffman Algorithm

    Institute of Scientific and Technical Information of China (English)

    张荣梅

    2013-01-01

    This paper first analyzes the Huffman algorithm and gives an implementation method for it. It then discusses applications of the Huffman algorithm to compression coding, decision trees, and optimal merge trees in external file sorting.

  12. Evaluation of Huffman and Arithmetic Algorithms for Multimedia Compression Standards

    CERN Document Server

    Shahbahrami, Asadollah; Rostami, Mobin Sabbaghi; Mobarhan, Mostafa Ayoubi

    2011-01-01

    Compression is a technique to reduce the quantity of data without excessively reducing the quality of the multimedia data. The transmission and storage of compressed multimedia data is much faster and more efficient than that of the original uncompressed data. There are various techniques and standards for multimedia data compression, especially for image compression, such as the JPEG and JPEG2000 standards. These standards consist of different functions such as color space conversion and entropy coding. Arithmetic and Huffman coding are normally used in the entropy coding phase. In this paper we try to answer the following question: which entropy coding, arithmetic or Huffman, is more suitable from the compression ratio, performance, and implementation points of view? We have implemented and tested Huffman and arithmetic algorithms. Our results show that the compression ratio of arithmetic coding is better than that of Huffman coding, while the performance of the Huffman coding is higher than A...

  13. Study of Run-length and Huffman Coding in Binary Image and Implementation in Matlab

    Institute of Scientific and Technical Information of China (English)

    魏佳圆; 温媛媛; 周诠

    2015-01-01

    In this paper, a lossless compression algorithm for binary images based on combined run-length and Huffman coding is proposed. The algorithm was tested on different images, and the experimental results indicate that the method performs well on binary images with clear blocks and little texture. Moreover, the algorithm is easy to implement and has practical value in binary image applications.
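
    A minimal sketch of the combined scheme (the toy image and encoding layout are assumptions; a real format would also have to store the Huffman table): each row is run-length encoded, and the run lengths are then Huffman coded.

        import heapq
        from collections import Counter
        from itertools import count, groupby

        def run_lengths(rows):
            # Row-wise RLE: (first pixel, run lengths); runs alternate 0/1.
            return [(row[0], [len(list(g)) for _, g in groupby(row)]) for row in rows]

        def huffman_code(freq):
            tick = count()
            heap = [(w, next(tick), {s: ""}) for s, w in freq.items()]
            heapq.heapify(heap)
            while len(heap) > 1:
                w1, _, a = heapq.heappop(heap)
                w2, _, b = heapq.heappop(heap)
                m = {s: "0" + c for s, c in a.items()}
                m.update({s: "1" + c for s, c in b.items()})
                heapq.heappush(heap, (w1 + w2, next(tick), m))
            return heap[0][2]

        img = [[0] * 10 + [1] * 6, [0] * 9 + [1] * 7,
               [0] * 9 + [1] * 7, [0] * 10 + [1] * 6]
        runs = run_lengths(img)
        code = huffman_code(Counter(r for _, lens in runs for r in lens))
        bits = sum(1 + sum(len(code[r]) for r in lens) for _, lens in runs)
        print(code, "->", bits, "bits vs", sum(map(len, img)), "raw bits")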

  14. Some possible codes for encrypting data in DNA.

    Science.gov (United States)

    Smith, Geoff C; Fiddes, Ceridwyn C; Hawkins, Jonathan P; Cox, Jonathan P L

    2003-07-01

    Three codes are reported for storing written information in DNA. We refer to these codes as the Huffman code, the comma code and the alternating code. The Huffman code was devised using Huffman's algorithm for constructing economical codes. The comma code uses a single base to punctuate the message, creating an automatic reading frame and DNA which is obviously artificial. The alternating code comprises an alternating sequence of purines and pyrimidines, again creating DNA that is clearly artificial. The Huffman code would be useful for routine, short-term storage purposes, supposing--not unrealistically--that very fast methods for assembling and sequencing large pieces of DNA can be developed. The other two codes would be better suited to archiving data over long periods of time (hundreds to thousands of years).

  15. GENERALIZED HUFFMAN TREE AND ITS APPLICATION IN CHINESE CHARACTER CODING

    Institute of Scientific and Technical Information of China (English)

    游洪跃; 汪建武; 陶郁

    2000-01-01

    The authors present the concept of a generalized Huffman tree (GHT), prove some pertinent theorems, and design an algorithm for constructing GHTs. In particular, they apply it to Chinese character coding.

  16. HUFFMAN-BASED GROUP KEY ESTABLISHMENT SCHEME WITH LOCATION-AWARE

    Institute of Scientific and Technical Information of China (English)

    Gu Xiaozhuo; Yang Jianzu; Lan Julong

    2009-01-01

    Time efficiency of key establishment and update is one of the major problems that contributory key management schemes strive to address. To achieve better time efficiency in key establishment, we propose a Location-based Huffman (L-Huffman) scheme. First, users are separated into several small groups to minimize communication cost when they are distributed over large networks. Second, both the users' computation differences and message transmission delays are taken into consideration when Huffman coding is employed to form the optimal key tree. Third, the combined weights in the Huffman tree are placed higher in the key tree to reduce the variance of the average key generation time and to minimize the longest key generation time. Simulations demonstrate that L-Huffman performs much better in wide area networks and a little better in local area networks than the Huffman scheme.

  17. Research of Multimedia Encryption Based on Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    李莉萍; 吴蒙

    2011-01-01

    With multimedia increasingly used on mobile and handheld devices, low-complexity multimedia encryption techniques with modest hardware requirements have attracted growing research interest. Huffman coding is used in many audio and video formats (e.g. MPEG-4, JPEG and MP3), so low-complexity multimedia encryption based on Huffman coding has gradually come into focus. This paper first introduces the earliest such technique, encryption based on multiple Huffman tables, then analyzes its security against ciphertext-only, known-plaintext and chosen-plaintext attacks, and finally proposes an improved Huffman encryption scheme that addresses the identified security problems.

  18. MP3 Steganalysis Based on Huffman Code Table Index

    Institute of Scientific and Technical Information of China (English)

    陈益如; 王让定; 严迪群

    2012-01-01

    MP3Stego is a classic steganographic algorithm for MP3 audio. By analyzing the influence of MP3Stego on the inner loop of the MP3 encoder, it is found that the Huffman code table index values change to different degrees after embedding. In the proposed algorithm, the Huffman table index values are extracted from the decoding parameters of the MP3 audio under test, their second-order difference is calculated as the steganalysis feature, and an SVM is used to classify cover and stego MP3 audio. Experimental results show that the extracted feature effectively reveals the traces of MP3Stego at different embedding rates.
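
    A minimal sketch of the feature extraction described, assuming the per-granule Huffman table indices have already been parsed from the MP3 side information; the parsing itself and the SVM stage are omitted, and the bin range is an illustrative choice based on MP3's 34 big-value tables:

        import numpy as np

        def mp3stego_feature(table_indices, n_tables=34):
            """Histogram of the second-order difference of the per-granule
            Huffman table indices, used as the steganalysis feature."""
            d2 = np.diff(np.asarray(table_indices), n=2)
            bins = np.arange(-n_tables, n_tables + 1)
            hist, _ = np.histogram(d2, bins=bins)
            return hist / max(len(d2), 1)        # normalized feature vector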

  19. HUFFMAN CODES FOR MESSAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    Erna Zuni Astuti

    2013-05-01

    Full Text Available In data communication, a message sent to someone is often too large, so it requires correspondingly large storage space; a large message also takes longer to transmit than a relatively smaller one. Both problems can be addressed by encoding the message, so that content that is actually large is made as short as possible; transmission then becomes relatively faster and storage relatively more efficient than before encoding. From experiments applying and evaluating Huffman codes, it can be concluded that Huffman coding reduces the load, i.e. compresses the data, by more than 50%. Keywords: Huffman code, message compression, communication

  20. Performance Improvement Of Bengali Text Compression Using Transliteration And Huffman Principle

    Directory of Open Access Journals (Sweden)

    Md. Mamun Hossain

    2016-09-01

    Full Text Available In this paper, we propose a new compression technique based on transliteration of Bengali text to English. Compared to Bengali, English is a less symbolic language, so transliterating Bengali text to English reduces the number of characters to be coded. Huffman coding is well known for producing optimal compression. When the Huffman principle is applied to the transliterated text, significant performance improvement is achieved in terms of decoding speed and space requirement compared to Unicode compression.

  1. A SORT-ONCE AND DYNAMIC ENCODING (SODE) BASED HUFFMAN CODING ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    刘燕清; 龚声蓉

    2009-01-01

    As an efficient variable-length coding technique, Huffman coding is increasingly used in text, image and video compression, storage and communication. To improve time and space efficiency and to simplify the coding idea and its operations, this paper first reviews the traditional Huffman algorithm and its concrete procedure, and then proposes a Huffman coding algorithm based on a single sort followed by dynamic encoding. Compared with the traditional Huffman algorithm and with the improved algorithms reported in the recent literature, the method reduces tree construction to linear encoding: with similar space complexity, the time complexity is clearly lower, and the encoding steps and related operations are simpler and easier to implement and port. Experimental results verify the effectiveness of the algorithm.
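
    The paper's exact scheme is not reproduced here, but the classic way to exploit a single initial sort is the two-queue construction, which builds the Huffman tree in linear time because merged weights emerge in non-decreasing order. A minimal sketch:

        from collections import deque

        def huffman_code_lengths(sorted_freqs):
            """Huffman code lengths for frequencies sorted in ascending order.

            After the single initial sort, two FIFO queues (leaves, merged
            nodes) replace the heap: the smallest weight is always at the
            front of one of them, so construction is O(n)."""
            n = len(sorted_freqs)
            if n == 1:
                return [1]
            weight = list(sorted_freqs)          # node id -> weight
            parent = {}                          # node id -> parent id
            leaves = deque(range(n))
            merged = deque()

            def pop_min():
                if leaves and (not merged or weight[leaves[0]] <= weight[merged[0]]):
                    return leaves.popleft()
                return merged.popleft()

            while leaves or len(merged) > 1:
                a, b = pop_min(), pop_min()
                node = len(weight)
                weight.append(weight[a] + weight[b])
                parent[a] = parent[b] = node
                merged.append(node)

            def depth(i):                        # code length = leaf depth
                d = 0
                while i in parent:
                    i, d = parent[i], d + 1
                return d

            return [depth(i) for i in range(n)]

        print(huffman_code_lengths([1, 1, 2, 3]))   # -> [3, 3, 2, 1]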

  2. Bit-Based Joint Source-Channel Decoding of Huffman Encoded Markov Multiple Sources

    Directory of Open Access Journals (Sweden)

    Weiwei Xiang

    2010-04-01

    Full Text Available Multimedia transmission over time-varying channels such as wireless channels has recently motivated research on joint source-channel techniques. In this paper, we present a method for joint source-channel soft-decision decoding of Huffman-encoded multiple sources. By exploiting the a priori bit probabilities in multiple sources, the decoding performance is greatly improved. Compared with the single-source decoding scheme addressed by Marion Jeanne, the proposed technique is more practical in wideband wireless communications. Simulation results show that our new method obtains substantial improvements with a minor increase in complexity. For two sources, the gain in SNR is around 1.5 dB using convolutional codes when the symbol-error rate (SER) reaches 10^-2, and around 2 dB using Turbo codes.

  3. Joint compression and encryption using chaotically mutated Huffman trees

    Science.gov (United States)

    Hermassi, Houcemeddine; Rhouma, Rhouma; Belghith, Safya

    2010-10-01

    This paper introduces a new scheme for joint compression and encryption using the Huffman codec. A basic tree is first generated for a given message; then, based on a keystream generated from a chaotic map and depending on the input message, the basic tree is mutated without changing the statistical model. Hence a symbol can be coded by more than one codeword having the same length. The security of the scheme is tested against the known-plaintext attack and the brute-force attack. Performance analysis including encryption/decryption speed, additional computational complexity and compression ratio is given.
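
    The core mutation step can be sketched as follows, assuming the tree is represented as nested (left, right) tuples with symbols at the leaves: swapping the children of an internal node changes the codewords but not their lengths, so compression is unaffected. The logistic map stands in for the paper's chaotic keystream and is purely an illustrative assumption:

        def logistic_bits(x0=0.7, r=3.99):
            """Illustrative chaotic keystream (logistic map), one bit per step."""
            x = x0
            while True:
                x = r * x * (1 - x)
                yield 1 if x > 0.5 else 0

        def mutate(node, keystream):
            """Swap the children of an internal node when the key bit is 1."""
            if isinstance(node, tuple):              # internal: (left, right)
                left, right = node
                if next(keystream):
                    left, right = right, left
                return (mutate(left, keystream), mutate(right, keystream))
            return node                              # leaf: a symbol

        tree = ("e", ("t", "a"))                     # e=0, t=10, a=11
        print(mutate(tree, logistic_bits()))         # same lengths, new codewords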

  4. PERFORMANCE COMPARISON OF HUFFMAN AND LEMPEL-ZIV WELCH DATA COMPRESSION FOR WIRELESS SENSOR NODE APPLICATION

    Directory of Open Access Journals (Sweden)

    Asral Bahari Jambek

    2014-01-01

    Full Text Available Wireless Sensor Networks (WSNs) are becoming important in today's technology for monitoring our surrounding environment. However, wireless sensor nodes are powered by a limited energy supply; to extend the lifetime of the device, energy consumption must be reduced. Data transmission is known to consume the largest amount of energy in a sensor node, so one method to reduce the energy used is to compress the data before transmitting it. This study analyses the performance of the Huffman and Lempel-Ziv-Welch (LZW) algorithms when compressing data commonly used in WSNs. From the experimental results, the Huffman algorithm gives better performance than the LZW algorithm for this type of data: it is able to reduce the data size by 43% on average and is four times faster than the LZW algorithm.

  5. Hybrid codes: Methods and applications

    Energy Technology Data Exchange (ETDEWEB)

    Winske, D. (Los Alamos National Lab., NM (USA)); Omidi, N. (California Univ., San Diego, La Jolla, CA (USA))

    1991-01-01

    In this chapter we discuss "hybrid" algorithms used in the study of low frequency electromagnetic phenomena, where one or more ion species are treated kinetically via standard PIC methods used in particle codes and the electrons are treated as a single charge-neutralizing massless fluid. Other types of hybrid models are possible, as discussed in Winske and Quest, but hybrid codes with particle ions and massless fluid electrons have become the most common for simulating space plasma physics phenomena in the last decade, as we discuss in this paper.

  6. Visually Improved Image Compression by Combining EZW Encoding with Texture Modeling using Huffman Encoder

    Directory of Open Access Journals (Sweden)

    Vinay U. Kale

    2010-05-01

    Full Text Available This paper proposes a technique for image compression which uses the Wavelet-based Image/Texture Coding Hybrid (WITCH) scheme [1] in combination with a Huffman encoder. It implements a hybrid coding approach while nevertheless preserving the features of progressive and lossless coding. The hybrid scheme was designed to encode the structural image information with the Embedded Zerotree Wavelet (EZW) encoding algorithm [2] and the stochastic texture in a model-based manner, and this encoded data is then compressed using a Huffman encoder. The proposed scheme achieves superior subjective quality while increasing the compression ratio by more than a factor of three or even four. With this technique it is possible to achieve compression ratios as high as 10 to 12, but with some minor distortions in the encoded image.

  7. Prefix Codes: Equiprobable Words, Unequal Letter Costs

    OpenAIRE

    Golin, Mordecai; Young, Neal E.

    2002-01-01

    Describes a near-linear-time algorithm for a variant of Huffman coding, in which the letters may have non-uniform lengths (as in Morse code), but with the restriction that each word to be encoded has equal probability. [See also "Huffman Coding with Unequal Letter Costs" (2002).]

  8. Simulation of Huffman codec of text based on Matlab

    Institute of Scientific and Technical Information of China (English)

    王向鸿

    2013-01-01

    Full Text Available Given the current state of data compression technology, this paper describes the variable-length Huffman encoding and decoding method. To illustrate the concrete process and characteristics of the Huffman codec, a Matlab simulation is used that converts a priority queue into a binary tree and builds dictionary tables for encoding and decoding, and a Huffman codec simulation is carried out on a random English text file. The probability and codeword of each letter, the entropy, average code length and redundancy, and the encoded and decoded output sequences are obtained, showing a definite compression effect.
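
    The quantities this record reports can be reproduced with a short script. This is a generic heap-based Huffman construction in Python rather than the record's Matlab code, and the sample string is an arbitrary choice:

        import heapq, math
        from collections import Counter

        def huffman_codes(text):
            """Return ({char: codeword}, frequency table) for `text`."""
            freq = Counter(text)
            heap = [(n, i, {ch: ""}) for i, (ch, n) in enumerate(freq.items())]
            heapq.heapify(heap)
            tick = len(heap)                       # unique tie-breaker
            while len(heap) > 1:
                n1, _, c1 = heapq.heappop(heap)
                n2, _, c2 = heapq.heappop(heap)
                book = {ch: "0" + c for ch, c in c1.items()}
                book.update({ch: "1" + c for ch, c in c2.items()})
                heapq.heappush(heap, (n1 + n2, tick, book))
                tick += 1
            return heap[0][2], freq

        codes, freq = huffman_codes("this is an example of a huffman tree")
        total = sum(freq.values())
        entropy = -sum(n / total * math.log2(n / total) for n in freq.values())
        avg_len = sum(n / total * len(codes[ch]) for ch, n in freq.items())
        print(f"entropy {entropy:.3f} bits, average length {avg_len:.3f}, "
              f"redundancy {avg_len - entropy:.3f}")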

  9. Rate-adaptive Constellation Shaping for Near-capacity Achieving Turbo Coded BICM

    DEFF Research Database (Denmark)

    Yankov, Metodi Plamenov; Forchhammer, Søren; Larsen, Knud J.;

    2014-01-01

    In this paper the problem of constellation shaping is considered. Mapping functions are designed for a many-to-one signal shaping strategy, combined with turbo coded Bit-Interleaved Coded Modulation (BICM), based on symmetric Huffman codes with binary reflected Gray-like properties. An algorithm is derived for finding the Huffman code with such properties for a variety of alphabet sizes, and near-capacity performance is achieved for a wide SNR region by dynamically choosing the optimal code rate, constellation size and mapping function based on the operating SNR point and assuming perfect channel quality estimation. Gains of more than 1 dB are observed for high SNR compared to conventional turbo coded BICM, and it is shown that the mapping functions designed here significantly outperform current state-of-the-art Turbo-Trellis Coded Modulation and other existing constellation shaping methods...

  10. Discussion and Improvement of the Huffman Algorithm

    Institute of Scientific and Technical Information of China (English)

    毕智超

    2011-01-01

    The optimal binary tree is a very important data structure. This paper first analyzes the optimal binary tree, the Huffman tree, and gives a description of the algorithm. A quicksort is then used to sort the data to be processed, which lowers the time complexity of the Huffman algorithm. Finally, concerning the application of Huffman trees to coding, i.e. Huffman codes, the storage structure of Huffman coding is improved, with a brief explanation.

  11. A novel technique for image steganography based on Block-DCT and Huffman Encoding

    Directory of Open Access Journals (Sweden)

    A.Nag

    2010-06-01

    Full Text Available Image steganography is the art of hiding information in a cover image. This paper presents a novel technique for image steganography based on block DCT, where the DCT is used to transform the original (cover) image blocks from the spatial domain to the frequency domain. First, a gray-level image of size M × N is divided into disjoint 8 × 8 blocks and a two-dimensional Discrete Cosine Transform (2-D DCT) is performed on each of the P = MN / 64 blocks. Huffman encoding is then performed on the secret messages/images before embedding, and each bit of the Huffman code of the secret message/image is embedded in the frequency domain by altering the least significant bit of each of the DCT coefficients of the cover image blocks. The experimental results show that the algorithm has a high capacity and good invisibility. Moreover, the PSNR of the cover image with the stego image shows better results in comparison with other existing steganography approaches. Furthermore, satisfactory security is maintained, since the secret message/image cannot be extracted without knowing the decoding rules and the Huffman table.

  12. Research on Differential Coding Method for Satellite Remote Sensing Data Compression

    Science.gov (United States)

    Lin, Z. J.; Yao, N.; Deng, B.; Wang, C. Z.; Wang, J. H.

    2012-07-01

    Data compression in the process of satellite Earth data transmission is of great concern for improving the efficiency of data transmission. The information amounts inherent in remote sensing images provide a foundation for data compression in terms of information theory; in particular, the distinct degrees of uncertainty inherent in distinct land covers result in different information amounts. This paper first proposes a lossless differential encoding method to improve compression rates. A district forecast differential encoding method is then proposed to improve the compression rates further. Considering that stereo measurements in modern photogrammetry are basically accomplished by means of automatic stereo image matching, an edge protection operator is finally utilized to appropriately filter out high-frequency noise, which helps magnify the signals and further improve the compression rates. The three steps were applied to a Landsat TM multispectral image and a set of SPOT-5 panchromatic images of four typical land cover types (urban areas, farm lands, mountain areas and water bodies). Results revealed that the average code lengths obtained by the differential encoding method were closer to the information amounts inherent in remote sensing images than those of Huffman encoding, and the compression rates were improved to some extent. Furthermore, the compression rates of the four land cover images obtained by the district forecast differential encoding method were nearly doubled; for the images with edge features preserved, the compression rates are on average four times as large as those of the original images.

  13. Design and implementation for static Huffman encoding hardware with parallel shifting algorithm

    CERN Document Server

    Tae Yeon Lee

    2004-01-01

    This paper presents an implementation of static Huffman encoding hardware for real-time lossless compression in the ECAL of the CMS detector. The construction of the Huffman encoding hardware shows an implementation optimized for logic size, and the number of logic gates of the parallel shift operation for the hardware is analyzed; two implementation methods for the parallel shift operation are compared in terms of logic size. An experiment with the hardware in a simulated ECAL environment covering 99.9999% of the original distribution shows a promising result: the compression rate was 4.0039 and the maximum length of the stored data in the input buffer was 44. (14 refs).

  14. Report on HOM experimental methods and code

    CERN Document Server

    Shinton, I R R; Flisgen, T

    2013-01-01

    Experimental methods and the various codes used are reported, with the aim of understanding the signals picked up from the higher-order modes in the third-harmonic cavities within the ACC39 module at FLASH. Both commercial computer codes and codes written for the express purpose of understanding the sensitivity of the modal profiles to geometrical errors and other sources of experimental error have been used.

  15. A Compression & Encryption Algorithm on DNA Sequences Using Dynamic Look up Table and Modified Huffman Techniques

    Directory of Open Access Journals (Sweden)

    Syed Mahamud Hossein

    2013-09-01

    Full Text Available Storing, transmitting and securing DNA sequences are well-known research challenges, and the problem has been magnified by the increasing discovery and availability of DNA sequences. We present a DNA sequence compression algorithm based on a Dynamic Look-Up Table (DLUT) and a modified Huffman technique. The DLUT consists of 4^3 = 64 sub-strings, each three bases long; each sub-string is individually coded by a single ASCII character from 33 ('!') to 96 ('`'), and vice versa. Encoding depends on an encryption key chosen by the user from the four bases {a, t, g, c}, and decoding requires the decryption key provided by the encoder, so decoding requires authenticated input. The sub-strings are combined into a DLUT-based pre-coding routine. The algorithm is tested on the reverse, complement and reverse-complement of DNA sequences, and also on artificial DNA sequences of equivalent length. Speed of encryption and security level are two important measurements for evaluating any encryption system. With the proliferation of ubiquitous computing systems, where digital contents are accessible through resource-constrained biological databases, security is a very important issue, and information security is the most challenging question in protecting data from unauthorized users. The proposed method may protect data from hackers and provides three-tier security: tier one is the ASCII code, tier two is the nucleotide (a, t, g or c) chosen by the user, and tier three is the change of label or of node position in the Huffman tree. Compression of genome sequences will help to increase the efficiency of their use. The greatest advantages of this algorithm are fast execution, small memory occupation and easy implementation, since the program implementing the technique was written originally in the C language
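
    The look-up idea itself is compact; a minimal sketch of the DLUT stage is given below, with the base ordering acting as the user key. Key handling, the modified Huffman stage, and padding of sequences whose length is not a multiple of 3 are all simplifying assumptions:

        from itertools import product

        def make_dlut(key="atgc"):
            """Map every 3-base sub-string to one ASCII char in 33..96 (4^3 = 64)."""
            triplets = ("".join(t) for t in product(key, repeat=3))
            return {t: chr(33 + i) for i, t in enumerate(triplets)}

        def dlut_encode(seq, key="atgc"):
            lut = make_dlut(key)         # assumes len(seq) % 3 == 0
            return "".join(lut[seq[i:i + 3]] for i in range(0, len(seq), 3))

        def dlut_decode(text, key="atgc"):
            inv = {v: k for k, v in make_dlut(key).items()}
            return "".join(inv[ch] for ch in text)

        assert dlut_decode(dlut_encode("acgtacgta")) == "acgtacgta"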

  16. Coding Long Contour Shapes of Binary Objects

    Science.gov (United States)

    Sánchez-Cruz, Hermilo; Rodríguez-Díaz, Mario A.

    This is an extension of the paper that appeared in [15]. This time, we compare four methods: arithmetic coding applied to the 3OT chain code (Arith-3OT), arithmetic coding applied to DFCCE (Arith-DFCCE), Huffman coding applied to the DFCCE chain code (Huff-DFCCE), and, to measure the efficiency of the chain codes, we compare the methods with JBIG, which constitutes an international standard. In the search for a suitable and better representation of contour shapes, our tests suggest that a sound method to represent contour shapes is 3OT, because arithmetic coding applied to it gives the best results with respect to JBIG, independently of the perimeter of the contour shapes.

  17. Variable-Length Compression Coding Without a Huffman Tree Based on Probability Compensation

    Institute of Scientific and Technical Information of China (English)

    杨多星; 刘蕴红

    2011-01-01

    The compression coding methods now in wide use are all implemented by means of a Huffman tree, so many operations revolve around the tree. To simplify the encoding process, a variable-length optimal encoding method that needs no Huffman tree is proposed: through a probability compensation process, the optimal code length of every source symbol is obtained directly. Once the code lengths and probabilities are known, the final codewords can also be determined without a Huffman tree, and the result can be shown to satisfy the variable-length optimal coding theorem and the prefix condition. Tests show that the method obtains the variable-length optimal code quickly and effectively and simplifies the computation and storage involved in variable-length coding.
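
    The probability-compensation step is specific to the paper and not reproduced here, but the second half of the idea (turning a valid set of code lengths directly into prefix codewords with no tree) is the standard canonical assignment, sketched below:

        def canonical_codes(lengths):
            """Assign prefix codewords from code lengths alone (no tree).

            Symbols are processed shortest-first; each codeword is the
            previous one plus 1, left-shifted whenever the length grows."""
            order = sorted(range(len(lengths)), key=lambda i: (lengths[i], i))
            codes, code, prev_len = [None] * len(lengths), 0, 0
            for i in order:
                code <<= lengths[i] - prev_len
                codes[i] = format(code, "0{}b".format(lengths[i]))
                prev_len = lengths[i]
                code += 1
            return codes

        print(canonical_codes([2, 1, 3, 3]))   # ['10', '0', '110', '111']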

  18. A Multi-coordinate Linkage Interpolation Method with Huffman Code Tree

    Institute of Scientific and Technical Information of China (English)

    李志勇; 赵万生; 张勇

    2003-01-01

    Taking the relative displacement of each coordinate in a multi-axis linkage interpolation command as the node weights, an interpolation tree is built with the Huffman algorithm, and each interpolation step searches the tree once using the point-by-point comparison method. The coordinate grouping based on the dynamic Huffman coding tree is optimal and gives the fastest interpolation. Taking the number of linked axes as the input, the time complexity of the algorithm is logarithmic. The algorithm is used in EDM machining of integral shrouded turbine blades for aero and rocket engines.

  19. Fractal methods in image analysis and coding

    OpenAIRE

    Neary, David

    2001-01-01

    In this thesis we present an overview of image processing techniques which use fractal methods in some way. We show how these fields relate to each other, and examine various aspects of fractal methods in each area. The three principal fields of image processing and analysis that we examine are texture classification, image segmentation and image coding. In the area of texture classification, we examine fractal dimension estimators, comparing these methods to other methods in use, a...

  20. A novel technique for image steganography based on Block-DCT and Huffman Encoding

    CERN Document Server

    Nag, A; Sarkar, D; Sarkar, P P; 10.5121/ijcsit.2010.2308

    2010-01-01

    Image steganography is the art of hiding information in a cover image. This paper presents a novel technique for image steganography based on block DCT, where the DCT is used to transform the original (cover) image blocks from the spatial domain to the frequency domain. First, a gray-level image of size M x N is divided into disjoint 8 x 8 blocks and a two-dimensional Discrete Cosine Transform (2-D DCT) is performed on each of the P = MN / 64 blocks. Huffman encoding is then performed on the secret messages/images before embedding, and each bit of the Huffman code of the secret message/image is embedded in the frequency domain by altering the least significant bit of each of the DCT coefficients of the cover image blocks. The experimental results show that the algorithm has a high capacity and good invisibility. Moreover, the PSNR of the cover image with the stego image shows better results in comparison with other existing steganography approaches. Furthermore, satisfactory security is maintained since the secret message/image ca...

  1. Numerical method improvement for a subchannel code

    Energy Technology Data Exchange (ETDEWEB)

    Ding, W.J.; Gou, J.L.; Shan, J.Q. [Xi' an Jiaotong Univ., Shaanxi (China). School of Nuclear Science and Technology

    2016-07-15

    Previous studies showed that subchannel codes spend most of their CPU time solving the matrix formed by the conservation equations, and traditional matrix-solving methods such as Gaussian elimination and Gauss-Seidel iteration cannot meet the requirement of computational efficiency. Therefore, a new algorithm for solving the block penta-diagonal matrix is designed based on Stone's incomplete LU (ILU) decomposition method. In the new algorithm, the original block penta-diagonal matrix is decomposed into a block upper triangular matrix and a lower block triangular matrix as well as a small nonzero matrix; after that, the LU algorithm is applied to solve the matrix until convergence. To compare computational efficiency, the newly designed algorithm is applied to the ATHAS code in this paper. The calculation results show that more than 80% of the total CPU time can be saved with the new ILU algorithm for a 324-channel PWR assembly problem, compared with the original ATHAS code.

  2. Embedded-Platform Huffman Decoding Optimization Based on a Quad-Tree

    Institute of Scientific and Technical Information of China (English)

    鲁云飞; 何明华

    2012-01-01

    Considering the limited resources of embedded devices, a Huffman decoding optimization algorithm based on a quad tree is proposed in this paper. In the decoding process, the Huffman code table is first expressed as a quad-tree structure and then rebuilt as a one-dimensional array, making full use of numerical calculation in place of test-and-jump operations. To test the decoding performance, the method is applied to real-time MP3 decoding on an embedded platform. The results show that the algorithm has a small memory footprint, a fast decoding rate and low complexity; compared with other optimized algorithms, it is better suited to embedded devices.
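
    A minimal sketch of the flattened-table idea: the quad tree consumes two bits per step, and the one-dimensional array holds either a child offset (non-negative entry) or a decoded symbol (negative entry). This layout, and the restriction to even-length codewords, are simplifying assumptions; real MP3 tables also need emit-and-continue entries for odd-length codes.

        def quadtree_decode(bits, table, symbols):
            """Decode with a quad tree stored as a flat array, 2 bits per step.

            table[node + branch] >= 0 : index of the next internal node
            table[node + branch] <  0 : emit symbols[-entry - 1], restart"""
            out, node, i = [], 0, 0
            while i + 1 < len(bits):
                branch = (bits[i] << 1) | bits[i + 1]   # two bits -> 0..3
                i += 2
                entry = table[node + branch]
                if entry < 0:
                    out.append(symbols[-entry - 1])
                    node = 0
                else:
                    node = entry
            return out

        # codes: a=00 b=01 c=10 d=1100 e=1101 f=1110 g=1111
        table = [-1, -2, -3, 4,   -4, -5, -6, -7]
        print(quadtree_decode([0, 0, 1, 1, 0, 0], table, "abcdefg"))  # ['a', 'd']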

  3. Finding maximum JPEG image block code size

    Science.gov (United States)

    Lakhani, Gopal

    2012-07-01

    We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since the DC coefficient is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first is an upper bound for the sum of squares of the AC coefficients of a block, used to discard sequences that cannot represent valid DCT blocks. The second type of constraint is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well: the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.

  4. A FINE GRANULAR JOINT SOURCE CHANNEL CODING METHOD

    Institute of Scientific and Technical Information of China (English)

    Zhuo Li; Shen Lansun; Zhu Qing

    2003-01-01

    An improved FGS (Fine Granular Scalability) coding method is proposed in this letter, which is based on human visual characteristics. This method adjusts the FGS coding frame rate according to an evaluation of the video sequence, so as to improve the coding efficiency and the subjectively perceived quality of the reconstructed images. Finally, a fine granular joint source-channel coding is proposed based on the source coding method, which not only utilizes network resources efficiently, but also guarantees reliable transmission of the video information.

  5. Comparison of transform coding methods with an optimal predictor for the data compression of digital elevation models

    Science.gov (United States)

    Lewis, Michael

    1994-01-01

    Statistical encoding techniques enable the reduction of the number of bits required to encode a set of symbols, and are derived from their probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange Multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.

  6. Does an Arithmetic Coding Followed by Run-length Coding Enhance the Compression Ratio?

    Directory of Open Access Journals (Sweden)

    Mohammed Otair

    2015-07-01

    Full Text Available Compression is a technique to minimize the size of an image without excessively decreasing its quality; the transfer of a compressed image is then much more efficient and rapid than that of the original image. Arithmetic and Huffman coding are the most widely used techniques for entropy coding. This study tries to prove that RLC may be added after arithmetic coding as an extra processing step, so that the image may be coded efficiently without any further degradation of image quality. The main purpose of this study is thus to answer the following question: which entropy coding, arithmetic with RLC or Huffman with RLC, is more suitable from the compression ratio perspective? Finally, experimental results show that arithmetic coding followed by RLC yields better compression performance than Huffman coding with RLC.

  7. Transference & Retrieval of Pulse-code modulation Audio over Short Messaging Service

    CERN Document Server

    Khan, Muhammad Fahad

    2012-01-01

    The paper presents a method for transferring PCM (Pulse-Code Modulation) audio messages through SMS (Short Message Service) over a GSM (Global System for Mobile Communications) network. As SMS is a text-based service, it cannot carry voice. Our method enables voice transfer through SMS by converting PCM audio into characters; Huffman compression is then applied to reduce the number of characters, which are later set as the payload text of the SMS. To test the method, we developed an application on the J2ME platform.

  8. Enhanced motion coding in MC-EZBC

    Science.gov (United States)

    Chen, Junhua; Zhang, Wenjun; Wang, Yingkun

    2005-07-01

    Since hierarchical variable-size block matching and bidirectional motion compensation are used in motion-compensated embedded zero block coding (MC-EZBC), the motion information consists of a motion vector quadtree map and the motion vectors. In the conventional motion coding scheme, the quadtree structure is coded directly, the motion vector modes are coded with Huffman codes, and the motion vector differences are coded by an m-ary arithmetic coder with 0-order models. In this paper we propose a new motion coding scheme which uses an extension of the CABAC algorithm and new context modeling for quadtree structure coding and mode coding. In addition, we use a new scalable motion coding method which scales the motion vector quadtrees according to the rate-distortion slope of the tree nodes. Experimental results show that the new coding scheme increases the efficiency of the motion coding by more than 25%. The performance of the system is improved accordingly, especially at low bit rates; moreover, with scalable motion coding, the subjective and objective coding performance is further enhanced in low bit rate scenarios.

  9. Three Methods for Occupation Coding Based on Statistical Learning

    Directory of Open Access Journals (Sweden)

    Gweon Hyukjun

    2017-03-01

    Full Text Available Occupation coding, an important task in official statistics, refers to coding a respondent's text answer into one of many hundreds of occupation codes. To date, occupation coding is still at least partially conducted manually, at great expense. We propose three methods for automatic coding: combining separate models for the detailed occupation codes and for aggregate occupation codes, a hybrid method that combines a duplicate-based approach with a statistical learning algorithm, and a modified nearest-neighbor approach. Using data from the German General Social Survey (ALLBUS), we show that the proposed methods improve on both the coding accuracy of the underlying statistical learning algorithm and the coding accuracy of duplicates where duplicates exist. Further, we find that defining duplicates based on n-gram variables (a concept from text mining) is preferable to a definition based on exact string matches.

  10. A Study on Ways of Lossless Image Compression and Coding and Relevant Comparisons

    Institute of Scientific and Technical Information of China (English)

    冉晓娟

    2014-01-01

    This essay studies the principles of three lossless image compression methods, run-length coding, LZW coding and Huffman coding, and compares and analyzes them, which helps in choosing a suitable compression coding method for different types of images.

  11. A Subband Coding Method for HDTV

    Science.gov (United States)

    Chung, Wilson; Kossentini, Faouzi; Smith, Mark J. T.

    1995-01-01

    This paper introduces a new HDTV coder based on motion compensation, subband coding, and high order conditional entropy coding. The proposed coder exploits the temporal and spatial statistical dependencies inherent in the HDTV signal by using intra- and inter-subband conditioning for coding both the motion coordinates and the residual signal. The new framework provides an easy way to control the system complexity and performance, and inherently supports multiresolution transmission. Experimental results show that the coder outperforms MPEG-2, while still maintaining relatively low complexity.

  12. A Fast Fractal Image Compression Coding Method

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Fast algorithms for reducing the encoding complexity of fractal image coding have recently been an important research topic. The search for the best matched domain block is the most computation-intensive part of the fractal encoding process. In this paper, a fast fractal approximation coding scheme implemented on a personal computer and based on matching in the range block's neighbourhood is presented. Experimental results show that the proposed algorithm is very simple to implement, fast in encoding time and high in compression ratio, while the PSNR is almost the same as in Barnsley's fractal block coding.

  13. A Rate-Distortion Optimized Coding Method for Region of Interest in Scalable Video Coding

    Directory of Open Access Journals (Sweden)

    Hongtao Wang

    2015-01-01

    original ones is also considered during rate-distortion optimization so that a reasonable trade-off between coding efficiency and decoding drift can be made. Besides, a new Lagrange multiplier derivation method is developed for further coding performance improvement. Experimental results demonstrate that the proposed method achieves significant bitrate saving compared to existing methods.

  14. Code Verification by the Method of Manufactured Solutions

    Energy Technology Data Exchange (ETDEWEB)

    SALARI,KAMBIZ; KNUPP,PATRICK

    2000-06-01

    A procedure for code verification by the Method of Manufactured Solutions (MMS) is presented. Although the procedure requires a certain amount of creativity and skill, we show that MMS can be applied to a variety of engineering codes which numerically solve partial differential equations. This is illustrated by detailed examples from computational fluid dynamics. The strength of the MMS procedure is that it can identify any coding mistake that affects the order of accuracy of the numerical method. A set of examples which use a blind-test protocol demonstrates the kinds of coding mistakes that can (and cannot) be exposed via the MMS code verification procedure. The principal advantage of the MMS procedure over traditional methods of code verification is that code capabilities are tested in full generality; the procedure thus results in a high degree of confidence that all coding mistakes which prevent the equations from being solved correctly have been identified.

  15. Totally Coded Method for Signal Flow Graph Algorithm

    Institute of Scientific and Technical Information of China (English)

    XU Jing-bo; ZHOU Mei-hua

    2002-01-01

    After a code table has been established by means of node association information from the signal flow graph, the totally coded method (TCM) is applied purely in the domain of code operations, without any figure-searching algorithm. The code series (CS) have a holo-information nature, so that both the content and the sign of each gain term can be determined via the coded method. The principle of this method is simple and it is suited to computer programming. The capability of computer-aided analysis for switched current networks (SIN) can thereby be enhanced.

  16. A FINE GRANULAR JOINT SOURCE CHANNEL CODING METHOD

    Institute of Scientific and Technical Information of China (English)

    ZhuoLi; ShenLanusun

    2003-01-01

    An improved FGS (Fine Granular Scalability) coding method is proposed in this letter, which is based on human visual characteristics. This method adjusts the FGS coding frame rate according to an evaluation of the video sequence, so as to improve the coding efficiency and the subjectively perceived quality of the reconstructed images. Finally, a fine granular joint source-channel coding is proposed based on the source coding method, which not only utilizes network resources efficiently, but also guarantees reliable transmission of the video information.

  17. Encoding of multi-alphabet sources by binary arithmetic coding

    Science.gov (United States)

    Guo, Muling; Oka, Takahumi; Kato, Shigeo; Kajiwara, Hiroshi; Kawamura, Naoto

    1998-12-01

    When encoding a multi-alphabet source, the symbol sequence can be encoded directly by a multi-alphabet arithmetic encoder, or it can first be converted into several binary sequences, each of which is then encoded by a binary arithmetic encoder such as the L-R arithmetic coder. Arithmetic coding, however, requires arithmetic operations for each symbol and is computationally heavy. In this paper, a binary representation method using a Huffman tree is introduced to reduce the number of arithmetic operations, and a new probability approximation for L-R arithmetic coding is further proposed to improve the coding efficiency when the probability of the LPS (Least Probable Symbol) is near 0.5. Simulation results show that the proposed scheme has high coding efficiency and can reduce the number of coding symbols.
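
    The binarization step itself is tiny: each symbol is replaced by its path through the Huffman tree, so the expected number of binary coding decisions per symbol equals the Huffman average code length, which is exactly what the tree minimizes. A minimal sketch (the codebook contents are illustrative):

        def binarize(symbols, codebook):
            """Turn a multi-alphabet sequence into the binary decisions
            that a binary arithmetic coder (e.g. an L-R coder) would encode."""
            return [int(bit) for s in symbols for bit in codebook[s]]

        book = {"e": "0", "t": "10", "a": "11"}      # a small Huffman codebook
        print(binarize("tea", book))                 # [1, 0, 0, 1, 1]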

  18. Permutation Matrix Method for Dense Coding Using GHZ States

    Institute of Scientific and Technical Information of China (English)

    JIN Rui-Bo; CHEN Li-Bing; WANG Fa-Qiang; SU Zhi-Kun

    2008-01-01

    We present a new method, called the permutation matrix method, to perform dense coding using Greenberger-Horne-Zeilinger (GHZ) states. We show that this method makes the study of dense coding systematic and regular. It also has high potential to be realized physically.

  19. New Methods for Lossless Image Compression Using Arithmetic Coding.

    Science.gov (United States)

    Howard, Paul G.; Vitter, Jeffrey Scott

    1992-01-01

    Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial precision matching method.…

  20. A Compression Method for Raw Radar Video Signals Combining DPCM and Adaptive Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    李灵芝; 江晶

    2006-01-01

    To handle high-volume radar data transmission and meet the requirement of real-time lossless compression of the raw radar video signal, a compression scheme combining DPCM (Differential Pulse Code Modulation) with adaptive Huffman coding is presented, based on the characteristics of the raw radar video signal, and the effectiveness of the algorithm and its overflow problem are analyzed. Experiments show that, compared with conventional adaptive Huffman coding, the method improves real-time performance and raises the compression ratio.
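
    A minimal sketch of the differential (DPCM) stage; the adaptive Huffman stage that would compress the small residuals, and the overflow handling analyzed in the paper, are omitted:

        def dpcm_encode(samples):
            """Send the first sample, then sample-to-sample differences."""
            prev, out = 0, []
            for s in samples:
                out.append(s - prev)     # residuals cluster near zero
                prev = s
            return out

        def dpcm_decode(residuals):
            prev, out = 0, []
            for r in residuals:
                prev += r
                out.append(prev)
            return out

        video = [7, 9, 9, 8, 120, 121]
        assert dpcm_decode(dpcm_encode(video)) == video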

  1. Calibration Methods for Reliability-Based Design Codes

    DEFF Research Database (Denmark)

    Gayton, N.; Mohamed, A.; Sørensen, John Dalsgaard

    2004-01-01

    The calibration methods are applied to define the optimal code format according to some target safety levels. The calibration procedure can be seen as a specific optimization process where the control variables are the partial factors of the code. Different methods are available in the literature...

  2. Lattice Boltzmann method fundamentals and engineering applications with computer codes

    CERN Document Server

    Mohamad, A A

    2014-01-01

    Introducing the Lattice Boltzmann Method in a readable manner, this book provides detailed examples with complete computer codes. It avoids the most complicated mathematics and physics without sacrificing the basic fundamentals of the method.

  3. A novel method of generating and remembering international morse codes

    Digital Repository Service at National Institute of Oceanography (India)

    Charyulu, R.J.K.

    A novel method of generating and remembering International Morse Code is presented in this paper. The method requires only memorizing 4 key sentences and requires knowledge of writing binary equivalents of the decimal numerals 1 to 16. However much...

  4. Conditional entropy coding of DCT coefficients for video compression

    Science.gov (United States)

    Sipitca, Mihai; Gillman, David W.

    2000-04-01

    We introduce conditional Huffman encoding of DCT run-length events to improve the coding efficiency of low- and medium-bit rate video compression algorithms. We condition the Huffman code for each run-length event on a classification of the current block. We classify blocks according to coding mode and signal type, which are known to the decoder, and according to energy, which the decoder must receive as side information. Our classification schemes improve coding efficiency with little or no increased running time and some increased memory use.

  5. An Overview of the Monte Carlo Methods, Codes, & Applications Group

    Energy Technology Data Exchange (ETDEWEB)

    Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-30

    This report sketches the work of the Group to deliver first-principle Monte Carlo methods, production quality codes, and radiation transport-based computational and experimental assessments using the codes MCNP and MCATK for such applications as criticality safety, non-proliferation, nuclear energy, nuclear threat reduction and response, radiation detection and measurement, radiation health protection, and stockpile stewardship.

  6. A New Video Coding Method Based on Improving Detail Regions

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The Moving Pictures Expert Group (MPEG) and H.263 standard coding methods are widely used in video compression. However, the visual quality of detail regions such as the eyes and mouth at the decoder does not satisfy viewers, as far as conference telephony or videophony is concerned. A new coding method based on improving detail regions is presented in this paper. Experimental results show that this method can improve the visual quality at the decoder.

  7. PhyloCSF: a comparative genomics method to distinguish protein coding and non-coding regions.

    Science.gov (United States)

    Lin, Michael F; Jungreis, Irwin; Kellis, Manolis

    2011-07-01

    As high-throughput transcriptome sequencing provides evidence for novel transcripts in many species, there is a renewed need for accurate methods to classify small genomic regions as protein coding or non-coding. We present PhyloCSF, a novel comparative genomics method that analyzes a multispecies nucleotide sequence alignment to determine whether it is likely to represent a conserved protein-coding region, based on a formal statistical comparison of phylogenetic codon models. We show that PhyloCSF's classification performance in 12-species Drosophila genome alignments exceeds all other methods we compared in a previous study. We anticipate that this method will be widely applicable as the transcriptomes of many additional species, tissues and subcellular compartments are sequenced, particularly in the context of ENCODE and modENCODE, and as interest grows in long non-coding RNAs, often initially recognized by their lack of protein-coding potential rather than conserved RNA secondary structures. The Objective Caml source code and executables for GNU/Linux and Mac OS X are freely available at http://compbio.mit.edu/PhyloCSF. CONTACT: mlin@mit.edu; manoli@mit.edu.

  8. Lossless Compression Method for Medical Image Sequences Using Super-Spatial Structure Prediction and Inter-frame Coding

    Directory of Open Access Journals (Sweden)

    Mudassar Raza

    2012-08-01

    Full Text Available Space research organizations, hospitals and military air surveillance activities, among others, produce a huge amount of data in the form of images, hence a large storage space is required to record this information. In hospitals, the data produced during medical examination take the form of sequences of images that are highly correlated; because these images have great importance, some kind of lossless image compression technique is needed. Moreover, these images often have to be transmitted over the network. Since the availability of storage and bandwidth is limited, a compression technique is required to reduce the number of bits needed to store these images and the time needed to transmit them. For this purpose there are many state-of-the-art lossless image compression algorithms, such as CALIC, LOCO-I, JPEG-LS and JPEG2000; nevertheless, these algorithms compress only a single file at a time and cannot exploit the correlation among the frames of MRI or CE image sequences. To exploit this correlation, a new algorithm is proposed in this paper. The primary goals of the proposed compression method are to minimize the memory resources needed to store the compressed data as well as the bandwidth required to transmit it. To achieve these goals, the proposed method combines a single-image compression technique, super-spatial structure prediction, with inter-frame coding to obtain a greater compression ratio. An efficient compression method requires elimination of data redundancy; therefore, the super-spatial structure prediction algorithm is first applied with a fast block-matching approach, and Huffman coding is later applied to reduce the number of bits required for transmitting and storing each pixel value. Also, to speed up the block-matching process during motion estimation, the proposed method compares those blocks

  9. A modified phase-coding method for absolute phase retrieval

    Science.gov (United States)

    Xing, Y.; Quan, C.; Tay, C. J.

    2016-12-01

    The fringe projection technique is one of the most robust tools for three-dimensional (3D) shape measurement. Various fringe projection methods have been proposed to address different issues in profilometry, and phase coding is one such technique, employed to determine fringe orders for absolute phase retrieval. However, this method is prone to fringe-order errors when dealing with high-frequency fringes. This paper studies the phase error introduced by system non-linearity in phase coding and provides a mathematical model for the maximum number of achievable codewords in a given scheme. In addition, a modified phase-coding method is proposed for phase error compensation. An experimental study validates the theoretical analysis of the maximum number of achievable codewords, and the performance of the modified phase-coding method is also illustrated.

  10. Direct GPS P-Code Acquisition Method Based on FFT

    Institute of Scientific and Technical Information of China (English)

    LI Hong; LU Mingquan; FENG Zhenming

    2008-01-01

    Recently, direct acquisition of the GPS P-code has received considerable attention as a way to enhance the anti-jamming and anti-spoofing capabilities of GPS receivers. This paper describes a P-code acquisition method that uses block searches with large-scale FFTs to search code phases and carrier frequency offsets in parallel. To limit memory use, especially when implemented in hardware, only the largest correlation result, with its position information, is preserved after searching a block of resolution cells in both the time and frequency domains. A second search is used to solve the code phase slip problem induced by the code frequency offset. Simulation results demonstrate that the probability of detection is above 0.99 for carrier-to-noise density ratios in excess of 40 dB-Hz when the predetection integration time is 0.8 ms and 6 non-coherent integrations are used in the analysis.

  11. The optimal code searching method with an improved criterion of coded exposure for remote sensing image restoration

    Science.gov (United States)

    He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2015-03-01

    Coded exposure photography makes motion de-blurring a well-posed problem. The integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The search method for the optimal code is significant for coded exposure. In this paper, an improved criterion for the optimal code search is proposed by analyzing the relationship between the code length and the number of ones in the code, and by considering the effect of noise on code selection using an affine noise model. The optimal code is then obtained using a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time consumed in searching for the optimal code decreases with the presented method, and the restored image has better subjective quality and superior objective evaluation values.

  12. A GPU code for analytic continuation through a sampling method

    Science.gov (United States)

    Nordström, Johan; Schött, Johan; Locht, Inka L. M.; Di Marco, Igor

    We here present a code for performing analytic continuation of fermionic Green's functions and self-energies as well as bosonic susceptibilities on a graphics processing unit (GPU). The code is based on the sampling method introduced by Mishchenko et al. (2000), and is written for the widely used CUDA platform from NVidia. Detailed scaling tests are presented, for two different GPUs, in order to highlight the advantages of this code with respect to standard CPU computations. Finally, as an example of possible applications, we provide the analytic continuation of model Gaussian functions, as well as more realistic test cases from many-body physics.

  13. Response to the Critique of the Huffman (2014) Article, "Reading Rate Gains during a One-Semester Extensive Reading Course"

    Science.gov (United States)

    Huffman, Jeffrey

    2016-01-01

    In his critique of the Huffman (2014) article, McLean (2016) undertakes an important reflective exercise that is too often missing in the field of second language acquisition and in the social sciences in general: questioning whether the claims made by researchers are warranted by their results. In this article, Jeffrey Huffman says that McLean…

  14. 2D arc-PIC code description: methods and documentation

    CERN Document Server

    Timko, Helga

    2011-01-01

    Vacuum discharges are one of the main limiting factors for future linear collider designs such as that of the Compact LInear Collider. To optimize machine efficiency, maintaining the highest feasible accelerating gradient below a certain breakdown rate is desirable; understanding breakdowns can therefore help us to achieve this goal. As a part of ongoing theoretical research on vacuum discharges at the Helsinki Institute of Physics, the build-up of plasma can be investigated through the particle-in-cell method. For this purpose, we have developed the 2D Arc-PIC code introduced here. We present an exhaustive description of the 2D Arc-PIC code in two parts. In the first part, we introduce the particle-in-cell method in general and detail the techniques used in the code. In the second part, we provide a documentation and derivation of the key equations occurring in the code. The code is original work of the author, written in 2010, and is therefore under the copyright of the author. The development of the code h...

  15. DETERMINISTIC TRANSPORT METHODS AND CODES AT LOS ALAMOS

    Energy Technology Data Exchange (ETDEWEB)

    J. E. MOREL

    1999-06-01

    The purposes of this paper are to: Present a brief history of deterministic transport methods development at Los Alamos National Laboratory from the 1950's to the present; Discuss the current status and capabilities of deterministic transport codes at Los Alamos; and Discuss future transport needs and possible future research directions. Our discussion of methods research necessarily includes only a small fraction of the total research actually done. The works that have been included represent a very subjective choice on the part of the author that was strongly influenced by his personal knowledge and experience. The remainder of this paper is organized in four sections: the first relates to deterministic methods research performed at Los Alamos, the second relates to production codes developed at Los Alamos, the third relates to the current status of transport codes at Los Alamos, and the fourth relates to future research directions at Los Alamos.

  16. Design and implementation of static Huffman encoding hardware using a parallel shifting algorithm

    CERN Document Server

    Tae Yeon Lee

    2004-01-01

    This paper discusses the implementation of static Huffman encoding hardware for real-time lossless compression for the electromagnetic calorimeter in the CMS experiment. The construction of the Huffman encoding hardware illustrates the implementation for optimizing the logic size. The number of logic gates in the parallel shift operation required for the hardware was examined. The experiment with a simulated environment and an FPGA shows that the real-time constraint has been fulfilled and the design of the buffer length is appropriate. (16 refs).

  17. Methods and computer codes for nuclear systems calculations

    Indian Academy of Sciences (India)

    B P Kochurov; A P Knyazev; A Yu Kwaretzkheli

    2007-02-01

    Some numerical methods for reactor cell, sub-critical systems and 3D models of nuclear reactors are presented. The methods are developed for steady states and space–time calculations. Computer code TRIFON solves space-energy problem in (, ) systems of finite height and calculates heterogeneous few-group matrix parameters of reactor cells. These parameters are used as input data in the computer code SHERHAN solving the 3D heterogeneous reactor equation for steady states and 3D space–time neutron processes simulation. Modification of TRIFON was developed for the simulation of space–time processes in sub-critical systems with external sources. An option of SHERHAN code for the system with external sources is under development.

  18. A Method of Coding and Decoding in Underwater Image Transmission

    Institute of Scientific and Technical Information of China (English)

    程恩

    2001-01-01

    A new method of coding and decoding for an underwater image transmission system is introduced, including a rapid digital frequency synthesizer for multiple frequency-shift keying, an image data generator, an image grayscale decoder with an intelligent fuzzy algorithm, and image restoration and display on a microcomputer.

  19. P-code enhanced method for processing encrypted GPS signals without knowledge of the encryption code

    Science.gov (United States)

    Meehan, Thomas K. (Inventor); Thomas, Jr., Jess Brooks (Inventor); Young, Lawrence E. (Inventor)

    2000-01-01

    In the preferred embodiment, an encrypted GPS signal is down-converted from RF to baseband to generate two quadrature components for each RF signal (L1 and L2). Separately and independently for each RF signal and each quadrature component, the four down-converted signals are counter-rotated with a respective model phase, correlated with a respective model P code, and then successively summed and dumped over presum intervals substantially coincident with chips of the respective encryption code. Without knowledge of the encryption-code signs, the effect of encryption-code sign flips is then substantially reduced by selected combinations of the resulting presums between associated quadrature components for each RF signal, separately and independently for the L1 and L2 signals. The resulting combined presums are then summed and dumped over longer intervals and further processed to extract amplitude, phase and delay for each RF signal. Precision of the resulting phase and delay values is approximately four times better than that obtained from straight cross-correlation of L1 and L2. This improved method provides the following options: separate and independent tracking of the L1-Y and L2-Y channels; separate and independent measurement of amplitude, phase and delay for the L1-Y channel; and removal of the half-cycle ambiguity in L1-Y and L2-Y carrier phase.

  20. Research on coding and decoding method for digital levels.

    Science.gov (United States)

    Tu, Li-fen; Zhong, Si-dong

    2011-01-20

    A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in the digital image signal, the trade-off whereby field of view and image resolution restrict each other in digital level measurement is overcome, making geodetic leveling easier. The experimental results demonstrate that the measurement uncertainty is 1 mm over a measuring range of 2 m to 100 m, which meets practical needs.

  1. A Theoretical Method for Estimating Performance of Reed-Solomon Codes Concatenated with Orthogonal Space-Time Block Codes

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Based on studies of Reed-Solomon codes and orthogonal space-time block codes over the Rayleigh fading channel, a theoretical method for estimating the performance of Reed-Solomon codes concatenated with orthogonal space-time block codes is presented in this paper, and an upper bound on the bit error rate is obtained. Computer simulations show that the required signal-to-noise ratio is reduced by about 15 dB or more when orthogonal space-time block codes are concatenated with Reed-Solomon (15,6) codes over the Rayleigh fading channel, at a bit error rate of 10^-4.

  2. Source-channel optimized trellis codes for bitonal image transmission over AWGN channels.

    Science.gov (United States)

    Kroll, J M; Phamdo, N

    1999-01-01

    We consider the design of trellis codes for transmission of binary images over additive white Gaussian noise (AWGN) channels. We first model the image as a binary asymmetric Markov source (BAMS) and then design source-channel optimized (SCO) trellis codes for the BAMS and AWGN channel. The SCO codes are shown to be superior to Ungerboeck's codes by approximately 1.1 dB (64-state code, 10^-5 bit error probability). We also show that a simple "mapping conversion" method can be used to improve the performance of Ungerboeck's codes by approximately 0.4 dB (also 64-state code and 10^-5 bit error probability). We compare the proposed SCO system with a traditional tandem system consisting of a Huffman code, a convolutional code, an interleaver, and an Ungerboeck trellis code. The SCO system significantly outperforms the tandem system. Finally, using a facsimile image, we compare the image quality of an SCO code, an Ungerboeck code, and the tandem code. The SCO code yields the best reconstructed image quality at 4-5 dB channel SNR.

  3. Rate-adaptive Constellation Shaping for Near-capacity Achieving Turbo Coded BICM

    DEFF Research Database (Denmark)

    Yankov, Metodi Plamenov; Forchhammer, Søren; Larsen, Knud J.

    2014-01-01

    In this paper the problem of constellation shaping is considered. Mapping functions are designed for a many-to-one signal shaping strategy, combined with a turbo coded Bit-interleaved Coded Modulation (BICM), based on symmetric Huffman codes with binary reflected Gray-like properties. An algorit...

  4. CNC LATHE MACHINE PRODUCING NC CODE BY USING DIALOG METHOD

    Directory of Open Access Journals (Sweden)

    Yakup TURGUT

    2004-03-01

    In this study, an NC code generation program utilising the Dialog Method was developed for turning centres. Initially, CNC lathe turning methods and tool path development techniques were reviewed briefly. By using geometric definition methods, the tool path was generated and a CNC part program was developed for a FANUC control unit. The developed program makes the CNC part program generation process easy. The program was developed using the BASIC 6.0 programming language, while the material and cutting tool databases were supported with the help of ACCESS 7.0.

  5. Quantization Skipping Method for H.264/AVC Video Coding

    Institute of Scientific and Technical Information of China (English)

    Won-seon SONG; Min-cheol HONG

    2010-01-01

    This paper presents a quantization skipping method for the H.264/AVC video coding standard. In order to reduce the computational cost of the quantization process arising from the integer discrete cosine transform in H.264/AVC, a quantization skipping condition is derived from an analysis of the integer transform and quantization procedures. The experimental results show that the proposed algorithm can reduce the computational cost by about 10-25%.

  6. A coded VEP method to measure interhemispheric transfer time (IHTT).

    Science.gov (United States)

    Li, Yun; Bin, Guangyu; Hong, Bo; Gao, Xiaorong

    2010-03-19

    Interhemispheric transfer time (IHTT) is an important parameter in research on the information conduction time across the corpus callosum between the two hemispheres. Several traditional methods are used to estimate the IHTT, including the reaction time (RT) method, the evoked potential (EP) method, and measures based on transcranial magnetic stimulation (TMS). The present study proposes a novel coded VEP method to estimate the IHTT based on specific properties of the m-sequence: good signal-to-noise ratio (SNR) and high noise tolerance. Additionally, the circular cross-correlation function is sensitive to the phase difference. The method presented in this paper estimates the IHTT by using an m-sequence to encode the visual stimulus, and compares the results with the traditional flash VEP method. With the phase difference of the two responses calculated using the circular cross-correlation technique, the coded VEP method obtains IHTT estimates without requiring the selection of a particular response component.
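
    The core operation here, estimating a lag from the circular cross-correlation of two m-sequence-driven responses, is compact enough to sketch. A minimal Python/NumPy illustration follows; the synthetic signals, noise level and 3-sample lag are illustrative stand-ins, not the paper's stimulus parameters.

      import numpy as np

      def circular_xcorr(x, y):
          # c[k] = sum_n x[n] * y[(n + k) mod N], computed via the FFT
          return np.real(np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(y)))

      rng = np.random.default_rng(0)
      m_seq = rng.choice([-1.0, 1.0], size=127)   # stand-in for a true m-sequence
      left = m_seq + 0.1 * rng.standard_normal(127)
      right = np.roll(m_seq, 3) + 0.1 * rng.standard_normal(127)  # lagged response

      lag = int(np.argmax(circular_xcorr(left, right)))
      print("estimated interhemispheric lag (samples):", lag)     # expected: 3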

  7. Optimal source codes for geometrically distributed integer alphabets

    Science.gov (United States)

    Gallager, R. G.; Van Voorhis, D. C.

    1975-01-01

    An approach is shown for using the Huffman algorithm indirectly to prove the optimality of a code for an infinite alphabet if an estimate concerning the nature of the code can be made. Attention is given to nonnegative integers with a geometric probability assignment. The particular distribution considered arises in run-length coding and in encoding protocol information in data networks. Questions of redundancy of the optimal code are also investigated.
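
    The optimal codes in question are the Golomb codes; a minimal encoding sketch in Python follows. The parameter m = 4 is an illustrative choice here, whereas the paper derives the optimal parameter from the ratio of the geometric distribution.

      def golomb_encode(n, m):
          # unary quotient, then a truncated-binary remainder
          q, r = divmod(n, m)
          code = "1" * q + "0"
          b = m.bit_length() - 1          # floor(log2 m)
          if m & (m - 1) == 0:            # m a power of two: plain b-bit remainder
              return code + format(r, f"0{b}b")
          cutoff = (1 << (b + 1)) - m     # truncated binary code for general m
          if r < cutoff:
              return code + format(r, f"0{b}b")
          return code + format(r + cutoff, f"0{b + 1}b")

      for n in range(8):
          print(n, golomb_encode(n, 4))   # 0 -> 000, 4 -> 1000, ...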

  8. Sparse coding based feature representation method for remote sensing images

    Science.gov (United States)

    Oguslu, Ender

    In this dissertation, we study sparse coding based feature representation method for the classification of multispectral and hyperspectral images (HSI). The existing feature representation systems based on the sparse signal model are computationally expensive, requiring to solve a convex optimization problem to learn a dictionary. A sparse coding feature representation framework for the classification of HSI is presented that alleviates the complexity of sparse coding through sub-band construction, dictionary learning, and encoding steps. In the framework, we construct the dictionary based upon the extracted sub-bands from the spectral representation of a pixel. In the encoding step, we utilize a soft threshold function to obtain sparse feature representations for HSI. Experimental results showed that a randomly selected dictionary could be as effective as a dictionary learned from optimization. The new representation usually has a very high dimensionality requiring a lot of computational resources. In addition, the spatial information of the HSI data has not been included in the representation. Thus, we modify the framework by incorporating the spatial information of the HSI pixels and reducing the dimension of the new sparse representations. The enhanced model, called sparse coding based dense feature representation (SC-DFR), is integrated with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) classifiers to discriminate different types of land cover. We evaluated the proposed algorithm on three well known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit (SOMP) and image fusion and recursive filtering (IFRF). The results from the experiments showed that the proposed method can achieve better overall and average classification accuracies with a much more compact representation leading to more efficient sparse models for HSI classification. To further
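
    The soft-threshold encoding step mentioned above is simple enough to sketch. In the following Python fragment the random dictionary, its dimensions and the threshold are illustrative assumptions, echoing the observation that a randomly selected dictionary can be as effective as a learned one.

      import numpy as np

      def soft_threshold(x, t):
          # elementwise shrinkage: sign(x) * max(|x| - t, 0)
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      rng = np.random.default_rng(1)
      D = rng.standard_normal((32, 64))      # random dictionary: 64 atoms in R^32
      D /= np.linalg.norm(D, axis=0)         # unit-norm atoms
      pixel = rng.standard_normal(32)        # stand-in for a spectral pixel
      feature = soft_threshold(D.T @ pixel, t=1.0)
      print("nonzero coefficients:", np.count_nonzero(feature), "of", feature.size)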

  9. Reserved-Length Prefix Coding

    CERN Document Server

    Baer, Michael B

    2008-01-01

    Huffman coding finds an optimal prefix code for a given probability mass function. Consider situations in which one wishes to find an optimal code with the restriction that all codewords have lengths that lie in a user-specified set of lengths (or, equivalently, no codewords have lengths that lie in a complementary set). This paper introduces a polynomial-time dynamic programming algorithm that finds optimal codes for this reserved-length prefix coding problem. This has applications to quickly encoding and decoding lossless codes. In addition, one modification of the approach solves any quasiarithmetic prefix coding problem, while another finds optimal codes restricted to the set of codes with g codeword lengths for user-specified g (e.g., g=2).
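
    For intuition, the reserved-length problem can be brute-forced on tiny inputs: choose every codeword length from the allowed set, keep only length vectors satisfying the Kraft inequality, and minimize expected length. The Python sketch below is exhaustive and only stands in for the paper's polynomial-time dynamic program; the probabilities and allowed set are illustrative.

      from itertools import combinations_with_replacement

      def best_reserved_lengths(probs, allowed):
          probs = sorted(probs, reverse=True)
          best = (float("inf"), ())
          for lengths in combinations_with_replacement(sorted(allowed), len(probs)):
              if sum(2 ** -l for l in lengths) <= 1:    # Kraft: a prefix code exists
                  avg = sum(p * l for p, l in zip(probs, lengths))
                  best = min(best, (avg, lengths))
          return best

      # codeword lengths restricted to the set {2, 4}
      print(best_reserved_lengths([0.4, 0.25, 0.2, 0.1, 0.05], {2, 4}))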

  10. PIPI: PTM-Invariant Peptide Identification Using Coding Method.

    Science.gov (United States)

    Yu, Fengchao; Li, Ning; Yu, Weichuan

    2016-12-02

    In computational proteomics, the identification of peptides with an unlimited number of post-translational modification (PTM) types is a challenging task. The computational cost associated with database search increases exponentially with respect to the number of modified amino acids and linearly with respect to the number of potential PTM types at each amino acid. The problem becomes intractable very quickly if we want to enumerate all possible PTM patterns. To address this issue, one group of methods named restricted tools (including Mascot, Comet, and MS-GF+) only allow a small number of PTM types in database search process. Alternatively, the other group of methods named unrestricted tools (including MS-Alignment, ProteinProspector, and MODa) avoids enumerating PTM patterns with an alignment-based approach to localizing and characterizing modified amino acids. However, because of the large search space and PTM localization issue, the sensitivity of these unrestricted tools is low. This paper proposes a novel method named PIPI to achieve PTM-invariant peptide identification. PIPI belongs to the category of unrestricted tools. It first codes peptide sequences into Boolean vectors and codes experimental spectra into real-valued vectors. For each coded spectrum, it then searches the coded sequence database to find the top scored peptide sequences as candidates. After that, PIPI uses dynamic programming to localize and characterize modified amino acids in each candidate. We used simulation experiments and real data experiments to evaluate the performance in comparison with restricted tools (i.e., Mascot, Comet, and MS-GF+) and unrestricted tools (i.e., Mascot with error tolerant search, MS-Alignment, ProteinProspector, and MODa). Comparison with restricted tools shows that PIPI has a close sensitivity and running speed. Comparison with unrestricted tools shows that PIPI has the highest sensitivity except for Mascot with error tolerant search and Protein

  11. High performance word level sequential and parallel coding methods and architectures for bit plane coding

    Institute of Scientific and Technical Information of China (English)

    XIONG ChengYi; TIAN JinWen; LIU Jian

    2008-01-01

    This paper introduces a novel high-performance algorithm and VLSI architectures for achieving bit plane coding (BPC) in word-level sequential and parallel modes. The proposed BPC algorithm adopts coding pass prediction and parallel & pipeline techniques to reduce the number of memory accesses and to increase the system's capacity for concurrent processing, so that all the coefficient bits of a code block can be coded in only one scan. A new parallel bit plane architecture (PA) is proposed to achieve word-level sequential coding. Moreover, an efficient high-speed architecture (HA) is presented to achieve multi-word parallel coding. Compared to the state of the art, the proposed PA reduces hardware cost more efficiently, while the throughput remains one coefficient coded per clock. The proposed HA can code 4 coefficients belonging to a stripe column in one intra-clock cycle, so that coding an N×N code block can be completed in approximately N^2/4 intra-clock cycles. Theoretical analysis and experimental results demonstrate that the proposed designs have high throughput with good speedup-to-cost ratios, making them good alternatives for low-power applications.

  12. Analysis of a Huffman and S-DES Mixed Encryption Algorithm

    Institute of Scientific and Technical Information of China (English)

    郑静; 王腾

    2014-01-01

    In contrast to existing encryption software and the common algorithms of classical cryptography, and considering the present state and development trends of text encryption, this paper combines dynamic Huffman coding with the S-DES algorithm so that each compensates for the other's shortcomings, achieving better encryption and decryption of text information.

  13. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which the local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  14. A CLASS OF LDPC CODE'S CONSTRUCTION BASED ON AN ITERATIVE RANDOM METHOD

    Institute of Scientific and Technical Information of China (English)

    Huang Zhonghu; Shen Lianfeng

    2006-01-01

    This letter gives a random construction for Low Density Parity Check (LDPC) codes, which uses an iterative algorithm to avoid short cycles in the Tanner graph. The construction method offers great flexibility in the choice of LDPC code parameters, including code length, code rate, the minimum girth of the graph, and the column and row weights of the parity-check matrix. The method can be applied to both irregular and strictly regular LDPC codes. Since systematic codes have many applications in digital communication, this letter also proposes a construction of the generator matrix of systematic LDPC codes from the parity-check matrix. Simulations show that the method performs well with iterative decoding.

  15. A robust fusion method for multiview distributed video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina;

    2014-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the redundancy of the source (video) at the decoder side, as opposed to predictive coding, where the encoder leverages the redundancy. To exploit the correlation between views, multiview predictive video codecs require the encoder...

  16. Coupling of partitioned physics codes with quasi-Newton methods

    CSIR Research Space (South Africa)

    Haelterman, R

    2017-03-01

    Full Text Available Many physics problems can only be studied by coupling various numerical codes, each modeling a subaspect of the physics problem that is addressed. Often, each of these codes needs to be considered as a black box, either because the codes were...

  17. Improved Fast Fourier Transform Based Method for Code Accuracy Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Tae Wook; Jeong, Jae Jun [Pusan National University, Busan (Korea, Republic of); Choi, Ki Yong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    Among the methods for code accuracy quantification, the fast Fourier transform based method (FFTBM) introduced in 1990 has been widely used to evaluate code uncertainty or accuracy. Prosek et al. (2008) identified its drawbacks, the so-called 'edge effect'. To overcome these problems, an improved FFTBM by signal mirroring (FFTBM-SM) was proposed and has been used up to now. In spite of the improvement, the FFTBM-SM yields different accuracy depending on the frequency components of a parameter, such as pressure, temperature and mass flow rate. It is therefore necessary to reduce the frequency dependence of the FFTBMs. In this study, the limitations of the present FFTBMs were analyzed: the FFTBM produces quantitatively different results due to its frequency dependence, and the problem is intensified when many high-frequency components are included. A new method using a reduced cut-off frequency is therefore proposed and its capability is discussed. The results show that the shortcomings of the FFTBM are considerably relieved.

  18. Proposed Arabic Text Steganography Method Based on New Coding Technique

    Directory of Open Access Journals (Sweden)

    Assist. prof. Dr. Suhad M. Kadhem

    2016-09-01

    Steganography is one of the important fields of information security that depends on hiding secret information in a cover medium (video, image, audio, text) such that an unauthorized person fails to realize its existence. Run-length encoding (RLE) is a lossless data compression technique for files that contain much redundant data. Sometimes the RLE output is expanded rather than compressed, and this is the main problem of RLE. In this paper we use a new coding method whose output contains sequences of ones with few zeros, so that the modified RLE proposed here is suitable for compression. Finally, we employ the modified RLE output for steganography, based on Unicode and non-printing characters, to hide the secret information in an Arabic text.
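
    Classical RLE, the starting point of the record above, is easy to sketch, and the second example shows the expansion problem just mentioned. The Python fragment and its input strings are illustrative only.

      def rle_encode(data):
          # encode a string as (count, symbol) pairs
          out, i = [], 0
          while i < len(data):
              j = i
              while j < len(data) and data[j] == data[i]:
                  j += 1
              out.append((j - i, data[i]))
              i = j
          return out

      def rle_decode(pairs):
          return "".join(sym * count for count, sym in pairs)

      print(rle_encode("aaaabbbcc"))   # [(4, 'a'), (3, 'b'), (2, 'c')] -- compresses
      print(rle_encode("abcabc"))      # six pairs for six symbols -- expands
      assert rle_decode(rle_encode("aaaabbbcc")) == "aaaabbbcc"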

  19. Compressing industrial computed tomography images by means of contour coding

    Science.gov (United States)

    Jiang, Haina; Zeng, Li

    2013-10-01

    An improved method for compressing industrial computed tomography (CT) images is presented. To achieve higher resolution and precision, the amount of industrial CT data has become larger and larger. Considering that industrial CT images are approximately piecewise constant, we develop a compression method based on contour coding. The traditional contour-based method for compressing gray images usually needs two steps, contour extraction and then compression, which hurts compression efficiency. We therefore merge the Freeman encoding idea into an improved method for two-dimensional contour extraction (2-D-IMCE) to improve the compression efficiency. By exploiting continuity and logical linking, preliminary contour codes are obtained directly and simultaneously with the contour extraction. In this way, the two steps of the traditional contour-based compression method are simplified into only one. Finally, Huffman coding is employed to further losslessly compress the preliminary contour codes. Experimental results show that this method can obtain a good compression ratio while keeping satisfactory quality in the compressed images.

  20. Method for Viterbi decoding of large constraint length convolutional codes

    Science.gov (United States)

    Hsu, In-Shek; Truong, Trieu-Kie; Reed, Irving S.; Jing, Sun

    1988-05-01

    A new method of Viterbi decoding of convolutional codes lends itself to a pipelined VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is an integer and K is the constraint length. The surviving path at the end of each NK interval is then selected from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the interval NK, and to read out the stored branch metrics of the selected path, which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message it is not necessary to provide a large memory to store the trellis-derived information until the end of the message; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
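
    A minimal software sketch of Viterbi decoding with traceback may make the idea concrete. The Python fragment below uses a small rate-1/2, constraint-length K=3 code (generators 7 and 5 octal) and, for simplicity, traces back over the whole zero-terminated message rather than in NK-length blocks as the record describes; all parameters are illustrative.

      G = (0b111, 0b101)                 # generator polynomials, K = 3

      def outputs(reg):
          return [bin(reg & g).count("1") & 1 for g in G]

      def conv_encode(bits):
          state, out = 0, []
          for b in list(bits) + [0, 0]:  # flush with K-1 zeros
              reg = (b << 2) | state
              out.extend(outputs(reg))
              state = reg >> 1
          return out

      def viterbi_decode(received, n_bits):
          INF = float("inf")
          metric, history = [0.0, INF, INF, INF], []
          for t in range(n_bits + 2):
              r = received[2 * t: 2 * t + 2]
              new_metric, step = [INF] * 4, [None] * 4
              for s in range(4):
                  if metric[s] == INF:
                      continue
                  for b in (0, 1):
                      reg = (b << 2) | s
                      ns = reg >> 1    # next state
                      m = metric[s] + sum(x != y for x, y in zip(r, outputs(reg)))
                      if m < new_metric[ns]:
                          new_metric[ns], step[ns] = m, (s, b)
              metric = new_metric
              history.append(step)
          s, bits = 0, []                # trace back from the all-zero state
          for step in reversed(history):
              prev, b = step[s]
              bits.append(b)
              s = prev
          bits.reverse()
          return bits[:n_bits]           # drop the flush bits

      msg = [1, 0, 1, 1, 0, 0, 1, 0]
      coded = conv_encode(msg)
      coded[3] ^= 1                      # inject one channel error
      assert viterbi_decode(coded, len(msg)) == msg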

  1. Modified Huffman Code and Its Applications

    Institute of Scientific and Technical Information of China (English)

    武善玉; 晏振鸣

    2009-01-01

    This paper discusses JPEG compression technology, focusing on the problem that the "shape" of the optimal binary tree in Huffman coding is not unique, and proposes a new method based on a "simplicity principle". With Huffman coding improved by this method, the Huffman code of each value or character in JPEG becomes unique. Compared with the traditional Huffman algorithm and the improved algorithms proposed in the recent literature, the coding steps and related operations of this method are simpler, which makes programs easier to implement and port. Finally, an example is given to demonstrate the practicality of the method.

  2. A Note on Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    林嘉宇; 刘荧

    2003-01-01

    Huffman coding is an important lossless compression method, widely used in data compression, audio coding and image coding. Besides compression efficiency, a Huffman code, being a variable-length code, can be judged by other criteria, such as code variance and resilience to channel errors. This paper discusses the probabilities of the 0 and 1 symbols (in the binary case) in the bit stream produced by Huffman coding. The results show that the classical Huffman code maximizes the difference between the probabilities of 0 and 1, and is therefore the worst under a probability-balance criterion. A rigorous mathematical model is built and an algorithm is given that makes the probabilities of 0 and 1 in the coded stream (tend to) equal; the algorithm can be combined with ordinary Huffman coding at very little extra computational cost. Experimental verification is provided.
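
    The balance criterion studied in the note reduces to the expected fraction of 1s in the coded stream, which is easy to compute from the symbol probabilities and codewords. In the Python sketch below the source and the two equal-length labelings are illustrative; it only shows that relabeling the tree branches changes the 0/1 balance without changing compression.

      def one_probability(codebook, probs):
          # E[number of 1s per symbol] / E[codeword length]
          ones = sum(p * cw.count("1") for cw, p in zip(codebook, probs))
          length = sum(p * len(cw) for cw, p in zip(codebook, probs))
          return ones / length

      probs = [0.6, 0.2, 0.1, 0.1]
      print(one_probability(["0", "10", "110", "111"], probs))  # 0.4375
      print(one_probability(["1", "01", "001", "000"], probs))  # 0.5625, same lengths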

  3. An Alternative Algorithm for Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    王敏; 刘洋

    2006-01-01

    Starting from the "original" construction of the Huffman tree and its coding algorithm, this paper analyzes the factors that affect the algorithm's performance and introduces canonical Huffman coding. To improve performance, the "original" algorithm is revised using the canonical Huffman coding rules, and a new algorithm is presented together with a worked example.
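
    Canonical Huffman assignment, which the record builds on, fits in a few lines: with the codeword lengths sorted, codes are consecutive binary integers, left-shifted whenever the length grows, so no explicit tree is needed. The length vector in this Python sketch is illustrative.

      def canonical_codes(lengths):
          code, prev_len, out = 0, 0, []
          for length in sorted(lengths):
              code <<= (length - prev_len)   # pad with zeros when the length grows
              out.append(format(code, f"0{length}b"))
              code += 1
              prev_len = length
          return out

      print(canonical_codes([2, 2, 3, 3, 3, 3]))
      # ['00', '01', '100', '101', '110', '111']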

  4. IMPLEMENTATION OF HUFFMAN CODES BY PERL PROGRAMMING

    Institute of Scientific and Technical Information of China (English)

    刘学军

    2006-01-01

    Perl is a powerful programming language, and Huffman coding is a common algorithm for file compression. This paper implements Huffman coding in Perl, describing the basic idea of the program and techniques for using Perl's data types. Based on the program's output, it briefly discusses and analyzes how the compression ratio achieved by the Huffman algorithm varies with the number of distinct characters and their frequencies.
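
    For reference, the classic heap-based construction that such a program implements looks roughly as follows (sketched in Python rather than Perl; the input string is illustrative).

      import heapq
      from collections import Counter

      def huffman_codes(freqs):
          # assumes at least two distinct symbols
          # heap items: (weight, tiebreak id, {symbol: partial codeword})
          heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
          heapq.heapify(heap)
          tiebreak = len(heap)
          while len(heap) > 1:
              w1, _, c1 = heapq.heappop(heap)
              w2, _, c2 = heapq.heappop(heap)
              merged = {s: "0" + c for s, c in c1.items()}
              merged.update({s: "1" + c for s, c in c2.items()})
              heapq.heappush(heap, (w1 + w2, tiebreak, merged))
              tiebreak += 1
          return heap[0][2]

      text = "this is an example of a huffman tree"
      codes = huffman_codes(Counter(text))
      encoded = "".join(codes[ch] for ch in text)
      print(len(encoded), "bits vs", 8 * len(text), "bits uncompressed")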

  5. Implementation of Huffman Code in Matlab

    Institute of Scientific and Technical Information of China (English)

    吴记群; 李双科

    2006-01-01

    C-style linked lists are simulated in Matlab, and complex-number arithmetic is used to tie each character to its probability. At each step the indices corresponding to the two characters of smallest probability are found and recorded in turn, and the Huffman code is finally obtained from the parity of the recorded codes. The algorithm is novel, easy to understand, and easy to program.

  6. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Directory of Open Access Journals (Sweden)

    Ai-bing Zhang

    Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also outperformed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95% CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95% CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.

  7. Improved Methods For Generating Quasi-Gray Codes

    CERN Document Server

    Jansens, Dana; Carmi, Paz; Maheshwari, Anil; Morin, Pat; Smid, Michiel

    2010-01-01

    Consider a sequence of bit strings of length d, such that each string differs from the next in a constant number of bits. We call this sequence a quasi-Gray code. We examine the problem of efficiently generating such codes, by considering the number of bits read and written at each generating step, the average number of bits read while generating the entire code, and the number of strings generated in the code. Our results give a trade-off between these constraints, and present algorithms that do less work on average than previous results, and that increase the number of bit strings generated.
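
    The baseline these constructions relax is the binary reflected Gray code, in which consecutive strings differ in exactly one bit (quasi-Gray codes allow a small constant instead). A minimal Python sketch, with d = 3 as an illustrative width:

      def gray(i):
          return i ^ (i >> 1)

      d = 3
      codes = [format(gray(i), f"0{d}b") for i in range(2 ** d)]
      print(codes)  # ['000', '001', '011', '010', '110', '111', '101', '100']
      assert all(bin(gray(i) ^ gray(i + 1)).count("1") == 1
                 for i in range(2 ** d - 1))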

  8. An Image Encryption Algorithm Using Chaos-based Weight Variation of a Huffman Tree

    Institute of Scientific and Technical Information of China (English)

    龙敏; 谭丽

    2011-01-01

    Using chaos-based weight variation of a Huffman tree, an image/video encryption algorithm is proposed in this paper. In the entropy coding process, DC coefficients are encrypted by weight variation of the Huffman tree driven by a double Logistic chaotic sequence, and AC coefficients are encrypted via the indexes of the codewords. The security, complexity and compression ratio of the algorithm are analyzed. Simulation results show that the algorithm has essentially no impact on compression efficiency, has low complexity, high security and good real-time performance, and is therefore suitable for real-time image services on networks.

  9. RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-08-01

    The RELAP5 code has been developed for best estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.

  10. Modern Method for Detecting Web Phishing Using Visual Cryptography (VC and Quick Response Code (QR code

    Directory of Open Access Journals (Sweden)

    Ms. Ashvini Kute

    2015-01-01

    Phishing is an attempt by an individual or a group to steal personal confidential information such as passwords and credit card information from unsuspecting victims for identity theft, financial gain and other fraudulent activities. Here an image-based (QR code) authentication using Visual Cryptography (VC) is used. Visual cryptography is explored to convert the QR code into two shares, which can then be transmitted separately. One-Time Passwords (OTPs) are passwords valid only for a single session, validating the user within a specified amount of time. In this paper we present a new authentication scheme for secure OTP distribution in phishing website detection through VC and QR codes.

  12. Modification of codes NUALGAM and BREMRAD. Volume 3: Statistical considerations of the Monte Carlo method

    Science.gov (United States)

    Firstenberg, H.

    1971-01-01

    The statistics of the Monte Carlo method are considered relative to the interpretation of the NUGAM2 and NUGAM3 computer code results. A numerical experiment using the NUGAM2 code is presented and the results are statistically interpreted.

  13. An Efficient Huffman Coding Algorithm Without Constructing a Huffman Tree (NHTC)

    Institute of Scientific and Technical Information of China (English)

    李伟生; 李域; 王涛

    2005-01-01

    Huffman coding, as an efficient variable-length coding technique, is used ever more widely in text, image and video compression, in communications, and in cryptography. To use memory more efficiently and to simplify the coding steps and related operations, this paper first studies the information needed to rebuild a Huffman tree and proposes a way to obtain it by operating on a one-dimensional structure array. Using this information, together with the coding properties of the canonical Huffman tree introduced here, the Huffman codes can then be obtained directly. Compared with the traditional Huffman algorithm and the improved algorithms proposed in the recent literature, this method does not construct a Huffman tree at all, which greatly reduces memory requirements and simplifies the coding steps, making programs easier to implement and port. More importantly, this approach offers a new direction for research on the Huffman algorithm.

  14. How to Construct a Unique Huffman Tree and Unique Huffman Codes

    Institute of Scientific and Technical Information of China (English)

    王森

    2003-01-01

    This paper discusses how, in certain special situations, to construct a Huffman tree that is unique, and how to derive Huffman codes from this unique tree so that each codeword represents a unique unit of information.

  15. A Global-Scale Image Lossless Compression Method Based on QTM Pixels

    Institute of Scientific and Technical Information of China (English)

    SUN Wen-bin; ZHAO Xue-sheng

    2006-01-01

    In this paper, a new predictive model adapted to QTM (Quaternary Triangular Mesh) pixel compression is introduced. Our approach starts from the principles of the proposed predictive models based on available QTM neighbor pixels, and an algorithm for ascertaining the available QTM neighbors is also proposed. Then, a method for reducing the space complexity of predicting QTM pixel values is presented, followed by the structure for storing compressed QTM pixels. Finally, an experiment comparing the compression ratio of this method with other methods is carried out using three wave bands of 1-km-resolution NOAA images of China. The results indicate that: 1) the compression method performs better than alternatives such as Run Length Coding, Arithmetic Coding and Huffman Coding; 2) the average size of the compressed three-band data based on the neighbor QTM pixel predictive model is 31.58% of the original space requirements and 67.5% of that of Arithmetic Coding without the predictive model.

  16. Joint space-time Huffman limited feedback precoding for spatially and temporally correlated MIMO channels

    Institute of Scientific and Technical Information of China (English)

    居美艳; 葛欣; 李岳衡; 谭国平

    2013-01-01

    For MIMO channels with spatial and temporal correlation, a novel joint space-time Huffman limited feedback precoding scheme is proposed which improves system performance and reduces the amount of feedback. Based on the spatial correlation, the precoding structure under the zero-forcing (ZF) criterion is derived and a rotating quantization codebook is designed, which reduces the effect of spatial correlation on system performance. In addition, in view of the temporal correlation of the channels, the scheme reduces the feedback of channel state information (CSI) in slow fading channels by using neighborhood-based limited feedback. Since the codewords in the neighborhood are selected with different probabilities, Huffman coding is adopted to further reduce the amount of feedback.

  17. An Efficient Method for Verifying Gyrokinetic Microstability Codes

    Science.gov (United States)

    Bravenec, R.; Candy, J.; Dorland, W.; Holland, C.

    2009-11-01

    Benchmarks for gyrokinetic microstability codes can be developed through successful "apples-to-apples" comparisons among them. Unlike previous efforts, we perform the comparisons for actual discharges, rendering the verification efforts relevant to existing experiments and future devices (ITER). The process requires i) assembling the experimental analyses at multiple times, radii, discharges, and devices, ii) creating the input files ensuring that the input parameters are faithfully translated code-to-code, iii) running the codes, and iv) comparing the results, all in an organized fashion. The purpose of this work is to automate this process as much as possible: At present, a python routine is used to generate and organize GYRO input files from TRANSP or ONETWO analyses. Another routine translates the GYRO input files into GS2 input files. (Translation software for other codes has not yet been written.) Other python codes submit the multiple GYRO and GS2 jobs, organize the results, and collect them into a table suitable for plotting. (These separate python routines could easily be consolidated.) An example of the process -- a linear comparison between GYRO and GS2 for a DIII-D discharge at multiple radii -- will be presented.

  18. Application of grammar-based codes for lossless compression of digital mammograms

    Science.gov (United States)

    Li, Xiaoli; Krishnan, Srithar; Ma, Ngok-Wah

    2006-01-01

    A newly developed grammar-based lossless source coding theory and its implementation was proposed in 1999 and 2000, respectively, by Yang and Kieffer. The code first transforms the original data sequence into an irreducible context-free grammar, which is then compressed using arithmetic coding. In the study of grammar-based coding for mammography applications, we encountered two issues: processing time and limited number of single-character grammar G variables. For the first issue, we discover a feature that can simplify the matching subsequence search in the irreducible grammar transform process. Using this discovery, an extended grammar code technique is proposed and the processing time of the grammar code can be significantly reduced. For the second issue, we propose to use double-character symbols to increase the number of grammar variables. Under the condition that all the G variables have the same probability of being used, our analysis shows that the double- and single-character approaches have the same compression rates. By using the methods proposed, we show that the grammar code can outperform three other schemes: Lempel-Ziv-Welch (LZW), arithmetic, and Huffman on compression ratio, and has similar error tolerance capabilities as LZW coding under similar circumstances.

  19. A code for hadrontherapy treatment planning with the voxelscan method.

    Science.gov (United States)

    Berga, S; Bourhaleb, F; Cirio, R; Derkaoui, J; Gallice, B; Hamal, M; Marchetto, F; Rolando, V; Viscomi, S

    2000-11-01

    A code for the implementation of treatment planning in hadrontherapy with an active scan beam is presented. The package can determine the fluence and energy of the beams for several thousand voxels in a few minutes. The performance of the program has been tested with a full simulation.

  20. P-adic arithmetic coding

    CERN Document Server

    Rodionov, Anatoly

    2007-01-01

    A new incremental algorithm for data compression is presented. For a sequence of input symbols, the algorithm incrementally constructs a p-adic integer as its output. The decoding process starts with the less significant part of the p-adic integer and incrementally reconstructs the sequence of input symbols. The algorithm is based on certain features of p-adic numbers and the p-adic norm. The p-adic coding algorithm may be considered a generalization of a popular compression technique, arithmetic coding. It is shown that for p = 2 the algorithm works as an integer variant of arithmetic coding; for a special class of models it gives exactly the same codes as Huffman's algorithm, and for another special model and a specific alphabet it gives Golomb-Rice codes.
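
    Classical arithmetic coding, which this record generalizes, narrows a subinterval of [0, 1) once per symbol and transmits any number inside the final interval. The following exact-arithmetic Python sketch uses an illustrative three-symbol model; practical coders renormalize with integer arithmetic instead of fractions.

      from fractions import Fraction

      MODEL = {"a": (Fraction(0), Fraction(1, 2)),     # cumulative intervals
               "b": (Fraction(1, 2), Fraction(3, 4)),
               "c": (Fraction(3, 4), Fraction(1))}

      def encode(msg):
          lo, hi = Fraction(0), Fraction(1)
          for sym in msg:
              s_lo, s_hi = MODEL[sym]
              lo, hi = lo + (hi - lo) * s_lo, lo + (hi - lo) * s_hi
          return (lo + hi) / 2           # any number inside the final interval

      def decode(x, n):
          out = []
          for _ in range(n):
              for sym, (s_lo, s_hi) in MODEL.items():
                  if s_lo <= x < s_hi:
                      out.append(sym)
                      x = (x - s_lo) / (s_hi - s_lo)
                      break
          return "".join(out)

      assert decode(encode("abcab"), 5) == "abcab"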

  1. TASS/SMR Code Topical Report for SMART Plant, Vol. I: Code Structure, System Models, and Solution Methods

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl (and others)

    2008-10-15

    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral type pressurized water reactor. It can be applied to the analysis of design basis accidents of the SMART plant, including non-LOCA and LOCA (loss-of-coolant accident) events. The TASS/SMR code can be applied to any plant regardless of the structural characteristics of the reactor, since the code solves the same governing equations for both the primary and secondary system. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, input processing, and the processes of steady state and transient calculations. In addition, basic differential equations, finite difference equations, state relationships, and constitutive models are described. First, the conservation equations, the discretization process for numerical analysis, and the search method for state relationships are described. Then, a core power model, heat transfer models, physical models for various components, and control and trip models are explained.

  2. Advancing methods for reliably assessing motivational interviewing fidelity using the motivational interviewing skills code.

    Science.gov (United States)

    Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W; Imel, Zac E; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C

    2015-02-01

    The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI.

  3. Modified symmetrical reversible variable length code and its theoretical bounds

    Science.gov (United States)

    Tsai, Chien-Wu; Wu, Ja-Ling; Liu, Shu-Wei

    2000-04-01

    The reversible variable length codes (RVLCs) have been adopted in the emerging video coding standards H.263+ and MPEG-4 to enhance their error-resilience capability, which is important and essential in error-prone environments. The most appealing advantage of symmetrical RVLCs compared with asymmetrical RVLCs is that only one code table is required for both forward and backward decoding, whereas asymmetrical RVLCs require two code tables. In this paper, we propose a simple and efficient algorithm that produces a symmetrical RVLC from a given Huffman code, and we also discuss theoretical bounds of the proposed symmetrical RVLCs.

  4. Determination of problematic ICD-9-CM subcategories for further study of coding performance: Delphi method.

    Science.gov (United States)

    Zeng, Xiaoming; Bell, Paul D

    2011-01-01

    In this study, we report on a qualitative method known as the Delphi method, used in the first part of a research study for improving the accuracy and reliability of ICD-9-CM coding. A panel of independent coding experts interacted methodically to determine that the three criteria to identify a problematic ICD-9-CM subcategory for further study were cost, volume, and level of coding confusion caused. The Medicare Provider Analysis and Review (MEDPAR) 2007 fiscal year data set as well as suggestions from the experts were used to identify coding subcategories based on cost and volume data. Next, the panelists performed two rounds of independent ranking before identifying Excisional Debridement as the subcategory that causes the most confusion among coders. As a result, they recommended it for further study aimed at improving coding accuracy and variation. This framework can be adopted at different levels for similar studies in need of a schema for determining problematic subcategories of code sets.

  5. Interleaver Design Method for Turbo Codes Based on Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    Tan Ying; Sun Hong; Zhou Huai-bei

    2004-01-01

    This paper describes a new interleaver construction technique for turbo codes. The technique searches for as many pseudo-random interleaving patterns as possible under a certain condition using genetic algorithms (GAs). The new interleavers retain the advantages of S-random interleavers, and this construction technique reduces the time taken to generate pseudo-random interleaving patterns under the given condition. The results obtained indicate that the new interleavers yield performance equal to or better than that of S-random interleavers. Compared to the S-random interleaver, this design also requires a lower level of computational complexity.
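
    The S-random benchmark can itself be generated by rejection sampling: accept a candidate position only if it differs by more than S from the values chosen for the previous S positions. The Python sketch below restarts on dead ends; the values of n, S and the retry bounds are illustrative assumptions.

      import random

      def s_random_interleaver(n, s, max_restarts=1000, seed=0):
          rng = random.Random(seed)
          for _ in range(max_restarts):
              remaining, perm, ok = list(range(n)), [], True
              while remaining and ok:
                  for _ in range(100):           # bounded rejection sampling
                      cand = rng.choice(remaining)
                      if all(abs(cand - v) > s for v in perm[-s:]):
                          perm.append(cand)
                          remaining.remove(cand)
                          break
                  else:
                      ok = False                 # dead end: restart from scratch
              if ok:
                  return perm
          raise RuntimeError("no S-random permutation found; reduce s")

      print(s_random_interleaver(32, 3)[:10])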

  6. Development of Continuous-Energy Eigenvalue Sensitivity Coefficient Calculation Methods in the Shift Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Perfetti, Christopher M [ORNL; Martin, William R [University of Michigan; Rearden, Bradley T [ORNL; Williams, Mark L [ORNL

    2012-01-01

    Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.

  7. The Permutation Groups and the Equivalence of Cyclic and Quasi-Cyclic Codes

    CERN Document Server

    Guenda, Kenza

    2010-01-01

    We give the class of finite groups which arise as the permutation groups of cyclic codes over finite fields. Furthermore, we extend the results of Brand and Huffman et al. and we find the properties of the set of permutations by which two cyclic codes of length p^r can be equivalent. We also find the set of permutations by which two quasi-cyclic codes can be equivalent.

  8. The Optimal Fix-Free Code for Anti-Uniform Sources

    Directory of Open Access Journals (Sweden)

    Ali Zaghian

    2015-03-01

    An \(n\)-symbol source which has a Huffman code with codelength vector \(L_{n}=(1,2,3,\cdots,n-2,n-1,n-1)\) is called an anti-uniform source. In this paper, it is shown that for this class of sources, the optimal fix-free code and symmetric fix-free code is \(C_{n}^{*}=(0,11,101,1001,\cdots,1\overbrace{0\cdots0}^{n-2}1)\).
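
    The fix-free property (no codeword is a prefix or a suffix of another) is easy to verify mechanically; a minimal Python check of the stated code for the illustrative case n = 6:

      def is_fix_free(code):
          return not any(a != b and (b.startswith(a) or b.endswith(a))
                         for a in code for b in code)

      n = 6
      code = ["0", "11"] + ["1" + "0" * k + "1" for k in range(1, n - 1)]
      print(code)               # ['0', '11', '101', '1001', '10001', '100001']
      print(is_fix_free(code))  # True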

  9. On the optimality of code options for a universal noiseless coder

    Science.gov (United States)

    Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner

    1991-01-01

    A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable length coding algorithms. Custom VLSI coder and decoder modules capable of processing over 20 million samples per second are currently under development. The first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition; other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set at specified symbol entropy values. Simulation results are obtained on actual aerial imagery, and they confirm the optimality of the scheme. On sources having Gaussian or Poisson distributions, coder performance is also projected through analysis and simulation.

  10. Deblurring, Localization and Geometry Correction of 2D QR Bar Codes Using Richardson Lucy Method

    Directory of Open Access Journals (Sweden)

    Manpreet Kaur

    2014-09-01

    This paper addresses the recognition of 2D QR bar codes, describing the deblurring, localization and geometry correction of the codes. Captured images are blurred by motion between the object and the camera, so the image containing the QR barcode cannot be read by a QR reader; to make the barcode readable, the images need to be deblurred. The Lucy-Richardson and Wiener deconvolution methods are used to deblur and localize the bar code. Of the two, the Lucy-Richardson method performs best, taking less execution time. A Simulink model is used for the geometry correction of the QR bar code. In future work, we would like to investigate the generalization of our algorithm to handle more complicated motion blur.
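
    A minimal sketch of the Richardson-Lucy step with scikit-image follows; the synthetic image and the horizontal motion-blur PSF are illustrative, and in older scikit-image releases the num_iter keyword is named iterations.

      import numpy as np
      from scipy.signal import convolve2d
      from skimage.restoration import richardson_lucy

      rng = np.random.default_rng(0)
      image = rng.random((64, 64))          # stand-in for a QR-code image
      psf = np.zeros((5, 5))
      psf[2, :] = 1.0 / 5.0                 # horizontal motion-blur kernel
      blurred = convolve2d(image, psf, mode="same", boundary="symm")

      restored = richardson_lucy(blurred, psf, num_iter=30)
      print(restored.shape)                 # (64, 64)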

  11. A System Call Randomization Based Method for Countering Code-Injection Attacks

    Directory of Open Access Journals (Sweden)

    Zhaohui Liang

    2009-10-01

    Code-injection attacks pose a serious threat to today's Internet, and existing defense methods have deficiencies in performance overhead and effectiveness. To this end, we propose a method that uses system call randomization to counter code-injection attacks, based on the idea of instruction set randomization. Injected code must use system calls to perform its actions. By randomizing the system calls of the target process, an attacker who does not know the key to the randomization algorithm will inject code that is not randomized like the target process and is therefore invalid for the corresponding de-randomizing module; the injected code fails to execute because it cannot invoke system calls correctly. Moreover, with an extended compiler, our method randomizes source code during compilation and randomizes binary executables by feature matching. Experiments on the prototype we built show that our method can effectively counter a variety of code-injection attacks with low overhead.

  12. A study of transonic aerodynamic analysis methods for use with a hypersonic aircraft synthesis code

    Science.gov (United States)

    Sandlin, Doral R.; Davis, Paul Christopher

    1992-01-01

    A means of performing routine transonic lift, drag, and moment analyses on hypersonic all-body and wing-body configurations was studied. The analysis method is to be used in conjunction with the Hypersonic Vehicle Optimization Code (HAVOC). A review of existing techniques is presented, after which three methods, chosen to represent a spectrum of capabilities, are tested and the results are compared with experimental data. The three methods consist of a wave drag code, a full potential code, and a Navier-Stokes code. The wave drag code, representing the empirical approach, has very fast CPU times, but very limited and sporadic results. The full potential code provides results which compare favorably to the wind tunnel data, but with a dramatic increase in computational time. Even more extreme is the Navier-Stokes code, which provides the most favorable and complete results, but with a very large turnaround time. The full potential code, TRANAIR, is used for additional analyses, because of the superior results it can provide over empirical and semi-empirical methods, and because of its automated grid generation. TRANAIR analyses include an all body hypersonic cruise configuration and an oblique flying wing supersonic transport.

  13. Improving the efficiency of the genetic code by varying the codon length--the perfect genetic code.

    Science.gov (United States)

    Doig, A J

    1997-10-07

    The function of DNA is to specify protein sequences. The four-base "alphabet" used in nucleic acids is translated to the 20-letter alphabet of proteins (plus a stop signal) via the genetic code. The code is neither overlapping nor punctuated, but has mRNA sequences read in successive triplet codons until reaching a stop codon. The true genetic code uses three bases for every amino acid. The efficiency of the genetic code can be significantly increased if the requirement for a fixed codon length is dropped so that the more common amino acids have shorter codon lengths and rare amino acids have longer codon lengths. More efficient codes can be derived using the Shannon-Fano and Huffman coding algorithms. The compression achieved using a Huffman code cannot be improved upon. I have used these algorithms to derive efficient codes for representing protein sequences using both two and four bases. The length of DNA required to specify the complete set of protein sequences could be significantly shorter if transcription used a variable codon length. The restriction to a fixed codon length of three bases means that it takes 42% more DNA than the minimum necessary, and the genetic code is 70% efficient. One can think of many reasons why this maximally efficient code has not evolved: there is very little redundancy, so almost any mutation causes an amino acid change, and many mutations would be potentially lethal frame-shift mutations if they led to a change in codon length. It would also be more difficult for the machinery of transcription to cope with a variable codon length. Nevertheless, in the strict and narrow sense of coding for protein sequences using the minimum length of DNA possible, the Huffman code derived here is perfect.
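
    The quoted figures can be reproduced from the entropy argument, assuming (for illustration) an amino-acid entropy of about $H \approx 4.2$ bits. A fixed codon of three bases over a four-letter alphabet carries $3\log_2 4 = 6$ bits, so

    $$\mathrm{efficiency} = \frac{H}{3\log_2 4} \approx \frac{4.2}{6} = 0.70, \qquad \frac{3\log_2 4}{H} - 1 \approx 0.43,$$

    consistent with the 70% efficiency and roughly 42% extra DNA quoted above.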

  14. A decoding method of an n length binary BCH code through (n + 1)n length binary cyclic code

    Directory of Open Access Journals (Sweden)

    TARIQ SHAH

    2013-09-01

    For a given binary BCH code Cn of length n = 2^s - 1 generated by a polynomial of degree r, there is no binary BCH code of length (n + 1)n generated by a generalized polynomial of degree 2r. However, there does exist a binary cyclic code C(n+1)n of length (n + 1)n such that the binary BCH code Cn is embedded in C(n+1)n. Accordingly, a high code rate is attained through the binary cyclic code C(n+1)n for a binary BCH code Cn. Furthermore, a proposed algorithm facilitates the decoding of a binary BCH code Cn through the decoding of the binary cyclic code C(n+1)n, while the codes Cn and C(n+1)n have the same minimum Hamming distance.

  15. Compatibility of global environmental assessment methods of buildings with an Egyptian energy code

    Directory of Open Access Journals (Sweden)

    Amal Kamal Mohamed Shamseldin

    2017-04-01

    Several environmental assessment methods for buildings have emerged around the world to set environmental classifications for buildings, such as the American method "Leadership in Energy and Environmental Design" (LEED), the most widespread one. Several countries have decided to develop their own assessment methods to catch up with this trend, including Egypt. The main goal of establishing the Egyptian method was to help enforce the voluntary local energy efficiency codes. A local survey clearly showed that many construction practitioners in Egypt do not even know the local method, and those interested in environmental assessment of buildings seek to apply LEED rather than anything else. This raises questions about the American method's compatibility with the Egyptian energy codes (which contain the most exact characteristics and requirements and give the most credible energy efficiency results for buildings in Egypt), and about whether another global method gives results closer to those of the Egyptian codes, especially given the great variety of energy efficiency measurement approaches used by the different assessment methods. The researcher therefore examines the compatibility of non-local assessment methods with the local energy efficiency codes. If the results are not compatible, the Egyptian government should take steps to increase the local building sector's awareness of the Egyptian method so as to benefit from these codes, and it should begin to enforce the method within building permits after proper guidance and feedback.

  16. A Generic Top-Down Dynamic-Programming Approach to Prefix-Free Coding

    CERN Document Server

    Golin, Mordecai; Yu, Jiajin

    2008-01-01

    Given a probability distribution over a set of n words to be transmitted, the Huffman coding problem is to find a minimal-cost prefix-free code for transmitting those words. The basic Huffman coding problem can be solved in O(n log n) time, but variations are more difficult. One of the standard techniques for solving these variations utilizes a top-down dynamic programming approach. In this paper we show that this approach is amenable to dynamic programming speedup techniques, permitting a speedup of an order of magnitude for many algorithms in the literature for such variations as mixed-radix, reserved-length, and one-ended coding. These speedups are immediate implications of a general structural property that permits batching together the calculation of many DP entries.
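
    For context, the basic O(n log n) construction that these variations generalize can be sketched as below; a minimal illustration (the example weights are arbitrary, and this is not the paper's dynamic program):

```python
import heapq

def huffman_cost(weights):
    """Total weighted codeword length of an optimal binary prefix-free code.

    Repeatedly merges the two smallest weights; each merge adds one bit to
    every codeword under the merged subtrees, so the sum of all merge
    weights equals the optimal cost. Runs in O(n log n).
    """
    heap = list(weights)
    heapq.heapify(heap)
    cost = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        cost += a + b
        heapq.heappush(heap, a + b)
    return cost

# Example: expected code length per symbol for a skewed source.
probs = [0.4, 0.2, 0.2, 0.1, 0.1]
print(huffman_cost(probs))  # average bits/symbol, approximately 2.2
```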

  17. Source Code Plagiarism Detection Method Using Protégé Built Ontologies

    Directory of Open Access Journals (Sweden)

    Ion SMEUREANU

    2013-01-01

    Full Text Available Software plagiarism is a growing and serious problem that affects computer science universities in particular and the quality of education in general. More and more students tend to copy the software for their theses from older theses or Internet databases. Checking source code manually to detect whether programs are similar or identical is a laborious and time-consuming job, and perhaps even impossible given the existence of large digital repositories. An ontology is a way of describing a document's semantics, so it can easily be used for source code files too. The OWL Web Ontology Language can describe both the vocabulary and the taxonomy of a programming language source code. SPARQL is an SQL-like query language that extracts stored or inferred information from ontologies. Our paper proposes a source code plagiarism detection method, based on ontologies created using the Protégé editor, which can be applied in scanning the source code of students' thesis software.

  18. Hydrodynamic Optimization Method and Design Code for Stall-Regulated Hydrokinetic Turbine Rotors

    Energy Technology Data Exchange (ETDEWEB)

    Sale, D.; Jonkman, J.; Musial, W.

    2009-08-01

    This report describes the adaptation of a wind turbine performance code for use in the development of a general-use design code and optimization method for stall-regulated horizontal-axis hydrokinetic turbine rotors. This rotor optimization code couples a modern genetic algorithm with a blade-element momentum performance code in a user-friendly graphical user interface (GUI) that allows for rapid and intuitive design of optimal stall-regulated rotors. The optimization method calculates the optimal chord, twist, and hydrofoil distributions that maximize hydrodynamic efficiency while ensuring that the rotor produces an ideal power curve and avoids cavitation. Optimizing a rotor for maximum efficiency does not necessarily create the turbine with the lowest cost of energy, but maximizing efficiency is an excellent first-pass criterion in the design process. To test the capabilities of the optimization method, two conceptual rotors were designed that successfully met the design objectives.

  19. Comparison study of EMG signals compression by methods transform using vector quantization, SPIHT and arithmetic coding.

    Science.gov (United States)

    Ntsama, Eloundou Pascal; Colince, Welba; Ele, Pierre

    2016-01-01

    In this article, we make a comparative study of a new compression approach using the discrete cosine transform (DCT) and the discrete wavelet transform (DWT). We seek the transform best suited to vector quantization for compressing EMG signals. To do this, we first associated vector quantization with the DCT, then vector quantization with the DWT. The coding phase uses SPIHT (set partitioning in hierarchical trees) coding associated with arithmetic coding. The method is demonstrated and evaluated on actual EMG data. Objective performance evaluation metrics are presented: compression factor, percentage root-mean-square difference, and signal-to-noise ratio. The results show that the method based on the DWT is more efficient than the method based on the DCT.

  20. Development of continuous-energy eigenvalue sensitivity coefficient calculation methods in the Shift Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Perfetti, C.; Martin, W. [Univ. of Michigan, Dept. of Nuclear Engineering and Radiological Sciences, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109-2104 (United States); Rearden, B.; Williams, M. [Oak Ridge National Laboratory, Reactor and Nuclear Systems Div., Bldg. 5700, P.O. Box 2008, Oak Ridge, TN 37831-6170 (United States)

    2012-07-01

    Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for two small-scale test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods. (authors)

  1. Adaptive bit truncation and compensation method for EZW image coding

    Science.gov (United States)

    Dai, Sheng-Kui; Zhu, Guangxi; Wang, Yao

    2003-09-01

    The embedded zero-tree wavelet (EZW) algorithm is widely adopted to compress wavelet coefficients of images, with the property that the bit stream can be truncated at any point. The lower bit planes of the wavelet coefficients are verified to be less important than the higher bit planes, so they can be truncated and left unencoded. Based on experiments, a generalized function is deduced in this paper that provides a rough guide for the EZW encoder to decide intelligently how many low bit planes to truncate. In the EZW decoder, a simple method is presented to compensate for the truncated wavelet coefficients; it markedly enhances the quality of the reconstructed image at scarcely any additional cost.
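
    A minimal sketch of the truncate-and-compensate idea, assuming integer wavelet coefficients and a midpoint compensation rule (the paper's actual compensation function is not reproduced here):

```python
import numpy as np

def truncate_bitplanes(coeffs, k):
    """Drop the k lowest bit planes of integer wavelet coefficients."""
    sign = np.sign(coeffs)
    return sign * (np.abs(coeffs) >> k)  # magnitudes keep only high planes

def compensate(truncated, k):
    """Decoder-side compensation: reconstruct each nonzero magnitude at the
    midpoint of its quantization bin instead of its lower edge."""
    sign = np.sign(truncated)
    mag = np.abs(truncated)
    offset = (1 << (k - 1)) if k > 0 else 0
    return sign * ((mag << k) + np.where(mag > 0, offset, 0))

c = np.array([37, -5, 0, 120])
t = truncate_bitplanes(c, 3)   # -> [ 4, 0, 0, 15]
print(compensate(t, 3))        # -> [ 36, 0, 0, 124]
```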

  2. Status of SFR Codes and Methods QA Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Brunett, Acacia J. [Argonne National Lab. (ANL), Argonne, IL (United States); Briggs, Laural L. [Argonne National Lab. (ANL), Argonne, IL (United States); Fanning, Thomas H. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-01-31

    This report details development of the SAS4A/SASSYS-1 SQA Program and describes the initial stages of Program implementation planning. The provisional Program structure, which is largely focused on the establishment of compliant SQA documentation, is outlined in detail, and Program compliance with the appropriate SQA requirements is highlighted. Additional Program activities, such as improvements to testing methods and Program surveillance, are also described in this report. Given that the programmatic resources currently granted to development of the SAS4A/SASSYS-1 SQA Program framework are not sufficient to adequately address all SQA requirements (e.g., NQA-1, NUREG/BR-0167, etc.), this report also provides an overview of the gaps that remain in the SQA Program and highlights recommendations on a path forward to resolution of these issues. One key finding of this effort is the identification of the need for an SQA program sustainable over multiple years within DOE annual R&D funding constraints.

  3. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.

  4. DCT domain filtering method for multi-antenna code acquisition

    Institute of Scientific and Technical Information of China (English)

    Xiaojie Li; Luping Xu; Shibin Song; Hua Zhang

    2013-01-01

    For global navigation satellite system (GNSS) signals in Gaussian and Rayleigh fading channels, a novel signal detection algorithm is proposed. In the low-frequency-uncertainty case, after performing the discrete cosine transform (DCT) on the outputs of the partial matched filter (PMF) for every antenna, the high-order components in the transform domain are filtered out, and equal-gain (EG) combination of the signal reconstructed by the inverse discrete cosine transform (IDCT) is then performed. Thus, owing to the different frequency distribution characteristics of the noise and the signal, after EG combination the signal energy suffers almost no loss while the noise energy is greatly reduced. Theoretical analysis and simulation results show that the detection algorithm can effectively improve the signal-to-noise ratio of the captured signal and increase the probability of detection under the same false alarm probability. In addition, this method can also be applied to Rayleigh fading channels with a moving antenna.

  5. On the efficiency and accuracy of interpolation methods for spectral codes

    NARCIS (Netherlands)

    Hinsberg, van M.A.T.; Thije Boonkkamp, ten J.H.M.; Toschi, F.; Clercx, H.J.H.

    2012-01-01

    In this paper a general theory for interpolation methods on a rectangular grid is introduced. By the use of this theory an efficient B-spline-based interpolation method for spectral codes is presented. The theory links the order of the interpolation method with its spectral properties. In this way m

  6. Coding technique with progressive reconstruction based on VQ and entropy coding applied to medical images

    Science.gov (United States)

    Martin-Fernandez, Marcos; Alberola-Lopez, Carlos; Guerrero-Rodriguez, David; Ruiz-Alzola, Juan

    2000-12-01

    In this paper we propose a novel lossless coding scheme for medical images that allows the final user to switch between a lossy and a lossless mode. This is done by means of a progressive reconstruction philosophy (which can be interrupted at will), so we believe our scheme offers a way to trade off between the accuracy needed for medical diagnosis and the information reduction needed for storage and transmission. We combine vector quantization, run-length bit-plane coding, and entropy coding. Specifically, the first step is a vector quantization procedure; the centroid codes are Huffman-coded using a set of probabilities calculated in the learning phase. The image is reconstructed at the coder in order to obtain the error image; this second image is divided into bit planes, which are then run-length and Huffman coded. A second statistical analysis is performed during the learning phase to obtain the parameters needed in this final stage. Our coder is currently trained on hand radiographs and fetal echographies. We compare our results for these two types of images with classical results on bit-plane coding and the JPEG standard. Our coder turns out to outperform both of them.
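
    The bit-plane/run-length stage of such a scheme can be sketched as follows (an illustrative simplification; the trained Huffman stage that would follow over the run lengths is omitted):

```python
import numpy as np

def bit_planes(err, nbits):
    """Split a non-negative integer error image into its bit planes."""
    return [((err >> b) & 1).astype(np.uint8) for b in range(nbits)]

def run_lengths(plane):
    """Run-length code one bit plane in raster order.

    Returns (first_bit, runs); the runs of identical bits would then be
    entropy-coded, e.g. with a Huffman code trained offline.
    """
    flat = plane.ravel()
    runs, count = [], 1
    for prev, cur in zip(flat[:-1], flat[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return int(flat[0]), runs

err = np.array([[0, 0, 3], [1, 1, 0]])
for b, plane in enumerate(bit_planes(err, 2)):
    print(b, run_lengths(plane))  # plane 0: (0, [2, 3, 1]); plane 1: (0, [2, 1, 3])
```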

  7. A Low Complexity VCS Method for PAPR Reduction in Multicarrier Code Division Multiple Access

    Institute of Scientific and Technical Information of China (English)

    Si-Si Liu; Yue Xiao; Qing-Song Wen; Shao-Qian Li

    2007-01-01

    This paper investigates a peak-to-average power ratio (PAPR) reduction method for multicarrier code division multiple access (MC-CDMA) systems. Variable code sets (VCS), a spreading-code selection scheme, can improve the PAPR property of MC-CDMA signals, but the technique requires an exhaustive search over the combinations of spreading code sets, and the search complexity increases exponentially with the number of active users. Based on this fact, we propose a low-complexity VCS (LC-VCS) method to reduce the computational complexity. The basic idea of LC-VCS is to derive new signals using the relationship between candidate signals. Simulation results show that the proposed approach can reduce PAPR with lower computational complexity. In addition, it can be received blindly without any side information.

  8. Hierarchical Symbolic Analysis of Large Analog Circuits with Totally Coded Method

    Institute of Scientific and Technical Information of China (English)

    XU Jing-bo

    2006-01-01

    Symbolic analysis has many applications in the design of analog circuits. Existing approaches rely on two forms of symbolic-expression representation: expanded sum-of-product form and arbitrarily nested form. The expanded form suffers from the problem that the number of product terms grows exponentially with the size of a circuit; the nested form is neither canonical nor amenable to symbolic manipulation. In this paper, we present a new approach to exact and canonical symbolic analysis that exploits the sparsity and sharing of product terms. This algorithm, called the totally coded method (TCM), represents the symbolic determinant of a circuit matrix by code series and performs symbolic analysis by code manipulation. We describe an efficient code-ordering heuristic and prove that it is optimal for ladder-structured circuits. For practical analog circuits, TCM not only retains all the advantages of the determinant decision diagram (DDD) algorithm but is simpler and more efficient than the DDD method.

  9. Wavelet based hierarchical coding scheme for radar image compression

    Science.gov (United States)

    Sheng, Wen; Jiao, Xiaoli; He, Jifeng

    2007-12-01

    This paper presents a wavelet-based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized into a raster-scanned image according to the radar's pulse repetition frequency. After reorganization, the reformed image is decomposed into blocks of different frequency bands by a 2-D wavelet transform, and each block is quantized and coded by the Huffman coding scheme. A demonstration system was developed, showing that under real-time processing requirements the compression ratio can be very high with no significant loss of target signal in the restored radar image.

  10. Coupling methods for parallel running RELAPSim codes in nuclear power plant simulation

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yankai; Lin, Meng, E-mail: linmeng@sjtu.edu.cn; Yang, Yanhua

    2016-02-15

    When a plant is modeled in detail for high precision, it is hard to achieve real-time calculation with a single RELAP5 process in a large-scale simulation. To improve the speed and preserve the precision of the simulation at the same time, coupling methods for parallel running RELAPSim codes were proposed in this study. An explicit coupling method via coupling boundaries was realized based on a data-exchange and procedure-control environment. The synchronization frequency was chosen as a compromise to improve the precision of the simulation while guaranteeing real-time execution. The coupling methods were assessed using both single-phase and two-phase flow models, and good agreement was obtained between the splitting-coupling models and the integrated model. The mitigation of a steam generator tube rupture (SGTR) was simulated as an integral application of the coupling models. A large-scope NPP simulator was developed adopting six splitting-coupling models of RELAPSim and other simulation codes. The coupling models improve the speed of simulation significantly and make real-time calculation possible. In this paper, the coupling of the models in the engineering simulator is taken as an example to expound the coupling methods, i.e., coupling between parallel running RELAPSim codes, and coupling between RELAPSim and other types of simulation codes. However, the coupling methods are also applicable to other simulators, for example, a simulator employing ATHLET instead of RELAP5, or another logic code instead of SIMULINK. It is believed that the coupling method is generally applicable to NPP simulators regardless of the specific codes chosen in this paper.

  11. Practical Entanglement Distillation Scheme Using Recurrence Method And Quantum Low Density Parity Check Codes

    CERN Document Server

    Chau, H F

    2009-01-01

    Many entanglement distillation schemes use either universal random hashing or breeding as their final step to obtain shared, almost perfect EPR pairs. Both methods involve random stabilizer quantum error-correcting codes whose syndromes can be measured using simple and efficient quantum circuits. When applied to high-fidelity Werner states, the highest-yield protocol among those using local Bell measurements and local unitary operations is one that uses a certain breeding method, and the random hashing method loses to breeding by only a thin margin. In spite of their high yield, the hardness of decoding a random linear code makes the use of random hashing and breeding infeasible in practice. In this pilot study, we analyze the performance of the recurrence method, a well-known entanglement distillation scheme, by replacing the final random hashing or breeding procedure with various efficiently decodable quantum codes. We find that among all the replacements we have investigated, the one using a certain adaptive quant...

  12. Verification & Validation Toolkit to Assess Codes: Is it Theory Limitation, Numerical Method Inadequacy, Bug in the Code or a Serious Flaw?

    Science.gov (United States)

    Bombardelli, F. A.; Zamani, K.

    2014-12-01

    We introduce and discuss an open-source, user-friendly numerical post-processing piece of software for assessing the reliability of the modeling results of environmental fluid mechanics codes. Verification and Validation, Uncertainty Quantification (VAVUQ) is a toolkit developed in Matlab© for general V&V purposes. In this work, the VAVUQ implementation of V&V techniques and its user interfaces are discussed. VAVUQ is able to read Excel, Matlab, ASCII, and binary files, and it produces a log of the results in txt format. Each capability of the code is then discussed through an example: the first example is code verification of a sediment transport code, developed with the Finite Volume Method, via MES. The second example is solution verification of a code for groundwater flow, developed with the Boundary Element Method, via MES. The third example is solution verification of a mixed-order Compact Difference Method code for heat transfer via MMS. The fourth example is solution verification of a 2-D Finite Difference Method code for floodplain analysis via Complete Richardson Extrapolation. In turn, the application of VAVUQ in quantitative model skill assessment studies (validation) of environmental codes is shown through two examples: validation of a two-phase flow computational model of air entrainment in a free-surface flow against laboratory measurements, and heat transfer modeling at the earth's surface against field measurements. At the end, we discuss practical considerations and common pitfalls in the interpretation of V&V results.

  13. SFCVQ and EZW coding method based on Karhunen-Loeve transformation and integer wavelet transformation

    Science.gov (United States)

    Yan, Jingwen; Chen, Jiazhen

    2007-03-01

    A new hyperspectral image compression method combining spectral feature classification vector quantization (SFCVQ) and the embedded zero-tree wavelet (EZW), based on the Karhunen-Loeve transformation (KLT) and an integer wavelet transformation, is presented. In comparison with other methods, this method not only keeps the characteristics of high compression ratio and easy real-time transmission, but also has the advantage of high computation speed. After lifting-based integer wavelet and SFCVQ coding are introduced, a system for nearly lossless compression of hyperspectral images is designed. KLT is used to remove the correlation of spectral redundancy as a one-dimensional (1D) linear transform, and SFCVQ coding is applied to raise the compression ratio. The two-dimensional (2D) integer wavelet transformation is adopted for the decorrelation of 2D spatial redundancy. The EZW coding method is applied to compress data in the wavelet domain. Experimental results show that in comparison with the methods of wavelet SFCVQ (WSFCVQ), improved BiBlock zero-tree coding (IBBZTC), and feature spectral vector quantization (FSVQ), the peak signal-to-noise ratio (PSNR) of this method improves by over 9 dB, and the total compression performance is improved greatly.

  14. Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code

    Science.gov (United States)

    Taherkhani, Ahmad; Malmi, Lauri

    2013-01-01

    In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concept of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…

  15. SFCVQ and EZW coding method based on Karhunen-Loeve transformation and integer wavelet transformation

    Institute of Scientific and Technical Information of China (English)

    Jingwen Yan; Jiazhen Chen

    2007-01-01

    A new hyperspectral image compression method combining spectral feature classification vector quantization (SFCVQ) and the embedded zero-tree wavelet (EZW), based on the Karhunen-Loeve transformation (KLT) and an integer wavelet transformation, is presented. In comparison with other methods, this method not only keeps the characteristics of high compression ratio and easy real-time transmission, but also has the advantage of high computation speed. After lifting-based integer wavelet and SFCVQ coding are introduced, a system for nearly lossless compression of hyperspectral images is designed. KLT is used to remove the correlation of spectral redundancy as a one-dimensional (1D) linear transform, and SFCVQ coding is applied to raise the compression ratio. The two-dimensional (2D) integer wavelet transformation is adopted for the decorrelation of 2D spatial redundancy. The EZW coding method is applied to compress data in the wavelet domain. Experimental results show that in comparison with the methods of wavelet SFCVQ (WSFCVQ), improved BiBlock zero-tree coding (IBBZTC), and feature spectral vector quantization (FSVQ), the peak signal-to-noise ratio (PSNR) of this method improves by over 9 dB, and the total compression performance is improved greatly.

  16. Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code

    Science.gov (United States)

    Taherkhani, Ahmad; Malmi, Lauri

    2013-01-01

    In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concept of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…

  17. How could the replica method improve accuracy of performance assessment of channel coding?

    Science.gov (United States)

    Kabashima, Yoshiyuki

    2009-12-01

    We explore the relation between the techniques of statistical mechanics and information theory for assessing the performance of channel coding. We base our study on a framework developed by Gallager in IEEE Trans. Inform. Theory IT-11, 3 (1965), where the minimum decoding error probability is upper-bounded by an average of a generalized Chernoff bound over a code ensemble. We show that the resulting bound in this framework can be assessed directly by the replica method developed in the statistical mechanics of disordered systems, whereas Gallager's original methodology requires a further replacement by another bound utilizing Jensen's inequality. Our approach associates a seemingly ad hoc restriction on an adjustable parameter used for optimizing the bound with a phase transition between two replica-symmetric solutions, and it can improve the accuracy of performance assessments of general code ensembles, including low-density parity-check codes, although its mathematical justification is still open.

  18. Methods, algorithms and computer codes for calculation of electron-impact excitation parameters

    CERN Document Server

    Bogdanovich, P; Stonys, D

    2015-01-01

    We describe the computer codes, developed at Vilnius University, for the calculation of electron-impact excitation cross sections, collision strengths, and excitation rates in the plane-wave Born approximation. These codes utilize multireference atomic wavefunctions, which are also adopted to calculate radiative transition parameters of complex many-electron ions. This leads to consistent data sets suitable for plasma modelling codes. Two versions of the electron scattering codes are considered in the present work, both employing the configuration interaction method for the inclusion of correlation effects and the Breit-Pauli approximation to account for relativistic effects. These versions differ only in the one-electron radial orbitals: the first employs non-relativistic numerical radial orbitals, while the other uses quasirelativistic radial orbitals. The accuracy of the produced results is assessed by comparing radiative transition and electron-impact excitation data for neutral hydrogen, helium...

  19. Design of an Experiment System for Source Coding Based on Matlab

    Institute of Scientific and Technical Information of China (English)

    宋丽丽; 秦艳

    2012-01-01

    Source coding is an important part of the Information Theory and Coding course. A source coding experiment system is designed using the Matlab graphical user interface (GUI). Several common source coding methods are implemented: Shannon coding, Fano coding, Huffman coding, uniform coding, and non-uniform coding. Practice shows that the system is easy to operate and highly interactive, offering an effective auxiliary tool for experimental teaching.
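
    Among the methods listed, Shannon coding is the least commonly documented; the sketch below (a Python stand-in for the Matlab GUI implementation described above, with an illustrative example distribution) assigns each symbol the first ⌈-log2 p⌉ bits of a cumulative probability:

```python
from math import ceil, log2

def shannon_code(probs):
    """Shannon coding: sort symbols by decreasing probability; each symbol
    gets the first ceil(-log2 p) bits of the binary expansion of the
    cumulative probability of all more-probable symbols."""
    order = sorted(probs.items(), key=lambda kv: -kv[1])
    code, cum = {}, 0.0
    for sym, p in order:
        length = ceil(-log2(p))
        frac, bits = cum, []
        for _ in range(length):
            frac *= 2
            bits.append('1' if frac >= 1 else '0')
            frac -= int(frac)
        code[sym] = ''.join(bits)
        cum += p
    return code

print(shannon_code({'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}))
# {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```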

  20. Effective wavelet-based compression method with adaptive quantization threshold and zerotree coding

    Science.gov (United States)

    Przelaskowski, Artur; Kazubek, Marian; Jamrogiewicz, Tomasz

    1997-10-01

    An efficient image compression technique, especially for medical applications, is presented. Dyadic wavelet decomposition using Antonini and Villasenor filter banks is followed by adaptive space-frequency quantization and zerotree-based entropy coding of the wavelet coefficients. Threshold selection and uniform quantization are based on a spatial variance estimate built on the lowest-frequency subband data set. The threshold value for each coefficient is evaluated as a linear function of a 9th-order binary context. After quantization, zerotree construction, pruning, and arithmetic coding are applied for efficient lossless data coding. The presented compression method is less complex than the most effective EZW-based techniques but achieves comparable compression efficiency. Specifically, our method matches SPIHT's efficiency in MR image compression, is slightly better for CT images, and is significantly better in US image compression. Thus the compression efficiency of the presented method is competitive with the best published algorithms in the literature across diverse classes of medical images.

  1. WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection

    Directory of Open Access Journals (Sweden)

    Deqiang Fu

    2017-01-01

    Full Text Available In this paper, we introduce a source code plagiarism detection method named WASTK (Weighted Abstract Syntax Tree Kernel) for computer science education. Unlike other plagiarism detection methods, WASTK takes aspects other than raw similarity between programs into account. WASTK first transforms the source code of a program into an abstract syntax tree and then obtains the similarity by calculating the tree kernel of the two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency) from the field of information retrieval is applied: each node in an abstract syntax tree is assigned a weight by TF-IDF. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods such as Sim and JPlag.
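
    The TF-IDF weighting over abstract syntax trees can be illustrated with a drastically simplified sketch: node-type histograms compared by TF-IDF-weighted cosine similarity, standing in for the weighted tree kernel (all names and the use of Python's ast module are illustrative, not WASTK's actual implementation):

```python
import ast
import math
from collections import Counter

def node_counts(source):
    """Histogram of AST node types for one program (the 'term frequencies')."""
    return Counter(type(n).__name__ for n in ast.walk(ast.parse(source)))

def tfidf_similarity(src_a, src_b, corpus):
    """Cosine similarity of TF-IDF-weighted node-type vectors, so node types
    common to every submission (e.g. Module) carry little weight."""
    docs = [node_counts(s) for s in corpus]
    df = Counter(t for d in docs for t in d)
    idf = {t: math.log(len(docs) / df[t]) for t in df}
    va, vb = node_counts(src_a), node_counts(src_b)
    terms = set(va) | set(vb)
    dot = sum(va[t] * vb[t] * idf.get(t, 0.0) ** 2 for t in terms)
    na = math.sqrt(sum((va[t] * idf.get(t, 0.0)) ** 2 for t in terms))
    nb = math.sqrt(sum((vb[t] * idf.get(t, 0.0)) ** 2 for t in terms))
    return dot / (na * nb) if na and nb else 0.0

a = "def f(x):\n    return x * x\n"
b = "def g(y):\n    return y * y\n"
print(tfidf_similarity(a, b, [a, b, "print('hello')\n"]))  # close to 1.0
```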

  2. A Method and Its Practice for Teaching the Fundamental Technology of Communication Protocols and Coding

    Science.gov (United States)

    Kobayashi, Tetsuji

    The education of information and communication technologies is important for engineering and covers terminals, communication media, transmission, switching, software, communication protocols, coding, etc. The proposed teaching method for protocols is based on the HDLC (High-level Data Link Control) procedures, uses our newly developed software "HDLC trainer", and includes extensions for understanding other protocols such as TCP/IP. For teaching the coding theory applied to error control in protocols, we use both a mathematical programming language and a general-purpose programming language. We have practiced and evaluated the proposed teaching method in our college, and the method is shown to be remarkably effective for understanding the fundamental technology of protocols and coding.

  3. An Approach to a Method of Construction of (F, K, 1) Optical Orthogonal Codes from Block Design

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    (F, K, 1) optical orthogonal codes (OOC) are the best address codes for optical code division multiple access (OCDMA) communication systems, but their construction is very complex. In this paper, a method of constructing OOCs from block designs is discussed and a computer-aided design method is presented, by which the desired (F, K, 1) OOC can be constructed easily.

  4. Introduction to scientific work methods - a necessity when performance-based codes are introduced

    DEFF Research Database (Denmark)

    Dederichs, Anne; Sørensen, Lars Schiøtt

    The introduction of performance-based codes in Denmark in 2004 requires new competences from people working with different aspects of fire safety in industry and the public sector. This abstract presents an attempt at reducing problems with handling and analysing the mathematical methods ... and CFD models when applying performance-based codes. This is done within the educational program "Master of Fire Safety Engineering" at the Department of Civil Engineering at the Technical University of Denmark. It was found that the students had general problems with academic methods. Therefore, a new...

  5. Domain and range decomposition methods for coded aperture x-ray coherent scatter imaging

    Science.gov (United States)

    Odinaka, Ikenna; Kaganovsky, Yan; O'Sullivan, Joseph A.; Politte, David G.; Holmgren, Andrew D.; Greenberg, Joel A.; Carin, Lawrence; Brady, David J.

    2016-05-01

    Coded aperture X-ray coherent scatter imaging is a novel modality for ascertaining the molecular structure of an object. Measurements from different spatial locations and spectral channels in the object are multiplexed through a radiopaque material (the coded aperture) onto the detectors. Iterative algorithms such as penalized expectation maximization (EM) and fully separable spectrally-grouped edge-preserving reconstruction have been proposed to recover the spatially-dependent coherent scatter spectral image from the multiplexed measurements. Such image recovery methods fall into the category of domain decomposition methods, since they recover independent pieces of the image at a time. The ordered-subsets technique has also been utilized in conjunction with penalized EM to accelerate its convergence. Ordered subsets is a range decomposition method because it uses parts of the measurements at a time to recover the image. In this paper, we analyze domain and range decomposition methods as they apply to coded aperture X-ray coherent scatter imaging using a spectrally-grouped edge-preserving regularizer, and we discuss the implications of the increased availability of parallel computational architectures on the choice of decomposition methods. We present results of applying the decomposition methods to experimental coded aperture X-ray coherent scatter measurements. Based on the results, an underlying observation is that updating different parts of the image or using different parts of the measurements in parallel decreases the rate of convergence, whereas using the parts sequentially can accelerate the rate of convergence.

  6. Improved DCT-based image coding and decoding methods for low-bit-rate applications

    Science.gov (United States)

    Jung, Sung-Hwan; Mitra, Sanjit K.

    1994-05-01

    The discrete cosine transform (DCT) is well known for highly efficient coding performance and is widely used in many image compression applications. However, in low-bit-rate coding it produces undesirable block artifacts that are visually displeasing. In addition, in many applications faster compression and easier VLSI implementation of the DCT computation are also important issues. The removal of the block artifacts and faster DCT computation are therefore of practical interest. In this paper, we outline a modified DCT computation scheme that provides a simple, efficient solution to the reduction of the block artifacts while achieving faster computation. We also derive a similar solution for the efficient computation of the inverse DCT. We have applied the new approach to the low-bit-rate coding and decoding of images. Initial simulation results on real images have verified the improved performance obtained using the proposed method over the standard JPEG method.

  7. Implementation of Preconditioned Krylov Subspace Method in MATRA Code for Whole Core Analysis of SMART

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Hyuk; Kim, S. J.; Park, J. P.; Hwang, D. H. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    A Krylov subspace method was implemented to perform an efficient whole-core calculation of SMART with a pin-by-pin subchannel model without lumped channels. The SMART core consists of 57 fuel assemblies of 17 by 17 arrays with 264 fuel rods and 25 guide tubes, giving in total 15,048 fuel rods and 16,780 subchannels. The restarted GMRES and BiCGStab methods were selected among the Krylov subspace methods. For the purpose of verifying the implementation, the whole-core problem was considered under normal operating conditions. In this problem, a linear system Ax = b is solved, where A is nearly symmetric and the system is preconditioned with an incomplete LU (ILU) factorization. Preconditioners based on incomplete LU factorization are among the most effective for solving the general large, sparse linear systems arising from practical engineering problems. The Krylov subspace method is expected to improve the calculation effectiveness of the MATRA code compared with direct methods and stationary iterative methods such as Gaussian elimination and SOR. The present study describes the implementation of Krylov subspace methods with ILU into the MATRA code. In this paper, we explore the improved performance of the MATRA code on the SMART whole-core problem by means of the Krylov subspace methods. For this purpose, two preconditioned Krylov subspace methods, GMRES and BiCGStab, are implemented into the subchannel code MATRA, with a typical ILU method used as the preconditioner. The numerical problems examined in this study indicate that the Krylov subspace methods show outstanding improvements in calculation speed and ease of convergence.
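
    The same ILU-preconditioned Krylov pattern can be illustrated with SciPy (a generic sketch on a synthetic sparse system, not the MATRA implementation; the rtol keyword applies to recent SciPy versions, which older releases call tol):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Build a small sparse, nonsymmetric test system A x = b.
n = 1000
A = sp.diags([-1.0, 4.0, -1.2], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# Incomplete LU factorization used as a preconditioner M ~ A^-1.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M, restart=30, rtol=1e-10)
print('converged' if info == 0 else 'failed', np.linalg.norm(A @ x - b))
```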

  8. GPU-accelerated 3D neutron diffusion code based on finite difference method

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Q.; Yu, G.; Wang, K. [Dept. of Engineering Physics, Tsinghua Univ. (China)

    2012-07-01

    The finite difference method, as a traditional numerical solution to the neutron diffusion equation, although considered simpler and more precise than coarse-mesh nodal methods, faces a bottleneck to wide application caused by the huge memory and long computation times it requires. In recent years, the concept of general-purpose computation on GPUs has provided a powerful computational engine for scientific research. In this study, a GPU-accelerated multi-group 3D neutron diffusion code based on the finite difference method was developed. First, a clean-sheet neutron diffusion code (3DFD-CPU) was written in C++ on the CPU architecture, and later ported to GPUs under NVIDIA's CUDA platform (3DFD-GPU). The IAEA 3D PWR benchmark problem was calculated in the numerical test, where three different codes, including the original CPU-based sequential code, a HYPRE (High Performance Preconditioners)-based diffusion code, and CITATION, were used as reference points to test the efficiency and accuracy of the GPU-based program. The results demonstrate both high efficiency and adequate accuracy of the GPU implementation for the neutron diffusion equation. A speedup factor of about 46 was obtained using NVIDIA's GeForce GTX470 GPU card against a 2.50 GHz Intel Quad Q9300 CPU processor. Compared with the HYPRE-based code running in parallel on an 8-core tower server, a speedup of about 2 could still be observed. More encouragingly, without any mathematical acceleration technology, the GPU implementation ran about 5 times faster than CITATION, which was itself accelerated by the SOR method and the Chebyshev extrapolation technique. (authors)

  9. Second-Generation Wavelet Applied to Lossless Image Compression Coding

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    In this paper, the second-generation wavelet transform is applied to lossless image coding, exploiting its property of being a reversible integer wavelet transform. The second-generation wavelet transform can provide a higher compression ratio than Huffman coding while, unlike the first-generation wavelet transform, it reconstructs the image without loss. The experimental results show that the second-generation wavelet transform can obtain excellent performance in medical image compression coding.
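
    The reversibility that makes lossless coding possible is easiest to see in the simplest lifting step, the integer Haar (S) transform; a minimal sketch (not necessarily the specific filter used in the paper):

```python
def haar_lift_forward(a, b):
    """One integer Haar lifting step on a sample pair: exactly invertible
    despite the floor division, which is what enables lossless coding."""
    d = a - b            # predict: detail coefficient
    s = b + (d >> 1)     # update: approximation coefficient
    return s, d

def haar_lift_inverse(s, d):
    b = s - (d >> 1)
    a = b + d
    return a, b

pair = (117, 42)
s, d = haar_lift_forward(*pair)
assert haar_lift_inverse(s, d) == pair  # perfect reconstruction
print(s, d)  # 79 75
```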

  10. 2D ArcPIC Code Description: Description of Methods and User / Developer Manual (second edition)

    CERN Document Server

    Sjobak, Kyrre Ness

    2014-01-01

    Vacuum discharges are one of the main limiting factors for future linear collider designs such as that of the Compact LInear Collider (CLIC). To optimize machine efficiency, maintaining the highest feasible accelerating gradient below a certain breakdown rate is desirable; understanding breakdowns can therefore help us to achieve this goal. As a part of ongoing theoretical research on vacuum discharges at the Helsinki Institute of Physics, the build-up of plasma can be investigated through the particle-in-cell method. For this purpose, we have developed the 2D ArcPIC code introduced here. We present an exhaustive description of the 2D ArcPIC code in several parts. In the first chapter, we introduce the particle-in-cell method in general and detail the techniques used in the code. In the second chapter, we describe the code and provide a documentation and derivation of the key equations occurring in it. In the third chapter, we describe utilities for running the code and analyzing the results. The last chapter...

  11. A parallel code based on the discontinuous Galerkin method on three-dimensional unstructured meshes for the MHD equations

    Science.gov (United States)

    Li, Xujing; Zheng, Weiying

    2016-10-01

    A new parallel code based on the discontinuous Galerkin (DG) method for hyperbolic conservation laws on three-dimensional unstructured meshes was developed recently. This code can be used for simulations of the MHD equations, which are very important in magnetically confined plasma research. The main challenges in MHD simulations for fusion include the complex geometry of the configurations, such as plasma in tokamaks, possibly discontinuous solutions, and large-scale computing. Our newly developed code is based on three-dimensional unstructured meshes, i.e., tetrahedra, which makes the code flexible for arbitrary geometries. Second-order polynomials are used on each element and an HWENO-type limiter is applied. The accuracy tests show that our scheme reaches the desired third-order accuracy, and the nonlinear shock test demonstrates that our code can capture sharp shock transitions. Moreover, one of the advantages of DG compared with classical finite element methods is that the matrices to be solved are localized on each element, making parallelization easy. Several simulations, including kink instabilities in toroidal geometry, will be presented here. Chinese National Magnetic Confinement Fusion Science Program 2015GB110003.

  12. Coarse mesh methods for the transport calculation in the CRONOS reactor code

    Energy Technology Data Exchange (ETDEWEB)

    Fedon-Magnaud, C.; Lautard, J.J.; Akherraz, B.; Wu, G.J. [Commissariat a l`Energie Atomique, Gif sur Yvette (France)

    1995-12-31

    Homogeneous transport methods have recently been implemented in the kinetic code CRONOS, dedicated mainly to PWR calculations. Two different methods are presented. The first one is based on the even-parity flux formalism and uses finite element spatial discretization and a discrete ordinates angular approximation; the treatment of anisotropic scattering is described in detail. The second method uses the odd flux as the main unknown and is closely connected to nodal methods. This method is used to solve two different problems: the simplified PN equations and the exact transport equation using an angular PN expansion. Numerical results are presented for some standard benchmarks and the methods are compared.

  13. Coarse mesh methods for the transport calculation in the Cronos reactor code

    Energy Technology Data Exchange (ETDEWEB)

    Fedon-Magnaud, C.; Lautard, J.J.; Akherraz, B.; Wu, G.J.

    1995-12-31

    Homogeneous transport methods have recently been implemented in the kinetic code CRONOS, dedicated mainly to PWR calculations. Two different methods are presented. The first one is based on the even-parity flux formalism and uses finite element spatial discretization and a discrete ordinates angular approximation; the treatment of anisotropic scattering is described in detail. The second method uses the odd flux as the main unknown and is closely connected to nodal methods. This method is used to solve two different problems: the simplified PN equations and the exact transport equation using an angular PN expansion. Numerical results are presented for some standard benchmarks and the methods are compared. (authors). 18 refs., 3 tabs.

  14. Minimum Redundancy Coding for Uncertain Sources

    CERN Document Server

    Baer, Michael B; Charalambous, Charalambos D

    2011-01-01

    Consider the set of source distributions within a fixed maximum relative entropy with respect to a given nominal distribution. Lossless source coding over this relative entropy ball can be approached in more than one way. A problem previously considered is finding a minimax average length source code. The minimizing players are the codeword lengths --- real numbers for arithmetic codes, integers for prefix codes --- while the maximizing players are the uncertain source distributions. Another traditional minimizing objective is the first one considered here, maximum (average) redundancy. This problem reduces to an extension of an exponential Huffman objective treated in the literature but heretofore without direct practical application. In addition to these, this paper examines the related problem of maximal minimax pointwise redundancy and the problem considered by Gawrychowski and Gagie, which, for a sufficiently small relative entropy ball, is equivalent to minimax redundancy. One can consider both Shannon-...

  15. Source reconstruction for neutron coded-aperture imaging: A sparse method.

    Science.gov (United States)

    Wang, Dongming; Hu, Huasi; Zhang, Fengna; Jia, Qinggang

    2017-08-01

    Neutron coded-aperture imaging has been developed as an important diagnostic for inertial fusion studies in recent decades. It is used to measure the distribution of neutrons produced in deuterium-tritium plasma. Source reconstruction is an essential part of coded-aperture imaging. In this paper, we applied a sparse reconstruction method to neutron source reconstruction. This method takes advantage of the sparsity of the source image. Monte Carlo neutron transport simulations were performed to obtain the system response. An interpolation method was used when obtaining the spatially variant point spread functions at each point of the source, in order to reduce the number of point spread functions that need to be calculated by the Monte Carlo method. Source reconstructions from simulated images show that the sparse reconstruction method can achieve a higher signal-to-noise ratio and less distortion at a relatively high statistical noise level.

  16. Comparison of different methods used in integral codes to model coagulation of aerosols

    Science.gov (United States)

    Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.

    2013-09-01

    The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.

  17. SQA of finite element method (FEM) codes used for analyses of pit storage/transport packages

    Energy Technology Data Exchange (ETDEWEB)

    Russel, E. [Lawrence Livermore National Lab., CA (United States)

    1997-11-01

    This report contains viewgraphs on the software quality assurance of finite element method codes used for analyses of pit storage and transport packages. The methodology utilizes ISO 9000-3 (Guidelines for the application of ISO 9001 to the development, supply, and maintenance of software) to establish well-defined software engineering processes that consistently maintain high-quality management approaches.

  18. A Method of Training Code Books

    Institute of Scientific and Technical Information of China (English)

    徐军; 叶澄清

    2000-01-01

    This paper proposes a new codebook training method for vector quantization (VQ). After discussing various VQ schemes, it presents a mathematical model and a training algorithm. Experimental results on image encoding using this algorithm demonstrate the efficiency of the training.

  19. A lossless compression method for medical image sequences using JPEG-LS and interframe coding.

    Science.gov (United States)

    Miaou, Shaou-Gang; Ke, Fu-Sheng; Chen, Shu-Ching

    2009-09-01

    Hospitals and medical centers produce an enormous amount of digital medical images every day, especially in the form of image sequences, which requires considerable storage space. One solution could be the application of lossless compression. Among the available methods, JPEG-LS has excellent coding performance; however, it compresses only a single picture with intracoding and does not utilize the interframe correlation among pictures. Therefore, this paper proposes a method that combines JPEG-LS with interframe coding using motion vectors to enhance the compression performance beyond using JPEG-LS alone. Since the interframe correlation between two adjacent images in a medical image sequence is usually not as high as that in a general video image sequence, the interframe coding is activated only when the interframe correlation is high enough. On six capsule endoscope image sequences under test, the proposed method achieves average compression gains of 13.3% and 26.3% over the methods of using JPEG-LS and JPEG2000 alone, respectively. Similarly, for an MRI image sequence, coding gains of 77.5% and 86.5% are correspondingly obtained.
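
    The correlation-gated switch between intra- and inter-frame coding can be sketched as follows (illustrative only: the threshold value, the use of plain frame differencing in place of motion-compensated prediction, and the omission of the actual JPEG-LS codec are all simplifying assumptions):

```python
import numpy as np

CORR_THRESHOLD = 0.9  # hypothetical gate; tuned per modality in practice

def choose_mode(prev, cur):
    """Return the residual to hand to the lossless (e.g. JPEG-LS) coder.

    If two adjacent frames are strongly correlated, code the difference
    frame (inter mode); otherwise fall back to intra coding of the frame.
    """
    r = np.corrcoef(prev.ravel(), cur.ravel())[0, 1]
    if r >= CORR_THRESHOLD:
        return 'inter', cur.astype(np.int16) - prev.astype(np.int16)
    return 'intra', cur

prev = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cur = np.clip(prev.astype(int) + np.random.randint(-2, 3, prev.shape), 0, 255)
mode, residual = choose_mode(prev, cur.astype(np.uint8))
print(mode)  # 'inter' for these nearly identical frames
```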

  20. Building America Guidance for Identifying and Overcoming Code, Standard, and Rating Method Barriers

    Energy Technology Data Exchange (ETDEWEB)

    Cole, P. C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Halverson, M. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-09-01

    This guidance document was prepared using the input from the meeting summarized in the draft CSI Roadmap to provide Building America research teams and partners with specific information and approaches to identifying and overcoming potential barriers to Building America innovations arising in and/or stemming from codes, standards, and rating methods.

  1. SPATIALLY SCALABLE RESOLUTION IMAGE CODING METHOD WITH MEMORY OPTIMIZATION BASED ON WAVELET TRANSFORM

    Institute of Scientific and Technical Information of China (English)

    Wang Na; Zhang Li; Zhou Xiao'an; Jia Chuanying; Li Xia

    2005-01-01

    This letter exploits fundamental characteristics of a wavelet-transformed image to form a progressive, octave-based spatial resolution representation. Each wavelet subband is coded with a zeroblock and quadtree partitioning ordering scheme using a memory optimization technique. The method proposed in this letter is of low complexity and efficient for Internet plug-in software.

  2. Performance evaluation of moment-method codes on an Intel iPSC/860 hypercube computer

    Energy Technology Data Exchange (ETDEWEB)

    Klimkowski, K.; Ling, H. (Texas Univ., Austin (United States))

    1993-09-01

    An analytical evaluation is conducted of the performance of a moment-method code on a parallel computer, treating algorithmic complexity costs within the framework of matrix size and the 'subblock-size' matrix-partitioning parameter. A scaled-efficiencies analysis is conducted for the measured computation times of the matrix-fill operation and LU decomposition. 6 refs.

  3. Kernel sparse coding method for automatic target recognition in infrared imagery using covariance descriptor

    Science.gov (United States)

    Yang, Chunwei; Yao, Junping; Sun, Dawei; Wang, Shicheng; Liu, Huaping

    2016-05-01

    Automatic target recognition in infrared imagery is a challenging problem. In this paper, a kernel sparse coding method for infrared target recognition using a covariance descriptor is proposed. First, a covariance descriptor combining the gray intensity and gradient information of the infrared target is extracted as a feature representation. Then, because the covariance descriptor lies on a non-Euclidean manifold, kernel sparse coding theory is used to handle it. We verify the efficacy of the proposed algorithm in terms of confusion matrices on real images consisting of seven categories of infrared vehicle targets.
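
    A region covariance descriptor of the kind described can be sketched as follows (a minimal version using intensity and first-order gradient magnitudes; the paper's exact feature set may differ):

```python
import numpy as np

def covariance_descriptor(patch):
    """Covariance descriptor of an image region.

    Each pixel is mapped to a feature vector (intensity, |dI/dx|, |dI/dy|);
    the descriptor is the 3x3 covariance of these vectors over the region.
    It lies on the manifold of symmetric positive-definite matrices, hence
    the need for kernel methods rather than plain Euclidean sparse coding.
    """
    patch = patch.astype(float)
    gy, gx = np.gradient(patch)
    feats = np.stack([patch.ravel(), np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(feats)  # 3 x 3 symmetric positive (semi-)definite matrix

patch = np.random.rand(32, 32)
C = covariance_descriptor(patch)
print(C.shape)  # (3, 3)
```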

  4. Development of Variational Data Assimilation Methods for the MoSST Geodynamo Code

    Science.gov (United States)

    Egbert, G. D.; Erofeeva, S.; Kuang, W.; Tangborn, A.; Dimitrova, L. L.

    2013-12-01

    A range of different approaches to data assimilation for Earth's geodynamo are now being pursued, from sequential schemes based on approximate covariances of various degrees of sophistication, to variational methods for models of varying degrees of physical completeness. While variational methods require development of adjoint (and possibly tangent linear) variants of the forward code, a challenging programming task for a fully self-consistent modern dynamo code, this approach may ultimately offer significant advantages. For example, adjoint-based variational approaches allow initial, boundary, and forcing terms to be explicitly adjusted to combine data from modern and historical eras into dynamically consistent maps of the core state, including flow, buoyancy, and magnetic fields. Here we describe development of tangent linear and adjoint codes for the Modular Scalable Self-consistent Three-dimensional (MoSST) geodynamo simulator, and present initial results from simple synthetic data assimilation experiments. Our approach has been to develop the exact linearization and adjoint of the actual discrete functions represented by the computer code. To do this we use a divide-and-conquer approach: the code is decomposed as the sequential action of a series of linear and non-linear procedures on specified inputs. Non-linear procedures are first linearized about a pre-computed input background state (derived by running the non-linear forward model), and a tangent linear time-step code is developed. For small perturbations of the initial state, the linearization appears to remain valid for times comparable to the secular variation time scale. Adjoints for each linear (or linearized) procedure were then developed and tested separately (for symmetry), and then merged into adjoint procedures of increasing complexity. We have completed development of the adjoint for a serial version of the MoSST code, explore the time limits of forward-operator linearization, and discuss next steps.

  5. A Statistical Method without Training Step for the Classification of Coding Frame in Transcriptome Sequences.

    Science.gov (United States)

    Carels, Nicolas; Frías, Diego

    2013-01-01

    In this study, we investigated the modalities of coding open reading frame (cORF) classification of expressed sequence tags (EST) by using the universal feature method (UFM). The UFM algorithm is based on the scoring of purine bias (Rrr) and stop codon frequencies. UFM classifies ORFs as coding or non-coding through a score based on 5 factors: (i) stop codon frequency; (ii) the product of the probabilities of purines occurring in the three positions of nucleotide triplets; (iii) the product of the probabilities of Cytosine (C), Guanine (G), and Adenine (A) occurring in the 1st, 2nd, and 3rd positions of triplets, respectively; (iv) the probabilities of a G occurring in the 1st and 2nd positions of triplets; and (v) the probabilities of a T occurring in the 1st and an A in the 2nd position of triplets. Because UFM is based on primary determinants of coding sequences that are conserved throughout the biosphere, it is suitable for cORF classification of any sequence in eukaryote transcriptomes without prior knowledge. Considering the protein sequences of the Protein Data Bank (RCSB PDB or more simply PDB) as a reference, we found that UFM classifies cORFs of ≥200 bp (if the coding strand is known) and cORFs of ≥300 bp (if the coding strand is unknown), and releases them in their coding strand and coding frame, which allows their automatic translation into protein sequences with a success rate equal to or higher than 95%. We first established the statistical parameters of UFM using ESTs from Plasmodium falciparum, Arabidopsis thaliana, Oryza sativa, Zea mays, Drosophila melanogaster, Homo sapiens and Chlamydomonas reinhardtii in reference to the protein sequences of PDB. Second, we showed that the success rate of cORF classification using UFM is expected to apply to approximately 95% of higher eukaryote genes that encode for proteins. Third, we used UFM in combination with CAP3 to assemble large EST samples into cORFs that we used to analyze transcriptome
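
    The stop-codon factor (i) is the simplest of the five listed ingredients to illustrate; a toy sketch (only one component of the UFM score, with an arbitrary example sequence):

```python
STOPS = {'TAA', 'TAG', 'TGA'}

def stop_codon_freq(seq, frame):
    """Fraction of triplets that are stop codons when seq is read in the
    given frame (0, 1 or 2). A genuine cORF read in its coding frame
    should contain internal stop codons only rarely."""
    codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
    return sum(c in STOPS for c in codons) / max(len(codons), 1)

seq = 'ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG'
for f in range(3):
    print(f, round(stop_codon_freq(seq, f), 3))
```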

  6. Impulse feature extraction method for machinery fault detection using fusion sparse coding and online dictionary learning

    Directory of Open Access Journals (Sweden)

    Deng Sen

    2015-04-01

    Impulse components in vibration signals are important fault features of complex machines. The sparse coding (SC) algorithm has been introduced as an impulse feature extraction method, but it cannot guarantee satisfactory performance when processing vibration signals with heavy background noise. In this paper, a method based on fusion sparse coding (FSC) and online dictionary learning is proposed to extract impulses efficiently. First, a fusion scheme for different sparse coding algorithms is presented to ensure higher reconstruction accuracy. Then, an improved online dictionary learning method using the FSC scheme is established to obtain a redundant dictionary that can capture the specific features of training samples and reconstruct a sparse approximation of the vibration signals. Simulations show that this method performs well in solving sparse coefficients and training a redundant dictionary compared with other methods. Lastly, the proposed method is applied to processing aircraft engine rotor vibration signals. Compared with other feature extraction approaches, our method extracts impulse features accurately and efficiently from heavily noisy vibration signals, which provides significant support for machinery fault detection and diagnosis.
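
    Sparse coding here means representing a signal with only a few atoms of a redundant dictionary. As a bare-bones illustration of that building block only, the following sketch runs plain orthogonal matching pursuit on random stand-in data; it is not the paper's fusion scheme or its online dictionary learning.

        # Orthogonal matching pursuit: greedily pick dictionary atoms, then
        # refit the coefficients on the selected support by least squares.
        import numpy as np

        def omp(D, y, n_nonzero):
            residual, support = y.copy(), []
            for _ in range(n_nonzero):
                support.append(int(np.argmax(np.abs(D.T @ residual))))
                coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ coef
            x = np.zeros(D.shape[1])
            x[support] = coef
            return x

        rng = np.random.default_rng(1)
        D = rng.standard_normal((64, 256))
        D /= np.linalg.norm(D, axis=0)        # unit-norm atoms
        y = 2.0 * D[:, 10] - 1.5 * D[:, 200]  # a 2-sparse test signal
        print(np.nonzero(omp(D, y, 2))[0])    # -> [ 10 200]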

  7. Building America Guidance for Identifying and Overcoming Code, Standard, and Rating Method Barriers

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Pamala C.; Halverson, Mark A.

    2013-09-01

    The U.S. Department of Energy’s (DOE) Building America program implemented a new Codes and Standards Innovation (CSI) Team in 2013. The Team’s mission is to assist Building America (BA) research teams and partners in identifying and resolving conflicts between BA innovations and the various codes and standards that govern the construction of residences. A CSI Roadmap was completed in September 2013. This guidance document was prepared using the information in the CSI Roadmap to provide BA research teams and partners with specific information and approaches for identifying and overcoming potential barriers to BA innovations arising in and/or stemming from codes, standards, and rating methods. For more information on the BA CSI team, please email: CSITeam@pnnl.gov

  8. Development of improved methods for the LWR lattice physics code EPRI-CELL

    Energy Technology Data Exchange (ETDEWEB)

    Williams, M.L.; Wright, R.Q.; Barhen, J.

    1982-07-01

    A number of improvements have been made by ORNL to the lattice physics code EPRI-CELL (E-C), which is widely used by utilities for analysis of power reactors. The code modifications were made mainly in the thermal and epithermal routines and resulted in improved reactor physics approximations and more efficient running times. The improvements in the thermal flux calculation included implementation of a group-dependent rebalance procedure to accelerate the iterative process and a more rigorous calculation of interval-to-interval collision probabilities. The epithermal resonance shielding methods used in the code have been extensively studied to determine their major approximations and to examine the sensitivity of computed results to these approximations. The study has resulted in several improvements in the original methodology.

  9. Two-Level Bregman Method for MRI Reconstruction with Graph Regularized Sparse Coding

    Institute of Scientific and Technical Information of China (English)

    刘且根; 卢红阳; 张明辉

    2016-01-01

    In this paper, a two-level Bregman method is presented with graph regularized sparse coding for highly undersampled magnetic resonance image reconstruction. The graph regularized sparse coding is incorporated with the two-level Bregman iterative procedure, which enforces the sampled data constraints in the outer level and updates the dictionary and sparse representation in the inner level. Graph regularized sparse coding and simple dictionary updating applied in the inner minimization make the proposed algorithm converge within a relatively small number of iterations. Experimental results demonstrate that the proposed algorithm can consistently reconstruct both simulated MR images and real MR data efficiently, and outperforms the current state-of-the-art approaches in terms of visual comparisons and quantitative measures.

  10. Advancements and performance of iterative methods in industrial applications codes on CRAY parallel/vector supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Poole, G.; Heroux, M. [Engineering Applications Group, Eagan, MN (United States)]

    1994-12-31

    This paper will focus on recent work in two widely used industrial applications codes with iterative methods. The ANSYS program, a general purpose finite element code widely used in structural analysis applications, has now added an iterative solver option. Some results are given from real applications comparing performance with the traditional parallel/vector frontal solver used in ANSYS. Discussion of the applicability of iterative solvers as general purpose solvers covers the topics of robustness, memory requirements and CPU performance. The FIDAP program is a widely used CFD code which uses iterative solvers routinely. A brief description of the preconditioners used and some performance enhancements for CRAY parallel/vector systems is given. The solution of large-scale applications in structures and CFD includes examples from industry problems solved on CRAY systems.

  11. Artificial viscosity method for the design of supercritical airfoils. [Analysis code H

    Energy Technology Data Exchange (ETDEWEB)

    McFadden, G.B.

    1979-07-01

    The need for increased efficiency in the use of our energy resources has stimulated applied research in many areas. Recently progress has been made in the field of aerodynamics, where the development of the supercritical wing promises significant savings in the fuel consumption of aircraft operating near the speed of sound. Computational transonic aerodynamics has proved to be a useful tool in the design and evaluation of these wings. A numerical technique for the design of two-dimensional supercritical wing sections with low wave drag is presented. The method is actually a design mode of the analysis code H developed by Bauer, Garabedian, and Korn. This analysis code gives excellent agreement with experimental results and is used widely by the aircraft industry. The addition of a conceptually simple design version should make this code even more useful to the engineering public.

  12. An efficient simulation method of a cyclotron sector-focusing magnet using 2D Poisson code

    Energy Technology Data Exchange (ETDEWEB)

    Gad Elmowla, Khaled Mohamed M; Chai, Jong Seo, E-mail: jschai@skku.edu; Yeon, Yeong H; Kim, Sangbum; Ghergherehchi, Mitra

    2016-10-01

    In this paper we discuss design simulations of a spiral magnet using a 2D Poisson code. The Independent Layers Method (ILM) is a new technique that was developed to enable the use of a two-dimensional simulation code to calculate a non-symmetric 3-dimensional magnetic field. In ILM, the magnet pole is divided into successive independent layers, and the hill and valley shape around the azimuthal direction is implemented using a reference magnet. The normalization of the magnetic field in the reference magnet produces a profile that can be multiplied by the maximum magnetic field in the hill magnet, which is a dipole magnet made of the hills at the same radius. Both magnets are then calculated using the 2D Poisson SUPERFISH code. A fully three-dimensional magnetic field is then produced using TOSCA for the original spiral magnet, and the comparison of the 2D and 3D results shows good agreement between the two.

  13. Moving object detection method using H.263 video coded data for remote surveillance systems

    Science.gov (United States)

    Kohno, Atsushi; Hata, Toshihiko; Ozaki, Minoru

    1998-12-01

    This paper describes a moving object detection method using H.263 coded data. For video surveillance systems, it is necessary to detect unusual states because there are a lot of cameras in the system and video surveillance is tedious in normal states. We examine the information extracted from H.263 coded data and propose a method of detecting alarm events from that information. Our method consists of two steps. In the first step, using motion vector information, a moving object can be detected based on the vector's size and the similarities between the vectors in one frame and the two adjoining frames. In the second step, using DCT coefficients, the detection errors caused by the change of the luminous intensity can be eliminated based on the characteristics of the H.263's DCT coefficients. Thus moving objects are detected by analyzing the motion vectors and DCT coefficients, and we present some experimental results that show the effectiveness of our method.
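
    The first step lends itself to a compact illustration. The sketch below flags a block as moving when its motion vector is long enough and keeps a similar direction in the two adjoining frames; the vectors and thresholds are toy values, not data parsed from an actual H.263 bitstream.

        # Toy version of the first detection step: vector size plus direction
        # similarity across the two adjoining frames suppresses random noise.
        import math

        def cosine(u, v):
            nu, nv = math.hypot(*u), math.hypot(*v)
            return (u[0] * v[0] + u[1] * v[1]) / (nu * nv) if nu and nv else 0.0

        def is_moving(vec, prev_vec, next_vec, size_thresh=2.0, sim_thresh=0.8):
            if math.hypot(*vec) < size_thresh:
                return False
            return cosine(vec, prev_vec) > sim_thresh and cosine(vec, next_vec) > sim_thresh

        print(is_moving((3.0, 1.0), (2.5, 1.2), (3.2, 0.8)))   # True
        print(is_moving((3.0, 1.0), (-2.0, 0.5), (3.2, 0.8)))  # False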

  14. ADAPTIVE ERROR-LIMITING METHOD SUITABLE FOR THE WALSH CODE SHUTTING MULTIPLEXING IN THE MINE MONITOR SYSTEM

    Institute of Scientific and Technical Information of China (English)

    Zhu Liping

    1996-01-01

    Through an analysis of the Walsh modulation and demodulation process, an adaptive error-limiting method suitable for Walsh code shutting multiplexing in the mine monitor system is advanced in this article. It is proved by theoretical analysis and circuit experiments that this method is easy to implement and can not only improve the quality of information transmission but also meet the requirement of the system patrol test time without increasing system investment.

  15. Design of Huffman Coding Based on MATLAB

    Institute of Scientific and Technical Information of China (English)

    林寿光

    2010-01-01

    Using the principles and methods of Huffman compression coding, compression coding programs for two images were designed with MATLAB, yielding the compression information and the Huffman code table; the pixel data of the compressed images and the compression ratio were then analyzed. The results show that Huffman coding is a lossless compression code.

  16. Research and Implementation of Huffman Coding for Voice PCM

    Institute of Scientific and Technical Information of China (English)

    邓翔宇

    2010-01-01

    Traditional analog voice PCM uses fixed-length folded binary coding, which has a high bit rate and requires substantial system resources for transmission and processing. Starting from the probability distribution of speech signal sample values, and building on the non-uniform quantization of PCM coding, this paper applies variable-length coding to the 13-segment A-law companding characteristic, reducing the entropy redundancy of the source and achieving compression coding with no change in the speech MOS value. In addition, EDA techniques are used to carry out a CPLD-based hardware design of the compression circuit.

  17. A New IP Traceback Scheme Based on Huffman Codes

    Institute of Scientific and Technical Information of China (English)

    罗莉莉; 谢冬青; 占勇军; 周再红

    2007-01-01

    To counter DDoS attacks, researchers have proposed various IP traceback techniques for finding the true source IP address of attack packets, but current traceback schemes suffer from problems such as the storage required during packet marking, the accuracy of the traced source, and the number of packets needed for traceback. A new traceback scheme based on Huffman codes is proposed, which saves a large amount of storage space and improves space efficiency. When DoS (denial of service) or DDoS attacks occur, it can react quickly: the attack path can be reconstructed from a single attack packet, the exact attack source can be located, and the damage and losses caused by the attack are thus reduced to a minimum.

  18. Research on Huffman Coding of Radar Video

    Institute of Scientific and Technical Information of China (English)

    韩菲

    2004-01-01

    This paper discusses the data compression algorithms used in radar video transmission and describes the use of Huffman codes to encode and decode radar data, in order to handle high-volume radar data transmission and to meet the requirements of real-time, high-speed, lossless transmission of radar video image data.

  19. An Improved Algorithm for Finding Huffman Codes

    Institute of Scientific and Technical Information of China (English)

    徐凤生; 钱爱增; 李海军; 李天志

    2007-01-01

    The optimal binary tree is a very important data structure with wide applications in communications, engineering, and software development. Building on a discussion of optimal binary trees, this paper improves the storage structures used for the optimal binary tree and its Huffman codes, and proposes an algorithm for computing Huffman codes. The effectiveness of the algorithm is verified by a corresponding C program.
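
    The record's C program is not reproduced here, but the greedy construction such algorithms improve on is standard. A minimal Python sketch of Huffman code construction with a binary heap, on illustrative frequencies:

        # Huffman code construction; the counter makes heap entries totally
        # ordered so that trees themselves are never compared.
        import heapq
        from itertools import count

        def huffman_codes(freqs):
            tie = count()
            # Entries: (weight, tiebreak, tree); a tree is a symbol or (left, right).
            heap = [(w, next(tie), sym) for sym, w in freqs.items()]
            heapq.heapify(heap)
            while len(heap) > 1:
                w1, _, t1 = heapq.heappop(heap)
                w2, _, t2 = heapq.heappop(heap)
                heapq.heappush(heap, (w1 + w2, next(tie), (t1, t2)))
            codes = {}
            def walk(tree, prefix):
                if isinstance(tree, tuple):      # internal node
                    walk(tree[0], prefix + "0")
                    walk(tree[1], prefix + "1")
                else:                            # leaf symbol
                    codes[tree] = prefix or "0"  # single-symbol edge case
            walk(heap[0][2], "")
            return codes

        print(huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))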

  20. Introduction into scientific work methods - a necessity when performance-based codes are introduced

    DEFF Research Database (Denmark)

    Dederichs, Anne; Sørensen, Lars Schiøtt

    The introduction of performance-based codes in Denmark in 2004 requires new competences from people working with different aspects of fire safety in industry and the public sector. This abstract presents an attempt at reducing problems with handling and analysing the mathematical methods and CFD models when applying performance-based codes. This is done within the educational program "Master of Fire Safety Engineering" at the Department of Civil Engineering at the Technical University of Denmark. It was found that the students had general problems with academic methods. Therefore, a new educational element is introduced as a result of this investigation. The course is positioned in the program prior to the work on the final project. In the course a mini project is worked out, which gives the students extra training in academic methods.

  1. Coding Methods for the NMF Approach to Speech Recognition and Vocabulary Acquisition

    Directory of Open Access Journals (Sweden)

    Meng Sun

    2012-12-01

    This paper aims at improving the accuracy of the non-negative matrix factorization approach to word learning and recognition of spoken utterances. We propose and compare three coding methods to alleviate quantization errors involved in the vector quantization (VQ) of speech spectra: multi-codebooks, soft VQ and adaptive VQ. We evaluate on the task of spotting a vocabulary of 50 keywords in continuous speech. The error rates of multi-codebooks decreased with increasing number of codebooks, but the accuracy leveled off around 5 to 10 codebooks. Soft VQ and adaptive VQ made a better trade-off between the required memory and the accuracy. The best of the proposed methods reduce the error rate to 1.2% from the 1.9% obtained with a single codebook. The coding methods and the model framework may also prove useful for applications such as topic discovery/detection and mining of sequential patterns.

  2. Identification of Radar Pull-off Jamming Based on Huffman Tree and Backward Cloud Model

    Institute of Scientific and Technical Information of China (English)

    李芳; 熊英; 唐斌

    2013-01-01

    A new method is presented to improve the identification rate of radar jamming, applied to the identification of radar pull-off jamming and based on a Huffman tree and the backward cloud model. First, a parameter library is built according to the jamming library; an identification model based on a Huffman tree is then established. Finally, the degree of membership is used to identify the jamming at each node of the tree. Compared with traditional methods, the presented method deals well with the randomness and fuzziness of jamming caused by noise, and identifies jamming effectively when parameter ranges partially overlap.

  3. A WYNER-ZIV VIDEO CODING METHOD UTILIZING MIXTURE CORRELATION NOISE MODEL

    Institute of Scientific and Technical Information of China (English)

    Hu Xiaofei; Zhu Xiuchang

    2012-01-01

    In Wyner-Ziv (WZ) Distributed Video Coding (DVC), a correlation noise model is often used to describe the error distribution between the WZ frame and the side information. The accuracy of this model directly influences the performance of the video coder. A mixture correlation noise model in the Discrete Cosine Transform (DCT) domain for WZ video coding is established in this paper. Different correlation noise estimation methods are used for the direct-current and alternating-current coefficients. A parameter estimation method based on the expectation-maximization algorithm is used to estimate the Laplace distribution center of the direct-current frequency band, and a Mixture Laplace-Uniform Distribution Model (MLUDM) is established for the alternating-current coefficients. Experimental results suggest that the proposed mixture correlation noise model can describe the heavy tail and sudden changes of the noise accurately at high rate, and that it significantly improves the coding efficiency compared with the noise model presented by DIStributed COding for Video sERvices (DISCOVER).

  4. Investigate Methods to Decrease Compilation Time - AX-Program Code Group Computer Science R&D Project

    Energy Technology Data Exchange (ETDEWEB)

    Cottom, T

    2003-06-11

    Large simulation codes can take on the order of hours to compile from scratch. In Kull, which uses generic programming techniques, a significant portion of the time is spent generating and compiling template instantiations. I would like to investigate methods that would decrease the overall compilation time for large codes. These would be methods which could then be applied, hopefully, as standard practice to any large code. Success is measured by the overall decrease in wall clock time a developer spends waiting for an executable. Analyzing the make system of a slow to build project can benefit all developers on the project. Taking the time to analyze the number of processors used over the life of the build and restructuring the system to maximize the parallelization can significantly reduce build times. Distributing the build across multiple machines with the same configuration can increase the number of available processors for building and can help evenly balance the load. Becoming familiar with compiler options can have its benefits as well. The time improvements of the sum can be significant. Initial compilation time for Kull on OSF1 was approximately 3 hours. Final time on OSF1 after completion is 16 minutes. Initial compilation time for Kull on AIX was approximately 2 hours. Final time on AIX after completion is 25 minutes. Developers now spend 3 hours less waiting for a Kull executable on OSF1, and 2 hours less on AIX platforms. In the eyes of many Kull code developers, the project was a huge success.

  5. A Comparison of Natural Language Processing Methods for Automated Coding of Motivational Interviewing.

    Science.gov (United States)

    Tanana, Michael; Hallgren, Kevin A; Imel, Zac E; Atkins, David C; Srikumar, Vivek

    2016-06-01

    Motivational interviewing (MI) is an efficacious treatment for substance use disorders and other problem behaviors. Studies on MI fidelity and mechanisms of change typically use human raters to code therapy sessions, which requires considerable time, training, and financial costs. Natural language processing techniques have recently been utilized for coding MI sessions using machine learning techniques, rather than human coders, and preliminary results have suggested these methods hold promise. The current study extends this previous work by introducing two natural language processing models for automatically coding MI sessions via computer. The two models differ in the way they semantically represent session content, utilizing either 1) simple discrete sentence features (DSF model) and 2) more complex recursive neural networks (RNN model). Utterance- and session-level predictions from these models were compared to ratings provided by human coders using a large sample of MI sessions (N=341 sessions; 78,977 clinician and client talk turns) from 6 MI studies. Results show that the DSF model generally had slightly better performance compared to the RNN model. The DSF model had "good" or higher utterance-level agreement with human coders (Cohen's kappa>0.60) for open and closed questions, affirm, giving information, and follow/neutral (all therapist codes); considerably higher agreement was obtained for session-level indices, and many estimates were competitive with human-to-human agreement. However, there was poor agreement for client change talk, client sustain talk, and therapist MI-inconsistent behaviors. Natural language processing methods provide accurate representations of human derived behavioral codes and could offer substantial improvements to the efficiency and scale in which MI mechanisms of change research and fidelity monitoring are conducted.
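
    For readers unfamiliar with the agreement measure quoted above, the sketch below computes Cohen's kappa for two label sequences. The labels echo the MI code names but are invented toy data, not material from the study.

        # Cohen's kappa: observed agreement corrected for the agreement two
        # raters would reach by chance given their marginal label frequencies.
        from collections import Counter

        def cohens_kappa(rater_a, rater_b):
            n = len(rater_a)
            observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            freq_a, freq_b = Counter(rater_a), Counter(rater_b)
            expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
            return (observed - expected) / (1 - expected)

        human = ["open_q", "affirm", "closed_q", "affirm", "follow"]
        model = ["open_q", "affirm", "closed_q", "follow", "follow"]
        print(round(cohens_kappa(human, model), 3))  # 0.737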

  6. A novel quantum LSB-based steganography method using the Gray code for colored quantum images

    Science.gov (United States)

    Heidari, Shahrokh; Farzadnia, Ehsan

    2017-10-01

    As one of the prevalent data-hiding techniques, steganography is defined as the act of imperceptibly concealing secret information in a cover multimedia object encompassing text, image, video and audio, so that interaction between the sender and the receiver can take place without anybody except the receiver being able to figure out the secret data. In this approach, a quantum LSB-based steganography method utilizing the Gray code for quantum RGB images is investigated. The method uses the Gray code to accommodate two secret qubits in the 3 LSBs of each pixel simultaneously, according to reference tables. Experimental results, analyzed in the MATLAB environment, show that the present scheme performs well and is more secure and applicable than the previous one found in the literature.
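
    The binary-reflected Gray code that such schemes rely on is easy to state in code. A generic sketch of the integer conversions follows; it is not the paper's quantum embedding circuitry.

        # Gray code conversions: adjacent integers map to codes that differ in
        # exactly one bit, which is what makes the ordering useful for LSB embedding.
        def to_gray(n: int) -> int:
            return n ^ (n >> 1)

        def from_gray(g: int) -> int:
            n = 0
            while g:
                n ^= g
                g >>= 1
            return n

        for v in range(8):
            print(v, format(to_gray(v), "03b"), from_gray(to_gray(v)))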

  7. A method for detecting code security vulnerability based on variables tracking with validated-tree

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    SQL injection poses a major threat to the application-level security of databases, and there is no systematic solution to these attacks. Different from traditional run-time security strategies such as IDS and firewalls, this paper focuses on a solution at the outset: it presents a method to find vulnerabilities by analyzing the source code. The concept of a validated tree is developed to track variables referenced by database operations in scripts. By checking whether these variables are influenced by outside inputs, the database operations are proved to be secure or not. This method has the advantages of high accuracy and efficiency as well as low cost, and it is universal to any type of web application platform. It is implemented in the software Code Vulnerability of SQL Injection Detector (CVSID). The validity and efficiency are demonstrated with an example.

  8. A novel coding method for gene mutation correction during protein translation process.

    Science.gov (United States)

    Zhang, Lei; Tian, Fengchun; Wang, Shiyuan; Liu, Xiao

    2012-03-07

    In gene expression, gene mutations often have a negative effect on protein translation in prokaryotic organisms. Taking the influence of gene mutation into consideration, a novel method based on error-correction coding theory is proposed in this paper for the modeling and detection of translation initiation. In the proposed method, combined with a one-dimensional codebook from block coding, a decoding method based on the minimum Hamming distance is designed for the analysis of translation efficiency. The results show that the proposed method can effectively recognize biologically significant regions such as the Shine-Dalgarno region within mRNA leader sequences. Also, a global analysis of single-base and multiple-base mutations of the Shine-Dalgarno sequences is established. Compared with other published experimental methods for mutation analysis, translation initiation is not disturbed by multiple-base mutations when the proposed method is used, which shows the effectiveness of this method in improving translation efficiency and its biological relevance for genetic regulatory systems.
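
    A minimum-Hamming-distance decoder of the general kind described takes only a few lines; the codebook below is an invented toy, not the paper's one-dimensional codebook.

        # Decode by choosing the codeword at minimum Hamming distance from the
        # received word; ties resolve to the first codeword encountered.
        def hamming(a: str, b: str) -> int:
            return sum(x != y for x, y in zip(a, b))

        def decode(received: str, codebook: dict) -> str:
            best = min(codebook, key=lambda w: hamming(received, w))
            return codebook[best]

        codebook = {"AGGAGG": "strong_SD", "AGGAGA": "medium_SD", "AAGAGG": "weak_SD"}
        print(decode("AGGCGG", codebook))  # one substitution away from "AGGAGG"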

  9. Ultraspectral sounder data compression using the Tunstall coding

    Science.gov (United States)

    Wei, Shih-Chieh; Huang, Bormin; Gu, Lingjia

    2007-09-01

    In an error-prone environment the compression of ultraspectral sounder data is vulnerable to error propagation. Tunstall coding is a variable-to-fixed length code which compresses data by mapping a variable number of source symbols to fixed-length codewords. It avoids the resynchronization difficulty encountered in fixed-to-variable length codes such as Huffman coding and arithmetic coding. This paper explores the use of Tunstall coding to reduce error propagation in ultraspectral sounder data compression. The results show that our Tunstall approach has a favorable compression ratio compared with JPEG-2000, 3D SPIHT, JPEG-LS, CALIC and CCSDS IDC 5/3. It also has less error propagation compared with JPEG-2000.
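
    Tunstall coding grows a parse dictionary by repeatedly expanding the most probable word until the dictionary fills the fixed codeword space. A minimal sketch of that textbook construction (not the paper's exact scheme):

        # Tunstall dictionary: expand the most probable parse word with every
        # source symbol until no further expansion fits in 2**codeword_bits.
        import heapq

        def tunstall_dictionary(probs, codeword_bits):
            heap = [(-p, sym) for sym, p in probs.items()]  # max-heap via negation
            heapq.heapify(heap)
            # Each expansion removes one word and adds len(probs) new ones.
            while len(heap) + len(probs) - 1 <= 2 ** codeword_bits:
                p, word = heapq.heappop(heap)
                for sym, q in probs.items():
                    heapq.heappush(heap, (p * q, word + sym))
            return sorted(word for _, word in heap)

        # Every dictionary word then maps to a fixed-length codeword by index.
        words = tunstall_dictionary({"a": 0.7, "b": 0.2, "c": 0.1}, codeword_bits=3)
        print({w: format(i, "03b") for i, w in enumerate(words)})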

  10. Lossless quantum coding in many-letter spaces

    CERN Document Server

    Boström, K J

    2000-01-01

    Based on the concept of many-letter theory, a general characterization of quantum codes using the Kraus representation is given. Compression codes are defined by their property of decreasing the average information content of a given a priori message ensemble. Lossless quantum codes, in contrast to lossy codes, provide the retrieval of the original input states with perfect fidelity. A general lossless coding scheme is given that translates between two quantum alphabets. It is shown that this scheme is never compressive. Furthermore, a lossless quantum coding scheme, analogous to the classical Huffman scheme but different from the Braunstein scheme, is implemented, which provides optimal compression. Motivated by the concept of lossless quantum compression, an observable is defined that measures the amount of compressible quantum information contained in a particular message with respect to a given a priori message ensemble. The average of this observable yields the von Neumann entropy, which is finally es...

  11. A new design criterion and construction method for space-time trellis codes based on classification of error events

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The known design criteria for Space-Time Trellis Codes (STTC) on slow Rayleigh fading channels are the rank, determinant and trace criteria. These criteria are disadvantageous both in operation and in performance. By classifying the error events of STTC, a new criterion is presented for slow Rayleigh fading channels. Based on this criterion, an effective and straightforward multi-step method is proposed to construct codes with better performance. This method reduces the search computation to a sufficiently small amount. Simulation results show that the codes found by computer search have the same or even better performance than previously reported codes.

  12. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    Energy Technology Data Exchange (ETDEWEB)

    Courau, T.; Plagne, L.; Ponicot, A. [EDF R and D, 1, Avenue du General de Gaulle, 92141 Clamart Cedex (France); Sjoden, G. [Nuclear and Radiological Engineering, Georgia Inst. of Technology, Atlanta, GA 30332 (United States)

    2012-07-01

    When dealing with nuclear reactor calculation schemes, three-dimensional (3D) transport-based reference solutions are essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the k-eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)

  13. The piecewise-linear predictor-corrector code - A Lagrangian-remap method for astrophysical flows

    Science.gov (United States)

    Lufkin, Eric A.; Hawley, John F.

    1993-01-01

    We describe a time-explicit finite-difference algorithm for solving the nonlinear fluid equations. The method is similar to existing Eulerian schemes in its use of operator-splitting and artificial viscosity, except that we solve the Lagrangian equations of motion with a predictor-corrector and then remap onto a fixed Eulerian grid. The remap is formulated to eliminate errors associated with coordinate singularities, with a general prescription for remaps of arbitrary order. We perform a comprehensive series of tests on standard problems. Self-convergence tests show that the code has a second-order rate of convergence in smooth, two-dimensional flow, with pressure forces, gravity, and curvilinear geometry included. While not as accurate on idealized problems as high-order Riemann-solving schemes, the predictor-corrector Lagrangian-remap code has great flexibility for application to a variety of astrophysical problems.

  14. Implementation of discrete transfer radiation method into swift computational fluid dynamics code

    Directory of Open Access Journals (Sweden)

    Baburić Mario

    2004-01-01

    Computational Fluid Dynamics (CFD) has developed into a powerful tool widely used in science, technology and industrial design applications whenever fluid flow, heat transfer, combustion, or other complicated physical processes are involved. During decades of development of CFD codes, scientists were writing their own codes, which had to include not only the model of the processes of interest, but also a whole spectrum of necessary CFD procedures, numerical techniques, pre-processing and post-processing. That arrested much of the scientists' effort in work that has been copied many times over and was not actually producing added value. The arrival of commercial CFD codes brought relief to many engineers, who could now use the user-function approach for modelling purposes, entrusting the application to do the rest of the work. This paper shows the implementation of the Discrete Transfer Radiation Method into AVL's commercial CFD code SWIFT with the help of user-defined functions. A few standard verification test cases were performed first, in order to check the implementation of the radiation method itself, where comparisons with available analytic solutions could be performed. Afterwards, validation was done by simulating the combustion in the experimental furnace at IJmuiden (the Netherlands), for which experimental measurements were available. The importance of radiation prediction in such real-size furnaces is proved again to be substantial, as radiation itself accounts for the major fraction of the overall heat transfer. The oil-combustion model used in the simulations is the semi-empirical one developed at the Power Engineering Department, which is suitable for a wide range of typical oil flames.

  15. A Pretreatment Method for Quick Response Codes

    Institute of Scientific and Technical Information of China (English)

    杨佳丽; 高美凤

    2011-01-01

    Aiming at the problems of uneven illumination, rotation, and distortion in Quick Response (QR) code images captured by a camera, this paper proposes an adaptive threshold method for binarization. Combining the Roberts operator with wavelet modulus maxima, a new edge detection algorithm overcomes the noise sensitivity of traditional algorithms and accurately extracts the edge information of the QR code. The QR code is located according to the shortest distances from the four vertices of the quadrilateral to lines parallel to the diagonals, and a bilinear interpolation algorithm is used to correct the distorted QR code. Experimental results show that the method is reliable.

  16. Pressure vessels design methods using the codes, fracture mechanics and multiaxial fatigue

    Directory of Open Access Journals (Sweden)

    Fatima Majid

    2016-10-01

    This paper gives an overview of pressure vessel (PV) design methods, to help new engineers and new researchers understand the basics and obtain a summary of the know-how of PV design. This understanding will help them select the appropriate method. There are several types of tanks, distinguished by operating pressure, temperature, and the safety system to predict. The selection of one or another of these tanks depends on environmental regulations, the geographic location and the materials used. The design theory of PVs is detailed in various codes and standards, such as ASME, API and CODAP, as well as in material selection standards such as EN 10025 or EN 10028. While designing a PV, the fatigue of its material must be addressed through the different methods and theories found in the literature and in specific codes. In this work, fatigue lifetime calculation through fracture mechanics theory and the different methods found in ASME VIII DIV 2, API 579-1 and EN 13445-3, Annex B, is detailed, with a comparison between these methods. Uniaxial fatigue has been treated in great detail in many articles in the literature, while the multiaxial effect has not received the attention it deserves. In this paper we discuss biaxial fatigue due to cyclic pressure in thick-walled PVs, together with an overview of multiaxial fatigue in PVs.

  17. DIFFERENTIAL AMPLITUDE PHASE SHIFT KEYING: A NEW MODULATION METHOD FOR TURBO CODE IN DIGITAL RADIO BROADCASTING

    Institute of Scientific and Technical Information of China (English)

    Khalid H. Sayhood; Wu Lenan

    2003-01-01

    The multilevel modulation techniques of M-Differential Amplitude Phase Shift Keying (DAPSK) have been proposed in combination with a Turbo code scheme for digital radio broadcasting bands below 30 MHz. A comparison of this modulation method with channel coding in Additive White Gaussian Noise (AWGN) and multi-path fading channels is presented. The analysis provides an iterative decoding of the Turbo code.

  18. A New Region-of-interest Coding Method to Control the Relative Quality of Progressive Decoded Images

    Institute of Scientific and Technical Information of China (English)

    LI Ji-liang; FANG Xiang-zhong; ZHANG Dong-dong

    2007-01-01

    Based on the ideas of controlling relative quality and rearranging bitplanes, a new ROI coding method for JPEG2000 is proposed, which shifts and rearranges bitplanes in units of bitplane groups. It can code an arbitrarily shaped ROI without shape coding, and can preserve an almost arbitrary percentage of the background information. It can also control the relative quality of progressively decoded images. In addition, it is easy to implement and has low computational cost.

  19. Fast multiple run_before decoding method for efficient implementation of an H.264/advanced video coding context-adaptive variable length coding decoder

    Science.gov (United States)

    Ki, Dae Wook; Kim, Jae Ho

    2013-07-01

    We propose a fast new multiple run_before decoding method for context-adaptive variable length coding (CAVLC). The transform coefficients are coded using CAVLC, in which the run_before symbols are generated for a 4×4 block input. To speed up CAVLC decoding, the run_before symbols need to be decoded in parallel. We implemented a new CAVLC table for simultaneous decoding of up to three run_befores. The simulation results show a total speed-up factor between 144% and 205% over various resolutions and quantization steps.

  1. An asynchronous writing method for restart files in the GYSELA code in preparation for exascale systems

    Directory of Open Access Journals (Sweden)

    Thomine O.

    2013-12-01

    The present work deals with an optimization procedure developed in the full-f global GYrokinetic SEmi-LAgrangian code (GYSELA). Optimizing the writing of the restart files is necessary to reduce the computational impact of crashes. These files require a very large memory space, particularly so for very large mesh sizes. The limited bandwidth of the data pipe between the computing nodes and the storage system induces a non-scalable part in the GYSELA code, which increases with the mesh size; indeed, the time to transfer restart data to files depends linearly on the file size. A non-synchronized file-writing procedure is therefore crucial. A new GYSELA module has been developed. This asynchronous procedure allows frequent writing of the restart files, whilst preventing a severe slowdown due to the limited writing bandwidth. The method has been extended to generate a checksum control of the restart files, and to automatically rerun the code in case of a crash of any cause.
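
    The idea of decoupling the computation from slow restart writing can be shown compactly. Below is a toy Python sketch using a background thread plus a checksum file; the names, file layout, and pickle-based snapshot are illustrative assumptions, not GYSELA's actual implementation.

        # Snapshot the state, then let a background thread do the slow write
        # and checksum while the main loop keeps computing.
        import hashlib, pickle, threading

        def write_restart(state, path):
            data = pickle.dumps(state)
            with open(path, "wb") as f:
                f.write(data)
            with open(path + ".sha256", "w") as f:
                f.write(hashlib.sha256(data).hexdigest())  # integrity check after a crash

        def checkpoint_async(state, path):
            snapshot = pickle.loads(pickle.dumps(state))   # cheap deep copy of the state
            t = threading.Thread(target=write_restart, args=(snapshot, path))
            t.start()
            return t                                       # join() before the next checkpoint

        state = {"step": 42, "field": [0.0] * 1000}
        writer = checkpoint_async(state, "restart_000042.pkl")
        state["step"] += 1    # the simulation continues while the file is written
        writer.join()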

  2. Development of simulation code for MOX dissolution using silver-mediated electrochemical method (Contract research)

    Energy Technology Data Exchange (ETDEWEB)

    Kida, Takashi; Umeda, Miki; Sugikawa, Susumu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2003-03-01

    MOX dissolution using the silver-mediated electrochemical method will be employed for the preparation of plutonium nitrate solution in the criticality safety experiments in the Nuclear Fuel Cycle Safety Engineering Research Facility (NUCEF). A simulation code for the MOX dissolution has been developed for operational support. The present report describes the outline of the simulation code, a comparison with the experimental data, and a parameter study on the MOX dissolution. The principle of this code is based on Zundelevich's model for PuO2 dissolution using Ag(II). The influence of nitrous acid on the material balance of Ag(II) is taken into consideration, and the surface area of MOX powder is evaluated from the particle size distribution in this model. The comparison with experimental data was carried out to confirm the validity of this model. It was confirmed that the behavior of MOX dissolution could adequately be simulated using an appropriate MOX dissolution rate constant. It was found from the parameter studies that MOX particle size was the major factor governing the dissolution rate. (author)

  3. Coding Model and Mapping Method of Spherical Diamond Discrete Grids Based on Icosahedron

    Directory of Open Access Journals (Sweden)

    LIN Bingxian

    2016-12-01

    Discrete Global Grids (DGG) provide a fundamental environment for the organization and management of global-scale spatial data. A DGG's encoding scheme, which avoids coordinate transformations between different coordinate reference frames and reduces the complexity of spatial analysis, contributes greatly to the multi-scale expression and unified modeling of spatial data. Compared with other kinds of DGGs, the Diamond Discrete Global Grid (DDGG) based on the icosahedron benefits the integration and expression of spherical spatial data thanks to its much better geometric properties. However, its structure is more complicated than that of a DDGG on the octahedron, because the edges of its initial diamonds cannot align with meridians and parallels. New challenges are thus posed for the construction of a hierarchical encoding system and a mapping relationship with geographic coordinates. On this issue, this paper presents a DDGG coding system based on the Hilbert curve and designs conversion methods between codes and geographical coordinates. The study results indicate that this Hilbert-curve-based encoding system can express scale and location information implicitly, exploiting the similarity between the DDGG and a planar grid, and balances the efficiency and accuracy of conversion between codes and geographical coordinates, in order to support the modeling, integrated management and spatial analysis of massive global spatial data.
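
    For reference, the planar building block of such a scheme, mapping a grid cell to its position along a Hilbert curve, can be written compactly. The sketch below is the standard rotate-and-accumulate algorithm, not the paper's spherical diamond encoding.

        # Map cell (x, y) on a 2**order x 2**order grid to its Hilbert index.
        def xy_to_hilbert(x: int, y: int, order: int) -> int:
            n = 2 ** order
            d = 0
            s = n // 2
            while s > 0:
                rx = 1 if x & s else 0
                ry = 1 if y & s else 0
                d += s * s * ((3 * rx) ^ ry)
                # Rotate/flip the quadrant so sub-curves keep a consistent orientation.
                if ry == 0:
                    if rx == 1:
                        x, y = n - 1 - x, n - 1 - y
                    x, y = y, x
                s //= 2
            return d

        print([xy_to_hilbert(x, y, 1) for x, y in [(0, 0), (0, 1), (1, 1), (1, 0)]])  # [0, 1, 2, 3]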

  4. Parallel processing method for two-dimensional Sn transport code DOT3.5

    Energy Technology Data Exchange (ETDEWEB)

    Uematsu, Mikio [Toshiba Corp., Kawasaki, Kanagawa (Japan)

    1998-03-01

    A parallel processing method for the two-dimensional Sn transport code DOT3.5 has been developed to achieve a drastic reduction of computation time. In the proposed method, parallelization is made with angular domain decomposition and/or space domain decomposition. Calculational speedup for parallel processing by angular domain decomposition is achieved by minimizing the frequency of communications between processing elements. As for parallel processing by space domain decomposition, a two-step rescaling method consisting of segmentwise rescaling and the ordinary pointwise rescaling has been developed to accelerate convergence, which would otherwise be degraded because of discontinuity at the segment boundaries. The developed method was examined on a Sun workstation using the PVM message-passing library, and sufficient speedup was observed. (author)

  5. Simple PSF based method for pupil phase mask's optimization in wavefront coding system

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wen-zi; CHEN Yan-ping; ZHAO Ting-yu; YE Zi; YU Fei-hong

    2007-01-01

    By applying the wavefront coding technique to an optical system, the depth of focus can be greatly increased. Several complicated methods, such as the Fisher-information-based method, have previously been used to optimize for the best pupil phase mask under ideal conditions. Here, a simple point spread function (PSF) based method, in which only the standard deviation is used to evaluate the PSF stability over the depth of focus, is taken to optimize the coefficients of the pupil phase mask in practical optical systems. Results of imaging simulations for optical systems with and without the pupil phase mask are presented, and the sharpness of the image is calculated for comparison. The optimized results show better and much more stable imaging quality than the original system, without changing the position of the image plane.

  6. Implementation of the probability table method in a continuous-energy Monte Carlo code system

    Energy Technology Data Exchange (ETDEWEB)

    Sutton, T.M.; Brown, F.B. [Lockheed Martin Corp., Schenectady, NY (United States)]

    1998-10-01

    RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5.

  7. Resin Matrix/Fiber Reinforced Composite Material, Ⅱ: Method of Solution and Computer Code

    Institute of Scientific and Technical Information of China (English)

    Li Chensha(李辰砂); Jiao Caishan; Liu Ying; Wang Zhengping; Wang Hongjie; Cao Maosheng

    2003-01-01

    According to a mathematical model which describes the curing process of composites constructed from continuous fiber-reinforced, thermosetting resin matrix prepreg materials, and the consolidation of the composites, a solution method for the model is devised and a computer code is developed which, for flat-plate composites cured by a specified cure cycle, provides the evolution of the temperature distribution, the cure reaction process in the resin, the resin flow and fiber stresses inside the composite, the void variation, and the residual stress distribution.

  8. Apparatus, Method, and Computer Program for a Resolution-Enhanced Pseudo-Noise Code Technique

    Science.gov (United States)

    Li, Steven X. (Inventor)

    2015-01-01

    An apparatus, method, and computer program for a resolution-enhanced pseudo-noise coding technique for 3D imaging is provided. In one embodiment, a pattern generator may generate a plurality of unique patterns for a return-to-zero signal. A plurality of laser diodes may be configured such that each laser diode transmits a return-to-zero signal to an object, each signal carrying one unique pattern from the plurality of unique patterns to distinguish the transmitted signals from one another.

  9. Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction

    Science.gov (United States)

    Oliver, A. Brandon; Amar, Adam J.

    2016-01-01

    Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.

  10. Time-dependent Multi-group Multidimensional Relativistic Radiative Transfer Code Based On Spherical Harmonic Discrete Ordinate Method

    CERN Document Server

    Tominaga, Nozomu; Blinnikov, Sergei I

    2015-01-01

    We develop a time-dependent multi-group multidimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM) that evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with a ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering to the publicly available SHDOM code. Our code adopts a mixed frame approach; the source function is evaluated in the comoving frame whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated with various test problems and comparisons with results of a relativistic Monte Carlo code. These validations confirm that the code ...

  11. A Design Method of Code Correlation Reference Waveform in GNSS Based on Least-Squares Fitting.

    Science.gov (United States)

    Xu, Chengtao; Liu, Zhe; Tang, Xiaomei; Wang, Feixue

    2016-07-29

    The multipath effect is one of the main error sources in the Global Satellite Navigation Systems (GNSSs). The code correlation reference waveform (CCRW) technique is an effective multipath mitigation algorithm for the binary phase shift keying (BPSK) signal. However, it encounters the false lock problem in code tracking, when applied to the binary offset carrier (BOC) signals. A least-squares approximation method of the CCRW design scheme is proposed, utilizing the truncated singular value decomposition method. This algorithm was performed for the BPSK signal, BOC(1,1) signal, BOC(2,1) signal, BOC(6,1) and BOC(7,1) signal. The approximation results of CCRWs were presented. Furthermore, the performances of the approximation results are analyzed in terms of the multipath error envelope and the tracking jitter. The results show that the proposed method can realize coherent and non-coherent CCRW discriminators without false lock points. Generally, there is performance degradation in the tracking jitter, if compared to the CCRW discriminator. However, the performance promotions in the multipath error envelope for the BOC(1,1) and BPSK signals makes the discriminator attractive, and it can be applied to high-order BOC signals.
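
    The regularization ingredient named above is easy to demonstrate in isolation. Below is a generic numpy sketch of least-squares fitting with a truncated singular value decomposition; the matrix, data, and cutoff are placeholders, not the paper's waveform design problem.

        # Truncated-SVD least squares: drop small singular values that would
        # otherwise amplify noise in the fitted coefficients.
        import numpy as np

        def tsvd_lstsq(A, b, rank):
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            k = min(rank, int(np.sum(s > 0)))
            return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

        rng = np.random.default_rng(0)
        A = rng.standard_normal((100, 10)) @ np.diag(10.0 ** -np.arange(10))  # ill-conditioned
        b = A @ np.ones(10) + 1e-6 * rng.standard_normal(100)
        print(np.round(tsvd_lstsq(A, b, rank=5), 3))  # leading coefficients close to 1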

  12. Application of Fast Multipole Methods to the NASA Fast Scattering Code

    Science.gov (United States)

    Dunn, Mark H.; Tinetti, Ana F.

    2008-01-01

    The NASA Fast Scattering Code (FSC) is a versatile noise prediction program designed to conduct aeroacoustic noise reduction studies. The equivalent source method is used to solve an exterior Helmholtz boundary value problem with an impedance type boundary condition. The solution process in FSC v2.0 requires direct manipulation of a large, dense system of linear equations, limiting the applicability of the code to small scales and/or moderate excitation frequencies. Recent advances in the use of Fast Multipole Methods (FMM) for solving scattering problems, coupled with sparse linear algebra techniques, suggest that a substantial reduction in computer resource utilization over conventional solution approaches can be obtained. Implementation of the single level FMM (SLFMM) and a variant of the Conjugate Gradient Method (CGM) into the FSC is discussed in this paper. The culmination of this effort, FSC v3.0, was used to generate solutions for three configurations of interest. Benchmarking against previously obtained simulations indicates that a twenty-fold reduction in computational memory and up to a four-fold reduction in computer time have been achieved on a single processor.

  13. A novel method involving Matlab coding to determine the distribution of a collimated ionizing radiation beam

    Science.gov (United States)

    Ioan, M.-R.

    2016-08-01

    In experiments involving ionizing radiation, precise knowledge of the parameters involved is a very important task. Some of these experiments involve electromagnetic ionizing radiation such as gamma rays and X rays; others make use of energetic charged or uncharged particles of small dimensions such as protons, electrons and neutrons, while in other cases larger accelerated particles such as helium or deuterium nuclei are used. In all these cases the beam used to hit an exposed target must first be collimated and precisely characterized. In this paper, a novel method involving Matlab coding is proposed to determine the distribution of the collimated beam. The method was implemented by placing Pyrex glass test samples in the beam whose distribution and dimensions must be determined, taking high quality pictures of them, and then digitally processing the resulting images. With this method, information regarding the doses absorbed in the volume of the exposed samples is obtained as well.

  14. Simple Strehl ratio based method for pupil phase mask's optimization in wavefront coding system

    Institute of Scientific and Technical Information of China (English)

    Wenzi Zhang; Yanping Chen; Tingyu Zhao; Zi Ye; Feihong Yu

    2006-01-01

    By applying the wavefront coding technique to an optical system, the depth of focus can be greatly increased. Several complicated methods have previously been used to optimize for the best pupil phase mask under ideal conditions. Here a simple Strehl-ratio-based method, in which only the standard deviation is used to evaluate the Strehl ratio stability over the depth of focus, is applied to optimize the coefficients of the pupil phase mask in practical optical systems. Results of imaging simulations for optical systems with and without the pupil phase mask are presented, and the sharpness of the image is calculated for comparison. The optimized pupil phase mask shows good results in extending the depth of focus.

  15. Optimization of prostate cancer treatment plans using the adjoint transport method and discrete ordinates codes

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, S.; Henderson, D.L. [Dept. of Medical Physics, Madison, WI (United States); Thomadsen, B.R. [Dept. of Medical Physics and Dept. of Human Oncology, Madison (United States)

    2001-07-01

    Interstitial brachytherapy is a type of radiation treatment in which radioactive sources are implanted directly into cancerous tissue. Determination of the dose delivered to tissue by photons emitted from the implanted seeds is an important step in the treatment planning process. In this paper we investigate the use of the discrete ordinates method and the adjoint method to calculate the absorbed dose in the regions of interest. MIP (mixed-integer programming) is used to determine the optimal seed distribution that conforms the prescribed dose to the tumor and delivers minimal dose to the sensitive structures. The patient treatment procedure consists of three steps: (1) image acquisition with transrectal ultrasound (TRUS) and assessment of the region of interest, (2) adjoint flux computation with a discrete ordinates code for inverse dose calculation, and (3) optimization with the MIP branch-and-bound method.

  16. A dynamical systems proof of Kraft-McMillan inequality and its converse for prefix-free codes

    Science.gov (United States)

    Nagaraj, Nithin

    2009-03-01

    Uniquely decodable codes are central to lossless data compression in both classical and quantum communication systems. The Kraft-McMillan inequality is a basic result in information theory which gives a necessary and sufficient condition for a code to be uniquely decodable, and it also has a quantum analogue. In this letter, we provide a novel dynamical systems proof of this inequality and its converse for prefix-free codes (no codeword is a prefix of another; the popular Huffman codes are an example). For constrained sources, the problem is still open.
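
    The inequality itself is a one-line computation: a binary prefix-free code with codeword lengths l_i exists if and only if sum(2**-l_i) <= 1. A small numeric check:

        # Kraft sum for a list of codeword lengths over a given code alphabet size.
        def kraft_sum(lengths, arity=2):
            return sum(arity ** -l for l in lengths)

        print(kraft_sum([1, 2, 3, 3]))  # 1.0   -> lengths of a valid code, e.g. {0, 10, 110, 111}
        print(kraft_sum([1, 2, 2, 3]))  # 1.125 -> no uniquely decodable code has these lengths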

  18. Non-coding RNA detection methods combined to improve usability, reproducibility and precision

    Directory of Open Access Journals (Sweden)

    Kreikemeyer Bernd

    2010-09-01

    Full Text Available Abstract Background Non-coding RNAs gain more attention as their diverse roles in many cellular processes are discovered. At the same time, the need for efficient computational prediction of ncRNAs increases with the pace of sequencing technology. Existing tools are based on various approaches and techniques, but none of them provides a reliable ncRNA detector yet. Consequently, a natural approach is to combine existing tools. Due to a lack of standard input and output formats, combination and comparison of existing tools is difficult. Also, for genomic scans they often need to be incorporated in detection workflows using custom scripts, which decreases transparency and reproducibility. Results We developed a Java-based framework to integrate existing tools and methods for ncRNA detection. This framework enables users to construct transparent detection workflows and to combine and compare different methods efficiently. We demonstrate the effectiveness of combining detection methods in case studies with the small genomes of Escherichia coli, Listeria monocytogenes and Streptococcus pyogenes. With the combined method, we gained 10% to 20% precision for sensitivities from 30% to 80%. Further, we investigated Streptococcus pyogenes for novel ncRNAs. Using multiple methods--integrated by our framework--we determined four highly probable candidates. We verified all four candidates experimentally using RT-PCR. Conclusions We have created an extensible framework for practical, transparent and reproducible combination and comparison of ncRNA detection methods. We have proven the effectiveness of this approach in tests and by guiding experiments to find new ncRNAs. The software is freely available under the GNU General Public License (GPL), version 3, at http://www.sbi.uni-rostock.de/moses along with source code, screen shots, examples and tutorial material.

  19. Non-coding RNA detection methods combined to improve usability, reproducibility and precision.

    Science.gov (United States)

    Raasch, Peter; Schmitz, Ulf; Patenge, Nadja; Vera, Julio; Kreikemeyer, Bernd; Wolkenhauer, Olaf

    2010-09-29

    Non-coding RNAs gain more attention as their diverse roles in many cellular processes are discovered. At the same time, the need for efficient computational prediction of ncRNAs increases with the pace of sequencing technology. Existing tools are based on various approaches and techniques, but none of them provides a reliable ncRNA detector yet. Consequently, a natural approach is to combine existing tools. Due to a lack of standard input and output formats, combination and comparison of existing tools is difficult. Also, for genomic scans they often need to be incorporated in detection workflows using custom scripts, which decreases transparency and reproducibility. We developed a Java-based framework to integrate existing tools and methods for ncRNA detection. This framework enables users to construct transparent detection workflows and to combine and compare different methods efficiently. We demonstrate the effectiveness of combining detection methods in case studies with the small genomes of Escherichia coli, Listeria monocytogenes and Streptococcus pyogenes. With the combined method, we gained 10% to 20% precision for sensitivities from 30% to 80%. Further, we investigated Streptococcus pyogenes for novel ncRNAs. Using multiple methods--integrated by our framework--we determined four highly probable candidates. We verified all four candidates experimentally using RT-PCR. We have created an extensible framework for practical, transparent and reproducible combination and comparison of ncRNA detection methods. We have proven the effectiveness of this approach in tests and by guiding experiments to find new ncRNAs. The software is freely available under the GNU General Public License (GPL), version 3 at http://www.sbi.uni-rostock.de/moses along with source code, screen shots, examples and tutorial material.

  20. A Simple Method for Guaranteeing ECG Quality in Real-Time Wavelet Lossy Coding

    Directory of Open Access Journals (Sweden)

    Alesanco Álvaro

    2007-01-01

    Full Text Available Guaranteeing ECG signal quality in wavelet lossy compression methods is essential for clinical acceptability of reconstructed signals. In this paper, we present a simple and efficient method for guaranteeing reconstruction quality measured using the new distortion index wavelet weighted PRD (WWPRD), which reflects in a more accurate way the real clinical distortion of the compressed signal. The method is based on the wavelet transform and its subsequent coding using the set partitioning in hierarchical trees (SPIHT) algorithm. By thresholding the WWPRD in the wavelet transform domain, a very precise reconstruction error can be achieved, thus enabling clinically useful reconstructed signals to be obtained. Because of its computational efficiency, the method is suitable for real-time operation, thus being very useful for real-time telecardiology systems. The method is extensively tested using two different ECG databases. The results led to an excellent conclusion: the method controls the quality in a very accurate way, not only in mean value but also with a low standard deviation. The effects of ECG baseline wandering as well as noise in compression are also discussed. Baseline wandering provokes negative effects when using the WWPRD index to guarantee quality, because this index is normalized by the signal energy; therefore, it is better to remove it before compression. On the other hand, noise causes an increase in signal energy, provoking an artificial increase of the coded signal bit rate. Clinical validation by cardiologists showed that a WWPRD value of 10% preserves the signal quality, and thus they recommend this value to be used in the compression system.

  1. A Simple Method for Guaranteeing ECG Quality in Real-Time Wavelet Lossy Coding

    Directory of Open Access Journals (Sweden)

    José García

    2007-01-01

    Full Text Available Guaranteeing ECG signal quality in wavelet lossy compression methods is essential for clinical acceptability of reconstructed signals. In this paper, we present a simple and efficient method for guaranteeing reconstruction quality measured using the new distortion index wavelet weighted PRD (WWPRD), which reflects in a more accurate way the real clinical distortion of the compressed signal. The method is based on the wavelet transform and its subsequent coding using the set partitioning in hierarchical trees (SPIHT) algorithm. By thresholding the WWPRD in the wavelet transform domain, a very precise reconstruction error can be achieved, thus enabling clinically useful reconstructed signals to be obtained. Because of its computational efficiency, the method is suitable for real-time operation, thus being very useful for real-time telecardiology systems. The method is extensively tested using two different ECG databases. The results led to an excellent conclusion: the method controls the quality in a very accurate way, not only in mean value but also with a low standard deviation. The effects of ECG baseline wandering as well as noise in compression are also discussed. Baseline wandering provokes negative effects when using the WWPRD index to guarantee quality, because this index is normalized by the signal energy; therefore, it is better to remove it before compression. On the other hand, noise causes an increase in signal energy, provoking an artificial increase of the coded signal bit rate. Clinical validation by cardiologists showed that a WWPRD value of 10% preserves the signal quality, and thus they recommend this value to be used in the compression system.
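
    A minimal sketch of the distortion index used in this record, assuming WWPRD is the energy-weighted sum of per-subband PRDs of the wavelet coefficients (the paper's exact weighting may differ); the pywt package supplies the wavelet transform, and the wavelet and level choices are illustrative.

        import numpy as np
        import pywt

        def wwprd(original, reconstructed, wavelet="bior4.4", level=5):
            # Per-subband PRD of the wavelet coefficients, weighted by each
            # subband's share of the total wavelet energy (illustrative).
            co = pywt.wavedec(original, wavelet, level=level)
            cr = pywt.wavedec(reconstructed, wavelet, level=level)
            energies = np.array([np.sum(c**2) for c in co])
            weights = energies / energies.sum()
            prds = [np.sqrt(np.sum((o - r)**2) / np.sum(o**2))
                    for o, r in zip(co, cr)]
            return 100.0 * float(np.dot(weights, prds))

        # quality guarantee: grow the coder's bit budget until
        # wwprd(x, x_hat) <= 10 (%), the threshold recommended above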

  2. A molecular method for a qualitative analysis of potentially coding sequences of DNA

    Directory of Open Access Journals (Sweden)

    M. L. Christoffersen

    Full Text Available Total sequence phylogenies have low information content. Ordinary misconceptions are that character quality can be ignored and that relying on computer algorithms is enough. Despite widespread preference for a posteriori methods of character evaluation, a priori methods are necessary to produce transformation series that are independent of tree topologies. We propose a stepwise qualitative method for analyzing protein sequences. Informative codons are selected, alternative amino acid transformation series are analyzed, and most parsimonious transformations are hypothesized. We conduct four phylogenetic analyses of philodryanine snakes. The tree based on all nucleotides produces the least resolution. Trees based on the exclusion of third positions, on an asymmetric step matrix, and on our protocol produce similar results. Our method eliminates noise by hypothesizing explicit transformation series for each informative protein-coding amino acid. This parallels qualitative methods for morphological data, in which only characters successfully interpreted in a phylogenetic context are used in cladistic analyses. The method allows utilizing the character information contained in the original sequence alignment and, therefore, has higher resolution in inferring a phylogenetic tree than some traditional methods (such as distance methods).

  3. Three-dimensional surface reconstruction via a robust binary shape-coded structured light method

    Science.gov (United States)

    Tang, Suming; Zhang, Xu; Song, Zhan; Jiang, Hualie; Nie, Lei

    2017-01-01

    A binary shape-coded structured light method for single-shot three-dimensional reconstruction is presented. The projected structured pattern is composed of eight geometrical shapes with a coding window size of 2×2. The pattern element is designed as a rhombus with embedded geometrical shapes. The pattern feature point is defined as the intersection of two adjacent rhombic shapes, and a multitemplate-based feature detector is presented for its robust detection and precise localization. Based on the extracted grid points, a topological structure is constructed to separate the pattern elements from the obtained image. In the decoding stage, a training dataset is first established from training samples that are collected from a variety of target surfaces. Then, the deep neural network technique is applied for the classification of pattern elements. Finally, an error correction algorithm based on epipolar and neighboring constraints is introduced to refine the decoding results. The experimental results show that the proposed method not only achieves high measurement precision but also has strong robustness to surface color and texture.

  4. Euler technology assessment for preliminary aircraft design employing OVERFLOW code with multiblock structured-grid method

    Science.gov (United States)

    Treiber, David A.; Muilenburg, Dennis A.

    1995-01-01

    The viability of applying a state-of-the-art Euler code to calculate the aerodynamic forces and moments through maximum lift coefficient for a generic sharp-edge configuration is assessed. The OVERFLOW code, a method employing overset (Chimera) grids, was used to conduct mesh refinement studies, a wind-tunnel wall sensitivity study, and a 22-run computational matrix of flow conditions, including sideslip runs and geometry variations. The subject configuration was a generic wing-body-tail geometry with chined forebody, swept wing leading-edge, and deflected part-span leading-edge flap. The analysis showed that the Euler method is adequate for capturing some of the non-linear aerodynamic effects resulting from leading-edge and forebody vortices produced at high angle-of-attack through C(sub Lmax). Computed forces and moments, as well as surface pressures, match well enough that useful preliminary design information can be extracted. Vortex burst effects and vortex interactions with the configuration are also investigated.

  5. Advanced Error-Control Coding Methods Enhance Reliability of Transmission and Storage Data Systems

    Directory of Open Access Journals (Sweden)

    K. Vlcek

    2003-04-01

    Full Text Available Iterative coding systems are currently being proposed and accepted for many future systems such as next generation wireless transmission and storage systems. The text gives an overview of the state of the art in iteratively decoded FEC (Forward Error-Correction) error-control systems. Such systems can typically achieve capacity to within a fraction of a dB at unprecedented low complexities. Using a single code requires very long code words, and consequently a very complex coding system. One way around the problem of achieving very low error probabilities is the application of turbo coding (TC). A general model of a concatenated coding system is shown, and an algorithm of turbo codes is given in this paper.

  6. Image sensor dark current elimination system based on DPCM-Huffman compression algorithm

    Institute of Scientific and Technical Information of China (English)

    钟晨峰; 李斌桥; 徐江涛

    2012-01-01

    To reliably handle data storage during dark current elimination in an image sensor, a data-compression dark current elimination system based on the DPCM-Huffman compression algorithm is presented and realized in hardware. Before the system operates, the image sensor dark current data are compressed with the combined DPCM and Huffman coding algorithm, and the compressed data are stored in Flash memory. While the image sensor is working, the data read from the memory are decoded with Huffman and DPCM decoding, and the dark current of the image sensor is finally eliminated. Experiments show that, taking a CMOS image sensor with a resolution of 256×256 as an example, the system achieves a compression ratio of 3.12, the data volume is reduced to 32% of the original, and the working speed is increased threefold. The proposed system therefore improves the data compression ratio, preserves data accuracy and increases the working speed of the image sensor, making it a compression system well suited to dark current elimination in CMOS image sensors.
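
    The DPCM-plus-Huffman pipeline described above can be sketched in a few lines of Python; the block below builds a static Huffman code over the DPCM residuals with heapq. The sample row is made up, and the paper's hardware implementation and exact code tables will of course differ.

        import heapq
        from collections import Counter

        def dpcm(samples):
            # First-order DPCM: keep the first sample, then differences.
            return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

        def huffman_code(symbols):
            # Map each symbol to a binary codeword (static Huffman).
            heap = [[n, i, {s: ""}]
                    for i, (s, n) in enumerate(Counter(symbols).items())]
            heapq.heapify(heap)
            while len(heap) > 1:
                lo, hi = heapq.heappop(heap), heapq.heappop(heap)
                for s in lo[2]: lo[2][s] = "0" + lo[2][s]
                for s in hi[2]: hi[2][s] = "1" + hi[2][s]
                lo[2].update(hi[2])
                heapq.heappush(heap, [lo[0] + hi[0], lo[1], lo[2]])
            return heap[0][2]

        dark = [100, 101, 101, 102, 101, 101, 100, 101]  # toy dark-current row
        residuals = dpcm(dark)
        code = huffman_code(residuals)
        bits = "".join(code[r] for r in residuals)
        print(code, len(bits), "bits")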

  7. Algorithmic complexity for psychology: a user-friendly implementation of the coding theorem method.

    Science.gov (United States)

    Gauvrit, Nicolas; Singmann, Henrik; Soler-Toscano, Fernando; Zenil, Hector

    2016-03-01

    Kolmogorov-Chaitin complexity has long been believed to be impossible to approximate when it comes to short sequences (e.g. of length 5-50). However, with the newly developed coding theorem method the complexity of strings of length 2-11 can now be numerically estimated. We present the theoretical basis of algorithmic complexity for short strings (ACSS) and describe an R-package providing functions based on ACSS that will cover psychologists' needs and improve upon previous methods in three ways: (1) ACSS is now available not only for binary strings, but for strings based on up to 9 different symbols, (2) ACSS no longer requires time-consuming computing, and (3) a new approach based on ACSS gives access to an estimation of the complexity of strings of any length. Finally, three illustrative examples show how these tools can be applied to psychology.

  8. A morphology screen coding anti-counterfeiting method based on visual characteristics

    Institute of Scientific and Technical Information of China (English)

    ZHAO Li-long; GU Ze-cang; FANG Zhi-liang

    2008-01-01

    A paper information anti-counterfeiting and tamper-proofing method based on human visual characteristics and morphology screen coding technology is proposed. By controlling the morphological distribution of the screen dot-matrix, a warning mark and hidden information are embedded in the background texture. Because of the differences between human vision and the duplication characteristics of copy machines, a warning mark that cannot be discriminated by human eyes emerges after copying. Tampered or fake certificates can be verified by comparing the embedded information extracted from a scanned image of the certificate with the plain text printed on the certificate. This method has been applied in many bills and certificates. Experimental results show that the identification accuracy is above 98%.

  9. A New Method Of Gene Coding For A Genetic Algorithm Designed For Parametric Optimization

    Directory of Open Access Journals (Sweden)

    Radu BELEA

    2003-12-01

    Full Text Available In a parametric optimization problem, the genes code the real parameters of the fitness function. There are two coding techniques known under the names of binary coded genes and real coded genes. The comparison between these two has been a controversial subject since the first papers on parametric optimization appeared. An objective analysis of the advantages and disadvantages of the two coding techniques is difficult when information in different formats is compared. The present paper suggests a gene coding technique that uses the same format for both binary coded genes and real coded genes. After unifying the representation of the real parameters, the following criterion is applied: the differences between the two techniques are statistically measured by the effect of the genetic operators on some randomly generated individuals.
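
    To make the two encodings concrete, here is a minimal sketch; the parameter range and bit width are illustrative assumptions, not the unified format proposed in the paper.

        def encode_binary(x, lo=-5.12, hi=5.12, bits=16):
            # Quantize a real parameter onto a fixed-point binary gene.
            q = round((x - lo) / (hi - lo) * (2**bits - 1))
            return format(q, f"0{bits}b")

        def decode_binary(gene, lo=-5.12, hi=5.12):
            bits = len(gene)
            return lo + int(gene, 2) / (2**bits - 1) * (hi - lo)

        g = encode_binary(1.2345)        # binary coded gene, e.g. '1010...'
        print(g, decode_binary(g))       # round-trip error bounded by step size
        # a real coded gene simply stores the float itself and is mutated by,
        # e.g., adding Gaussian noise instead of flipping bits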

  10. A Novel Container ISO Code Localization Using an Object Clustering Method with Opencv and Visual Studio Application

    Directory of Open Access Journals (Sweden)

    Ronesh Sharma

    2013-06-01

    Full Text Available An automatic container code recognition system is of great importance to logistic supply chain management. Techniques have been proposed and implemented for ISO container code region identification and recognition; however, those systems have limitations with respect to illumination and to marks present on the container due to handling in harsh environmental conditions. Moreover, previous research has not addressed differentiating between different code formats and code character colors. In this paper, firstly an object clustering method is proposed to localize each line of the container code region. Secondly, the localization algorithm is implemented with OpenCV and Visual Studio to perform localization and then recognition. Thus, for real-time application, the implemented system has the added advantage of being easily integrated with other web applications to increase the efficiency of supply chain management. The experimental results and the application demonstrate the effectiveness of the proposed system for practical use.

  11. Difference and dynamic binarization of binary arithmetic coding

    Institute of Scientific and Technical Information of China (English)

    吴江铭

    2013-01-01

    The draft HEVC standard published by JCT-VC retains the CABAC entropy coder of H.264 and improves its binarization process. This paper gives an overview of the high-efficiency compression method CABAC, then optimizes the binarization process of binary arithmetic coding by means of dynamic Huffman coding, applying differencing to the data before binarization. Finally, experiments on compressing Java files are presented in comparison with PAQ, validating the efficiency of the new method of difference and dynamic binarization of binary arithmetic coding; the compression ratio improves considerably and exceeds both PAQ and CABAC.

  12. An extensive Markov system for ECG exact coding.

    Science.gov (United States)

    Tai, S C

    1995-02-01

    In this paper, an extensive Markov process, which considers both the coding redundancy and the intersample redundancy, is presented to measure the entropy value of an ECG signal more accurately. It utilizes the intersample correlations by predicting the incoming n samples based on the previous m samples which constitute an extensive Markov process state. Theories of the extensive Markov process and conventional n repeated applications of m-th order Markov process are studied first in this paper. After that, they are realized for ECG exact coding. Results show that a better performance can be achieved by our system. The average code length for the extensive Markov system on the second difference signals was 2.512 b/sample, while the average Huffman code length for the second difference signals was 3.326 b/sample.

  13. On Real-Time and Causal Secure Source Coding

    CERN Document Server

    Kaspi, Yonatan

    2012-01-01

    We investigate two source coding problems with secrecy constraints. In the first problem we consider real-time fully secure transmission of a memoryless source. We show that although classical variable-rate coding is not an option since the lengths of the codewords leak information on the source, the key rate can be as low as the average Huffman codeword length of the source. In the second problem we consider causal source coding with a fidelity criterion and side information at the decoder and the eavesdropper. We show that when the eavesdropper has degraded side information, it is optimal to first use a causal rate distortion code and then encrypt its output with a key.

  14. The FLUKA code for application of Monte Carlo methods to promote high precision ion beam therapy

    CERN Document Server

    Parodi, K; Cerutti, F; Ferrari, A; Mairani, A; Paganetti, H; Sommerer, F

    2010-01-01

    Monte Carlo (MC) methods are increasingly being utilized to support several aspects of commissioning and clinical operation of ion beam therapy facilities. In this contribution two emerging areas of MC applications are outlined. The value of MC modeling to promote accurate treatment planning is addressed via examples of application of the FLUKA code to proton and carbon ion therapy at the Heidelberg Ion Beam Therapy Center in Heidelberg, Germany, and at the Proton Therapy Center of Massachusetts General Hospital (MGH) Boston, USA. These include generation of basic data for input into the treatment planning system (TPS) and validation of the TPS analytical pencil-beam dose computations. Moreover, we review the implementation of PET/CT (Positron-Emission-Tomography / Computed-Tomography) imaging for in-vivo verification of proton therapy at MGH. Here, MC is used to calculate irradiation-induced positron-emitter production in tissue for comparison with the β+-activity measurement in order to infer indirect infor...

  15. Solution of the neutronics code dynamic benchmark by finite element method

    Science.gov (United States)

    Avvakumov, A. V.; Vabishchevich, P. N.; Vasilev, A. O.; Strizhov, V. F.

    2016-10-01

    The objective is to analyze the dynamic benchmark developed by Atomic Energy Research for the verification of best-estimate neutronics codes. The benchmark scenario includes asymmetrical ejection of a control rod in a water-type hexagonal reactor at hot zero power. A simple Doppler feedback mechanism assuming adiabatic fuel temperature heating is proposed. The finite element method on triangular calculation grids is used to solve the three-dimensional neutron kinetics problem. The software has been developed using the engineering and scientific calculation library FEniCS. The matrix spectral problem is solved using the scalable and flexible toolkit SLEPc. The solution accuracy of the dynamic benchmark is analyzed by refining the calculation grid and varying the degree of the finite elements.

  16. Calibration Method for IATS and Application in Multi-Target Monitoring Using Coded Targets

    Science.gov (United States)

    Zhou, Yueyin; Wagner, Andreas; Wunderlich, Thomas; Wasmeier, Peter

    2017-06-01

    The technique of Image Assisted Total Stations (IATS) has been studied for over ten years and comprises two major parts: one is the calibration procedure, which establishes the relationship between the camera system and the theodolite system; the other is automatic target detection in the image by various methods from photogrammetry or computer vision. Several calibration methods have been developed, mostly using prototypes with an add-on camera rigidly mounted on the total station. However, these prototypes are not commercially available. This paper proposes a calibration method based on the Leica MS50, which has two built-in cameras, each with a resolution of 2560 × 1920 px: an overview camera and a telescope (on-axis) camera. Our work in this paper is based on the on-axis camera, which uses the 30-times magnification of the telescope. The calibration involves 7 parameters to be estimated. We use coded targets, which are common tools for orientation in photogrammetry, to detect different targets in IATS images instead of prisms and traditional ATR functions. We test and verify the efficiency and stability of this monitoring method with multiple targets.

  17. Projectile Two-dimensional Coordinate Measurement Method Based on Optical Fiber Coding Fire and its Coordinate Distribution Probability

    Science.gov (United States)

    Li, Hanshan; Lei, Zhiyong

    2013-01-01

    To improve projectile coordinate measurement precision in fire measurement systems, this paper introduces the optical fiber coding fire measurement method and its principle, sets up the corresponding measurement model, and analyzes the coordinate errors using the differential method. To study the projectile coordinate position distribution, the distribution law was analyzed with statistical hypothesis testing, and the firing dispersion and the probability of a projectile hitting the object center were studied. The results show that, at the given significance level, an exponential distribution is a reasonable model for the projectile position distribution. Experiments and calculations show that the optical fiber coding fire measurement method is scientific and feasible and can provide accurate projectile coordinate positions.

  18. Embedded 3D shape measurement system based on a novel spatio-temporal coding method

    Science.gov (United States)

    Xu, Bin; Tian, Jindong; Tian, Yong; Li, Dong

    2016-11-01

    Structured light measurement has been widely used since the 1970s in industrial component detection, reverse engineering, 3D molding, robot navigation, medical and many other fields. In order to satisfy the demand for high-speed, high-precision and high-resolution 3-D measurement on embedded systems, new patterns combining binary and Gray coding principles in space are designed and projected onto the object surface in order. Each pixel corresponds to a designed sequence of gray values in the time domain, which is treated as a feature vector. The unique gray vector is then dimensionally reduced to a scalar which can be used as characteristic information for binocular matching. In this method, the number of projected structured light patterns is reduced, and the time-consuming phase unwrapping of traditional phase shift methods is avoided. The algorithm was implemented on a DM3730 embedded system for 3-D measurement, which consists of an ARM and a DSP core and has a strong digital signal processing capability. Experimental results demonstrated the feasibility of the proposed method.
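
    The binary/Gray part of the coding idea can be illustrated with the classic bit-plane construction; the sketch below generates the Gray-code stripe patterns a projector would display (resolution and bit count are assumptions, and the paper's spatio-temporal pattern design is richer than this).

        import numpy as np

        def gray_code_patterns(width=1024, bits=10):
            # One stripe image per bit plane; the column index is
            # Gray-encoded so adjacent columns differ in exactly one pattern.
            cols = np.arange(width)
            gray = cols ^ (cols >> 1)              # binary -> Gray code
            return np.stack([((gray >> b) & 1).astype(np.uint8) * 255
                             for b in range(bits - 1, -1, -1)])

        def gray_to_binary(g):
            # Invert the Gray code when decoding the per-pixel bit sequence.
            b = g
            while g:
                g >>= 1
                b ^= g
            return b

        patterns = gray_code_patterns()            # shape (10, 1024)
        print(gray_to_binary(int("1101", 2)))      # 9: decoded column index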

  19. An innovative lossless compression method for discrete-color images.

    Science.gov (United States)

    Alzahir, Saif; Borici, Arber

    2015-01-01

    In this paper, we present an innovative method for lossless compression of discrete-color images, such as map images, graphics, GIS, as well as binary images. This method comprises two main components. The first is a fixed-size codebook encompassing 8×8 bit blocks of two-tone data along with their corresponding Huffman codes and their relative probabilities of occurrence. The probabilities were obtained from a very large set of discrete color images and are also used for arithmetic coding. The second component is the row-column reduction coding, which encodes those blocks that are not in the codebook. The proposed method has been successfully applied to two major image categories: 1) images with a predetermined number of discrete colors, such as digital maps, graphs, and GIS images, and 2) binary images. The results show that our method compresses images from both categories (discrete color and binary images) by 90% in most cases, and outperforms JBIG-2 by 5%-20% for binary images and by 2%-6.3% for discrete color images on average.

  20. Atmospheric Cluster Dynamics Code: a flexible method for solution of the birth-death equations

    Directory of Open Access Journals (Sweden)

    M. J. McGrath

    2012-03-01

    Full Text Available The Atmospheric Cluster Dynamics Code (ACDC) is presented and explored. This program was created to study the first steps of atmospheric new particle formation by examining the formation of molecular clusters from atmospherically relevant molecules. The program models the cluster kinetics by explicit solution of the birth–death equations, using an efficient computer script for their generation and the MATLAB ode15s routine for their solution. Through the use of evaporation rate coefficients derived from formation free energies calculated by quantum chemical methods for clusters containing dimethylamine or ammonia and sulfuric acid, we have explored the effect of changing various parameters at atmospherically relevant monomer concentrations. We have included in our model clusters with 0–4 base molecules and 0–4 sulfuric acid molecules for which we have commensurable quantum chemical data. The tests demonstrate that large effects can be seen for even small changes in different parameters, due to the non-linearity of the system. In particular, changing the temperature had a significant impact on the steady-state concentrations of all clusters, while the boundary effects (allowing clusters to grow to sizes beyond the largest cluster that the code keeps track of, or forbidding such processes), coagulation sink terms, non-monomer collisions, sticking probabilities and monomer concentrations did not show as large effects under the conditions studied. Removal of coagulation sink terms prevented the system from reaching the steady state when all the initial cluster concentrations were set to the default value of 1 m−3, which is probably an effect caused by studying only relatively small cluster sizes.
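
    The explicit birth-death approach can be imitated on a toy system with a stiff ODE solver (scipy's BDF method playing the role of MATLAB's ode15s); the two-cluster system and all rate coefficients below are arbitrary placeholders, not the quantum-chemistry-derived values used by ACDC.

        import numpy as np
        from scipy.integrate import solve_ivp

        beta, gamma, source = 1e-16, 1e-3, 1e6   # placeholder coefficients

        def birth_death(t, c):
            # Toy system: monomers A collide into dimers A2, which evaporate.
            mono, dimer = c
            coll = beta * mono * mono            # A + A  -> A2
            evap = gamma * dimer                 # A2     -> A + A
            return [source - 2*coll + 2*evap,    # d[A]/dt
                    coll - evap]                 # d[A2]/dt

        sol = solve_ivp(birth_death, (0.0, 1e6), [1.0, 1.0], method="BDF")
        print(sol.y[:, -1])                      # near-steady-state values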

  1. Application of computational fluid dynamics methods to improve thermal hydraulic code analysis

    Science.gov (United States)

    Sentell, Dennis Shannon, Jr.

    A computational fluid dynamics code is used to model the primary natural circulation loop of a proposed small modular reactor for comparison to experimental data and best-estimate thermal-hydraulic code results. Recent advances in computational fluid dynamics code modeling capabilities make them attractive alternatives to the current conservative approach of coupled best-estimate thermal hydraulic codes and uncertainty evaluations. The results from a computational fluid dynamics analysis are benchmarked against the experimental test results of a 1:3 length, 1:254 volume, full pressure and full temperature scale small modular reactor during steady-state power operations and during a depressurization transient. A comparative evaluation of the experimental data, the thermal hydraulic code results and the computational fluid dynamics code results provides an opportunity to validate the best-estimate thermal hydraulic code's treatment of a natural circulation loop and provide insights into expanded use of the computational fluid dynamics code in future designs and operations. Additionally, a sensitivity analysis is conducted to determine those physical phenomena most impactful on operations of the proposed reactor's natural circulation loop. The combination of the comparative evaluation and sensitivity analysis provides the resources for increased confidence in model developments for natural circulation loops and provides for reliability improvements of the thermal hydraulic code.

  2. A NEW DESIGN METHOD OF CDMA SPREADING CODES BASED ON MULTI-RATE UNITARY FILTER BANK

    Institute of Scientific and Technical Information of China (English)

    Bi Jianxin; Wang Yingmin; Yi Kechu

    2001-01-01

    It is well known that multi-valued CDMA spreading codes can be designed by means of a pair of mirror multi-rate filter banks based on some optimizing criterion. This paper indicates that there exists a theoretical bound on the performance of their circulating correlation property, which is given by an explicit expression. Based on this analysis, a criterion of maximizing entropy is proposed to design such codes. Computer simulation results suggest that the resulting codes outperform conventional binary balanced Gold codes for an asynchronous CDMA system.

  3. Two high-density recording methods with run-length limited turbo code for holographic data storage system

    Science.gov (United States)

    Nakamura, Yusuke; Hoshizawa, Taku

    2016-09-01

    Two methods for increasing the data capacity of a holographic data storage system (HDSS) were developed. The first method is called “run-length-limited (RLL) high-density recording”. An RLL modulation has the same effect as enlarging the pixel pitch; namely, it optically reduces the hologram size. Accordingly, the method doubles the raw-data recording density. The second method is called “RLL turbo signal processing”. The RLL turbo code consists of RLL(1,∞) trellis modulation and an optimized convolutional code. The remarkable point of the developed turbo code is that it employs the RLL modulator and demodulator as parts of the error-correction process. The turbo code improves the capability of error correction more than a conventional LDPC code, even though interpixel interference is generated. These two methods will increase the data density 1.78-fold. Moreover, by simulation and experiment, a data density of 2.4 Tbit/in.² is confirmed.

  4. Novel methods in the Particle-In-Cell accelerator Code-Framework Warp

    Energy Technology Data Exchange (ETDEWEB)

    Vay, J-L [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Grote, D. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Cohen, R. H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Friedman, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2012-12-26

    The Particle-In-Cell (PIC) Code-Framework Warp is being developed by the Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) to guide the development of accelerators that can deliver beams suitable for high-energy density experiments and implosion of inertial fusion capsules. It is also applied in various areas outside the Heavy Ion Fusion program to the study and design of existing and next-generation high-energy accelerators, including the study of electron cloud effects and laser wakefield acceleration for example. This study presents an overview of Warp's capabilities, summarizing recent original numerical methods that were developed by the HIFS-VNL (including PIC with adaptive mesh refinement, a large-timestep 'drift-Lorentz' mover for arbitrarily magnetized species, a relativistic Lorentz invariant leapfrog particle pusher, simulations in Lorentz-boosted frames, an electromagnetic solver with tunable numerical dispersion and efficient stride-based digital filtering), with special emphasis on the description of the mesh refinement capability. In addition, selected examples of the applications of the methods to the abovementioned fields are given.

  5. Statistical method of embedded code coverage

    Institute of Scientific and Technical Information of China (English)

    周雷

    2014-01-01

    This paper explains how to use GCOV and LCOV, the code coverage tools accompanying GCC, to gather coverage statistics on embedded C language code. This method provides measurable indicators for the completion status of embedded code testing, and an effective data basis for improving the quality of embedded code.

  6. The Use of Coding Methods to Estimate the Social Behavior Directed toward Peers and Adults of Preschoolers with ASD in TEACCH, LEAP, and Eclectic "BAU" Classrooms

    Science.gov (United States)

    Sam, Ann; Reszka, Stephanie; Odom, Samuel; Hume, Kara; Boyd, Brian

    2015-01-01

    Momentary time sampling, partial-interval recording, and event coding are observational coding methods commonly used to examine the social and challenging behaviors of children at risk for or with developmental delays or disabilities. Yet there is limited research comparing the accuracy of and relationship between these three coding methods. By…

  7. REVA Advanced Fuel Design and Codes and Methods - Increasing Reliability, Operating Margin and Efficiency in Operation

    Energy Technology Data Exchange (ETDEWEB)

    Frichet, A.; Mollard, P.; Gentet, G.; Lippert, H. J.; Curva-Tivig, F.; Cole, S.; Garner, N.

    2014-07-01

    For three decades, AREVA has been incrementally implementing upgrades in BWR and PWR fuel designs and codes and methods, leading to ever greater fuel efficiency and easier licensing. For PWRs, AREVA is implementing upgraded versions of its HTP™ and AFA 3G technologies called HTP™-I and AFA3G-I. These fuel assemblies feature improved robustness and dimensional stability through the ultimate optimization of their hold-down system, the use of Q12 (the AREVA advanced quaternary alloy for guide tubes), the increase in their wall thickness, and the stiffening of the spacer-to-guide-tube connection. An even bigger step forward has been achieved as AREVA has successfully developed and introduced to the market the GAIA product, which maintains the resistance to grid-to-rod fretting (GTRF) of the HTP™ product while providing additional thermal-hydraulic margin and high resistance to fuel assembly bow. (Author)

  8. Good Codes From Generalised Algebraic Geometry Codes

    CERN Document Server

    Jibril, Mubarak; Ahmed, Mohammed Zaki; Tjhai, Cen

    2010-01-01

    Algebraic geometry codes or Goppa codes are defined with places of degree one. In constructing generalised algebraic geometry codes places of higher degree are used. In this paper we present 41 new codes over GF(16) which improve on the best known codes of the same length and rate. The construction method uses places of small degree with a technique originally published over 10 years ago for the construction of generalised algebraic geometry codes.

  9. A Kind of Quasi-Orthogonal Space-Time Block Codes and its Decoding Methods

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    It is well known that it is impossible for complex orthogonal space-time block codes with full diversity and full rate to have more than two transmit antennas, while non-orthogonal designs lose the simplicity of maximum likelihood decoding at the receiver. In this paper, we propose a new quasi-orthogonal space-time block code. The code is quasi-orthogonal and can reduce the decoding complexity significantly by employing zero-forcing and minimum mean squared error criteria. This paper also presents simulation results of two examples with three and four transmit antennas, respectively.

  10. Two Phase Flow Models and Numerical Methods of the Commercial CFD Codes

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Sung Won; Jeong, Jae Jun; Chang, Seok Kyu; Cho, Hyung Kyu

    2007-11-15

    The use of commercial CFD codes extends to various fields of engineering. Thermal hydraulic analysis is one of the promising engineering fields of application of CFD codes. Up to now, the main application of commercial CFD codes has focused on single-phase, single-composition fluid dynamics. Nuclear thermal hydraulics, however, deals with abrupt pressure changes, high heat fluxes, and phase change heat transfer. In order to overcome the CFD limitations and to extend the capability of nuclear thermal hydraulics analysis, research efforts have been made to bring together CFD and nuclear thermal hydraulics. To achieve the final goal, the useful models and correlations currently used in commercial CFD codes should be reviewed and investigated. This report summarizes the constitutive relationships that are used in FLUENT, STAR-CD, and CFX. Brief information on the solution technologies is also included.

  11. Pathway Detection from Protein Interaction Networks and Gene Expression Data Using Color-Coding Methods and A* Search Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yu Yeh

    2012-01-01

    Full Text Available With the wide availability of protein interaction networks and microarray data, identifying linear paths of biological significance in search of a potential pathway is a challenging issue. We propose a color-coding method based on the characteristics of biological network topology and apply heuristic search to speed up the color-coding method. In the experiments, we tested our methods by applying them to two datasets: yeast and human prostate cancer networks with gene expression data. Comparisons of our method with other existing methods on known yeast MAPK pathways in terms of precision and recall show that we can find the maximum number of proteins and perform comparably well. On the other hand, our method is more efficient than previous ones and detects paths of length 10 within 40 seconds on an Intel 1.73 GHz CPU with 1 GB of main memory running under the Windows operating system.
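
    The color-coding idea itself is easy to demonstrate: randomly k-color the vertices, then a dynamic program over color subsets finds a "colorful" path on k vertices with high probability after enough trials. The sketch below is the textbook randomized algorithm on a toy graph, not the authors' weighted, heuristically accelerated variant.

        import random

        def colorful_path(adj, k, trials=50):
            # Return a simple path on k vertices found by color-coding, or None.
            for _ in range(trials):
                color = {v: random.randrange(k) for v in adj}
                # dp[(v, S)] = predecessor entry of a path ending at v whose
                # vertices use exactly the colors in frozenset S
                dp = {(v, frozenset([color[v]])): None for v in adj}
                for size in range(1, k):
                    layer = [key for key in dp if len(key[1]) == size]
                    for (v, S) in layer:
                        for w in adj[v]:
                            if color[w] not in S:
                                dp.setdefault((w, S | {color[w]}), (v, S))
                for (v, S) in dp:
                    if len(S) == k:              # reconstruct one colorful path
                        path, key = [v], dp[(v, S)]
                        while key:
                            path.append(key[0])
                            key = dp[key]
                        return path[::-1]
            return None

        g = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}   # toy 4-vertex path graph
        print(colorful_path(g, 3))                   # e.g. [1, 2, 3]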

  12. Dakota Uncertainty Quantification Methods Applied to the CFD code Nek5000

    Energy Technology Data Exchange (ETDEWEB)

    Delchini, Marc-Olivier [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Popov, Emilian L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Pointer, William David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division

    2016-04-29

    This report presents the state of advancement of a Nuclear Energy Advanced Modeling and Simulation (NEAMS) project to characterize the uncertainty of the computational fluid dynamics (CFD) code Nek5000 using the Dakota package for flows encountered in the nuclear engineering industry. Nek5000 is a high-order spectral element CFD code developed at Argonne National Laboratory for high-resolution spectral-filtered large eddy simulations (LESs) and unsteady Reynolds-averaged Navier-Stokes (URANS) simulations.

  13. Dakota Uncertainty Quantification Methods Applied to the CFD code Nek5000

    Energy Technology Data Exchange (ETDEWEB)

    Delchini, Marc-Olivier [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Popov, Emilian L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Pointer, William David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division

    2016-04-29

    This report presents the state of advancement of a Nuclear Energy Advanced Modeling and Simulation (NEAMS) project to characterize the uncertainty of the computational fluid dynamics (CFD) code Nek5000 using the Dakota package for flows encountered in the nuclear engineering industry. Nek5000 is a high order spectral element CFD code developed at Argonne National Laboratory for high resolution spectral-filtered large eddy simulations (LESs) and unsteady Reynolds averaged Navier-Stokes (URANS) simulations.

  14. Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)

    Science.gov (United States)

    2016-05-01

    the proposition of a weight for averaging CDMA codes. This weighting function is referred to in this discussion as the probability of the code matrix... Given a likelihood function of a multivariate Gaussian stochastic process (12), one can assume the values L and U and try to estimate the parameters... such as the average of the exponential functions were formulated. Averaging over a weight that depends on the TSC behaves as a filtering process where

  15. Coherent Synchrotron Radiation A Simulation Code Based on the Non-Linear Extension of the Operator Splitting Method

    CERN Document Server

    Dattoli, Giuseppe

    2005-01-01

    The coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high intensity electron accelerators. A code devoted to the analysis of this type of problem should be fast and reliable: conditions that are rarely achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient in treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treat CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of non-linear contributions due to wake field effects. The proposed solution method exploits an algebraic technique, using exponential operators implemented numerically in C++. We show that the integration procedure is capable of reproducing the onset of an instability and effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, parametric studies a...

  16. Prediction Method for Image Coding Quality Based on Differential Information Entropy

    Directory of Open Access Journals (Sweden)

    Xin Tian

    2014-02-01

    Full Text Available For the requirements of quality-based image coding, an approach to predict image coding quality based on differential information entropy is proposed. First of all, some typical prediction approaches are introduced, and then the differential information entropy is reviewed. Taking JPEG2000 as an example, the relationship between differential information entropy and the objective assessment indicator PSNR at a fixed compression ratio is established via data fitting, with the constraint of minimizing the average error. Next, the relationship among differential information entropy, compression ratio and PSNR at various compression ratios is constructed, and this relationship is used as an indicator to predict image coding quality. Finally, the proposed approach is compared with some traditional approaches. The experiments show that differential information entropy has a better linear relationship with image coding quality than the image activity does. Therefore, the conclusion can be reached that the proposed approach is capable of predicting image coding quality at low compression ratios with small errors, and can be widely applied in a variety of real-time space image coding systems owing to its simplicity.
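
    A minimal sketch of the entropy feature itself, assuming "differential information entropy" here means the Shannon entropy of the first-difference image (the paper's exact definition may differ); the fitted entropy-to-PSNR relationship would then be applied on top of this value.

        import numpy as np

        def differential_entropy_bits(img):
            # Entropy (bits/pixel) of the horizontal first-difference image.
            diff = np.diff(img.astype(np.int16), axis=1).ravel()
            _, counts = np.unique(diff, return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())

        img = (np.random.rand(64, 64) * 255).astype(np.uint8)
        print(differential_entropy_bits(img))  # high for noise, low if smooth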

  17. A new multi-dimensional general relativistic neutrino hydrodynamics code for core-collapse supernovae. I. Method and code tests in spherical symmetry

    CERN Document Server

    Mueller, B; Dimmelmeier, H

    2010-01-01

    We present a new general relativistic (GR) code for hydrodynamic supernova simulations with neutrino transport in spherical and azimuthal symmetry (1D/2D). The code is a combination of the CoCoNuT hydro module, which is a Riemann-solver based, high-resolution shock-capturing method, and the three-flavor, energy-dependent neutrino transport scheme VERTEX. VERTEX integrates the neutrino moment equations with a variable Eddington factor closure computed from a model Boltzmann equation and uses the ray-by-ray plus approximation in 2D, assuming the neutrino distribution to be axially symmetric around the radial direction, and thus the neutrino flux to be radial. Our spacetime treatment employs the ADM 3+1 formalism with the conformal flatness condition for the spatial three-metric. This approach is exact in 1D and has been shown to yield very accurate results also for rotational stellar collapse. We introduce new formulations of the energy equation to improve total energy conservation in relativistic and Newtonian...

  18. Life With and Without Coding: Two Methods for Early-Stage Data Analysis in Qualitative Research Aiming at Causal Explanations

    Directory of Open Access Journals (Sweden)

    Jochen Gläser

    2013-03-01

    Full Text Available Qualitative research aimed at "mechanismic" explanations poses specific challenges to qualitative data analysis because it must integrate existing theory with patterns identified in the data. We explore the utilization of two methods—coding and qualitative content analysis—for the first steps in the data analysis process, namely "cleaning" and organizing qualitative data. Both methods produce an information base that is structured by categories and can be used in the subsequent search for patterns in the data and integration of these patterns into a systematic, theoretically embedded explanation. Used as a stand-alone method outside the grounded theory approach, coding leads to an indexed text, i.e. both the original text and the index (the system of codes describing the content of text segments) are subjected to further analysis. Qualitative content analysis extracts the relevant information, i.e. separates it from the original text, and processes only this information. We suggest that qualitative content analysis has advantages compared to coding whenever the research question is embedded in prior theory and can be answered without processing knowledge about the form of statements and their position in the text, which usually is the case in the search for "mechanismic" explanations. Coding outperforms qualitative content analysis in research that needs this information in later stages of the analysis, e.g. the exploration of meaning or the study of the construction of narratives. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs130254

  19. PCR-free quantitative detection of genetically modified organism from raw materials. An electrochemiluminescence-based bio bar code method.

    Science.gov (United States)

    Zhu, Debin; Tang, Yabing; Xing, Da; Chen, Wei R

    2008-05-15

    A bio bar code assay based on oligonucleotide-modified gold nanoparticles (Au-NPs) provides a PCR-free method for quantitative detection of nucleic acid targets. However, the current bio bar code assay requires lengthy experimental procedures including the preparation and release of bar code DNA probes from the target-nanoparticle complex and immobilization and hybridization of the probes for quantification. Herein, we report a novel PCR-free electrochemiluminescence (ECL)-based bio bar code assay for the quantitative detection of genetically modified organism (GMO) from raw materials. It consists of tris-(2,2'-bipyridyl) ruthenium (TBR)-labeled bar code DNA, nucleic acid hybridization using Au-NPs and biotin-labeled probes, and selective capture of the hybridization complex by streptavidin-coated paramagnetic beads. The detection of target DNA is realized by direct measurement of ECL emission of TBR. It can quantitatively detect target nucleic acids with high speed and sensitivity. This method can be used to quantitatively detect GMO fragments from real GMO products.

  20. Huffman decoding module based on hardware and software co-design

    Institute of Scientific and Technical Information of China (English)

    刘华; 刘卫东; 邢文峰

    2011-01-01

    With the rapid development of multimedia technology, digital audio technology has also developed rapidly. MP3 is a lossy audio compression format with a high compression rate; at present it is widely used in many fields and has good market prospects. This paper implements the Huffman decoding module of MP3 based on a hardware and software co-design approach. The solution proposed in this paper can not only efficiently realize the Huffman decoding module of MP3 but can also be applied to the Huffman decoding modules of WMA, AAC and other audio formats; the approach ensures efficiency while taking the module's versatility into account.
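
    Independent of the hardware/software partitioning discussed above, the core of any Huffman decoding module is a walk through the code tree (or an equivalent lookup table) driven by the bitstream. A minimal software sketch with a made-up toy code table, not the actual MP3 Huffman tables:

        def build_decode_tree(code_table):
            # code_table maps symbol -> bit string; build a nested-dict tree.
            root = {}
            for sym, bits in code_table.items():
                node = root
                for b in bits[:-1]:
                    node = node.setdefault(b, {})
                node[bits[-1]] = sym
            return root

        def huffman_decode(bitstream, root):
            out, node = [], root
            for b in bitstream:
                node = node[b]
                if not isinstance(node, dict):   # leaf reached: emit symbol
                    out.append(node)
                    node = root
            return out

        table = {"a": "0", "b": "10", "c": "11"}          # toy prefix code
        print(huffman_decode("0101100", build_decode_tree(table)))
        # -> ['a', 'b', 'c', 'a', 'a']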

  1. Micromechanics Analysis Code With Generalized Method of Cells (MAC/GMC): User Guide. Version 3

    Science.gov (United States)

    Arnold, S. M.; Bednarcyk, B. A.; Wilt, T. E.; Trowbridge, D.

    1999-01-01

    The ability to accurately predict the thermomechanical deformation response of advanced composite materials continues to play an important role in the development of these strategic materials. Analytical models that predict the effective behavior of composites are used not only by engineers performing structural analysis of large-scale composite components but also by material scientists in developing new material systems. For an analytical model to fulfill these two distinct functions it must be based on a micromechanics approach which utilizes physically based deformation and life constitutive models and allows one to generate the average (macro) response of a composite material given the properties of the individual constituents and their geometric arrangement. Here the user guide is presented for the recently developed, computationally efficient and comprehensive micromechanics analysis code MAC, whose predictive capability rests entirely upon the fully analytical generalized method of cells (GMC) micromechanics model. MAC/GMC is a versatile form of research software that "drives" the doubly or triply periodic micromechanics constitutive models based upon GMC. MAC/GMC enhances the basic capabilities of GMC by providing a modular framework wherein 1) various thermal, mechanical (stress or strain control) and thermomechanical load histories can be imposed, 2) different integration algorithms may be selected, 3) a variety of material constitutive models (both deformation and life) may be utilized and/or implemented, 4) a variety of fiber architectures (unidirectional, laminate and woven) may be easily accessed through their corresponding representative volume elements contained within the supplied library of RVEs or input directly by the user, and 5) graphical post-processing of the macro and/or micro field quantities is made available.

  2. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    Science.gov (United States)

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

    In this paper, we first describe a 9-symbol non-uniform signaling scheme based on a Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM, combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB.
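
    The non-uniform mapping can be mimicked with a dyadic prefix code: parsing a fair bit stream with a 9-leaf prefix code makes symbol i appear with probability 2^(-len(code_i)). The code table below is one illustrative dyadic choice (seven symbols at probability 1/8, two at 1/16), not necessarily the mapping used in the paper.

        import random

        codebook = {"000": 0, "001": 1, "010": 2, "011": 3,
                    "100": 4, "101": 5, "110": 6,
                    "1110": 7, "1111": 8}     # complete prefix code, 9 symbols

        def bits_to_symbols(bits):
            # Parse fair bits with the prefix code -> non-uniform symbols.
            out, buf = [], ""
            for b in bits:
                buf += b
                if buf in codebook:
                    out.append(codebook[buf])
                    buf = ""
            return out

        bits = "".join(random.choice("01") for _ in range(100000))
        syms = bits_to_symbols(bits)
        freq = [syms.count(s) / len(syms) for s in range(9)]
        print(freq)   # seven symbols near 1/8, the last two near 1/16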

  3. Design check against the construction code (DNV 2012) of an offshore pipeline using numerical methods

    Science.gov (United States)

    Stan, L. C.; Călimănescu, I.; Velcea, D. D.

    2016-08-01

    The production of oil and gas from offshore oil fields is nowadays more and more important. As a result of the increasing demand for oil, and with shallow water reserves no longer sufficient, the industry is pushed to develop and exploit more difficult fields in deeper waters. In this paper, the new DNV 2012 design code is applied to check an offshore pipeline for compliance with the requirements of this new construction code, using Bentley AutoPIPE V8i. The August 2012 revision of the DNV offshore standard DNV-OS-F101, Submarine Pipeline Systems, is supported by AutoPIPE version 9.6. This paper provides a quick walk-through for entering input data, analyzing, and generating code compliance reports for a model with the piping code selected as DNV Offshore 2012. As shown in the present paper, the simulations comprise a geometrically complex pipeline subjected to various and variable loading conditions. At the end of the design process the engineer has to answer a simple question: is the pipeline safe or not? The example pipeline has some sections that do not comply with the DNV 2012 offshore pipeline code in terms of size and strength. Obviously, those sections have to be redesigned so as to meet those requirements.

  4. U.S. Sodium Fast Reactor Codes and Methods: Current Capabilities and Path Forward

    Energy Technology Data Exchange (ETDEWEB)

    Brunett, A. J.; Fanning, T. H.

    2017-06-26

    The United States has extensive experience with the design, construction, and operation of sodium-cooled fast reactors (SFRs) over the last six decades. Despite the closure of various facilities, the U.S. continues to dedicate research and development (R&D) efforts to the design of innovative experimental, prototype, and commercial facilities. Accordingly, in support of the rich operating history and ongoing design efforts, the U.S. has been developing and maintaining a series of tools with capabilities that envelop all facets of SFR design and safety analyses. This paper provides an overview of the current U.S. SFR analysis toolset, including codes such as SAS4A/SASSYS-1, MC2-3, SE2-ANL, PERSENT, NUBOW-3D, and LIFE-METAL, as well as the higher-fidelity tools (e.g., PROTEUS) being integrated into the toolset. Current capabilities of the codes are described and key ongoing development efforts are highlighted for some codes.

  5. An Efficient Segmental Bus-Invert Coding Method for Instruction Memory Data Bus Switching Reduction

    Directory of Open Access Journals (Sweden)

    Gu Ji

    2009-01-01

    Full Text Available This paper presents a bus coding methodology for instruction memory data bus switching reduction. Compared to the existing state-of-the-art multiway partial bus-invert (MPBI) coding, which relies on data bit correlation, our approach is very effective in reducing the switching activity of instruction data buses, since little bit correlation can be observed in instruction data. Our experiments demonstrate that the proposed encoding can reduce up to 42% of the switching activity, with an average reduction of 30%, while MPBI achieves just a 17.6% reduction.
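
    For context, a minimal sketch of the classic bus-invert step that MPBI and the proposed segmental scheme both build on (bus width and data words are illustrative assumptions): if transmitting the new word would toggle more than half of the lines, the complemented word is sent instead and an extra invert line is raised.

        def bus_invert(words, width=8):
            mask = (1 << width) - 1
            prev, out = 0, []
            for w in words:
                # Count how many bus lines would toggle if w were sent as-is.
                toggles = bin((w ^ prev) & mask).count("1")
                if toggles > width // 2:
                    w_enc, invert = (~w) & mask, 1   # send complement + invert bit
                else:
                    w_enc, invert = w, 0
                out.append((w_enc, invert))
                prev = w_enc                         # bus now holds the coded word
            return out

        print(bus_invert([0b00000000, 0b11111110, 0b00000001]))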

  6. Development of a computer code for neutronic calculations of a hexagonal lattice of nuclear reactor using the flux expansion nodal method

    Directory of Open Access Journals (Sweden)

    Mohammadnia Meysam

    2013-01-01

    Full Text Available The flux expansion nodal method is a suitable method for considering nodalization effects in node corners. In this paper we used this method to solve the intra-nodal flux analytically. Then, a computer code, named MA.CODE, was developed using the C# programming language. The code is capable of reactor core calculations for hexagonal geometries in two energy groups and three dimensions. MA.CODE imports two-group constants from the WIMS code and calculates the effective multiplication factor, thermal and fast neutron flux in three dimensions, power density, reactivity, and the power peaking factor of each fuel assembly. Some of the code's merits are low calculation time and a user-friendly interface. MA.CODE results showed good agreement with IAEA benchmarks, i.e., AER-FCM-101 and AER-FCM-001.

  7. Improved EZW image coding algorithm in air traffic control system

    Institute of Scientific and Technical Information of China (English)

    胡波; 杨红雨

    2011-01-01

    In order to transmit picture data efficiently in an air traffic control system, an improved image coding algorithm based on the Embedded Zero-tree Wavelet (EZW) is presented. First, the lowest-frequency sub-band is coded losslessly. Then, based on the human visual system, different sub-bands at the same level are merged according to different perceptual weights. Finally, Huffman coding is applied to the EZW symbol stream as a second coding pass. Experimental results show that this method outperforms the original EZW algorithm.

  8. A New Realistic Evaluation Analysis Method: Linked Coding of Context, Mechanism, and Outcome Relationships

    Science.gov (United States)

    Jackson, Suzanne F.; Kolla, Gillian

    2012-01-01

    In attempting to use a realistic evaluation approach to explore the role of Community Parents in early parenting programs in Toronto, a novel technique was developed to analyze the links between contexts (C), mechanisms (M) and outcomes (O) directly from experienced practitioner interviews. Rather than coding the interviews into themes in terms of…

  9. Modern Teaching Methods in Physics with the Aid of Original Computer Codes and Graphical Representations

    Science.gov (United States)

    Ivanov, Anisoara; Neacsu, Andrei

    2011-01-01

    This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and furthermore to…

  10. Context-based lossless image compression with optimal codes for discretized Laplacian distributions

    Science.gov (United States)

    Giurcaneanu, Ciprian Doru; Tabus, Ioan; Stanciu, Cosmin

    2003-05-01

    Lossless image compression has become an important research topic, especially in relation to the JPEG-LS standard. Recently, the techniques known for designing optimal codes for sources with infinite alphabets have been applied to quantized Laplacian sources, which have probability mass functions with two geometrically decaying tails. Due to the simple parametric model of the source distribution, the Huffman iterations can be carried out analytically, using the concept of a reduced source, and the final codes are obtained as a sequence of very simple arithmetic operations, avoiding the need to store coding tables. We propose the use of these (optimal) codes in conjunction with context-based prediction for noiseless compression of images. To further reduce the average code length, we design escape sequences to be employed when the estimation of the distribution parameter is unreliable. Results on standard test files show improvements in compression ratio when comparing with JPEG-LS.
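
    As a hedged sketch of the "codewords by arithmetic instead of stored tables" idea: for a one-sided geometric source the Huffman construction reduces to a Golomb code, whose codeword for n can be produced directly (the parameter m below is an arbitrary example; the paper handles two-sided, discretized-Laplacian sources).

        def golomb_encode(n, m):
            q, r = divmod(n, m)
            unary = "1" * q + "0"            # quotient in unary
            b = m.bit_length() - 1           # truncated-binary remainder size
            k = (1 << (b + 1)) - m           # number of short (b-bit) codewords
            if r < k:
                binary = format(r, f"0{b}b") if b else ""
            else:
                binary = format(r + k, f"0{b + 1}b")
            return unary + binary

        print([golomb_encode(n, 4) for n in range(6)])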

  11. Systematic analysis of coding and noncoding DNA sequences using methods of statistical linguistics

    Science.gov (United States)

    Mantegna, R. N.; Buldyrev, S. V.; Goldberger, A. L.; Havlin, S.; Peng, C. K.; Simons, M.; Stanley, H. E.

    1995-01-01

    We compare the statistical properties of coding and noncoding regions in eukaryotic and viral DNA sequences by adapting two tests developed for the analysis of natural languages and symbolic sequences. The data set comprises all 30 sequences of length above 50 000 base pairs in GenBank Release No. 81.0, as well as the recently published sequences of C. elegans chromosome III (2.2 Mbp) and yeast chromosome XI (661 Kbp). We find that for the three chromosomes we studied the statistical properties of noncoding regions appear to be closer to those observed in natural languages than those of coding regions. In particular, (i) an n-tuple Zipf analysis of noncoding regions reveals a regime close to power-law behavior, while the coding regions show logarithmic behavior over a wide interval, and (ii) an n-gram entropy measurement shows that the noncoding regions have a lower n-gram entropy (and hence a larger "n-gram redundancy") than the coding regions. In contrast to the three chromosomes, we find that for vertebrates such as primates and rodents and for viral DNA, the difference between the statistical properties of coding and noncoding regions is not pronounced, and therefore the results of the analyses of the investigated sequences are less conclusive. After noting the intrinsic limitations of the n-gram redundancy analysis, we also briefly discuss the failure of the zeroth- and first-order Markovian models or simple nucleotide repeats to account fully for these "linguistic" features of DNA. Finally, we emphasize that our results by no means prove the existence of a "language" in noncoding DNA.
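
    A toy version of the n-gram entropy measurement described above (the sequences are invented; the real analyses run on megabase-scale DNA): a more repetitive string scores a lower n-gram entropy, i.e., a higher n-gram redundancy.

        import math
        import random
        from collections import Counter

        def ngram_entropy(seq, n):
            counts = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
            total = sum(counts.values())
            return -sum((c / total) * math.log2(c / total)
                        for c in counts.values())

        repetitive = "ACGT" * 250                  # highly redundant "text"
        random.seed(0)
        uniform = "".join(random.choice("ACGT") for _ in range(1000))
        print(ngram_entropy(repetitive, 3))        # 2 bits: only 4 trigrams occur
        print(ngram_entropy(uniform, 3))           # close to 6 bits (= log2 64)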

  12. Improved intra-block copy and motion search methods for screen content coding

    Science.gov (United States)

    Rapaka, Krishna; Pang, Chao; Sole, Joel; Karczewicz, Marta; Li, Bin; Xu, Jizheng

    2015-09-01

    The screen content coding extension of HEVC (SCC) is being developed by the Joint Collaborative Team on Video Coding (JCT-VC) of ISO/IEC MPEG and ITU-T VCEG. Screen content usually features a mix of camera-captured content and a significant proportion of rendered graphics, text, or animation. These two types of content exhibit distinct characteristics requiring different compression schemes to achieve better coding efficiency. This paper presents efficient block matching schemes for coding screen content that better capture its spatial and temporal characteristics. The proposed schemes are mainly categorized as a) hash-based global-region block matching for intra block copy, b) selective-search-based local-region block matching for inter-frame prediction, and c) hash-based global-region block matching for inter-frame prediction. In the first part, a hash-based full-frame block matching algorithm is designed for intra block copy to handle repeating patterns and large motions when the reference picture consists of already decoded samples of the current picture. In the second part, a selective local-area block matching algorithm is designed for inter motion estimation to handle sharp edges, high spatial frequencies, and non-monotonic error surfaces. In the third part, a hash-based full-frame block matching algorithm is designed for inter motion estimation to handle repeating patterns and large motions across the temporal reference picture. The proposed schemes are compared against HM-13.0+RExt-6.0, the state-of-the-art screen content coding reference. The first part provides luma BD-rate gains of -26.6%, -15.6%, and -11.4% for the AI, RA, and LD TGM configurations; the second part provides luma BD-rate gains of -10.1% and -12.3% for the RA and LD TGM configurations; and the third part provides luma BD-rate gains of -12.2% and -11.5% for the RA and LD TGM configurations.
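
    A hedged sketch of hash-based full-frame block matching of the kind described in part (a) (frame contents, block size, and the hash itself are illustrative assumptions): every block of the reference area is indexed by a cheap hash, so the current block looks up candidate positions in constant time instead of scanning the whole frame.

        import numpy as np

        def build_hash_table(ref, bs=8):
            # Map block-content hash -> list of top-left positions in the frame.
            table = {}
            h, w = ref.shape
            for y in range(h - bs + 1):
                for x in range(w - bs + 1):
                    key = hash(ref[y:y + bs, x:x + bs].tobytes())
                    table.setdefault(key, []).append((y, x))
            return table

        def match_block(block, ref, table, bs=8):
            # Return the first exact match (typical for text/graphics content).
            for y, x in table.get(hash(block.tobytes()), []):
                if np.array_equal(ref[y:y + bs, x:x + bs], block):
                    return (y, x)
            return None

        rng = np.random.default_rng(0)
        frame = rng.integers(0, 4, size=(32, 32), dtype=np.uint8)
        frame[16:24, 16:24] = frame[0:8, 0:8]       # plant a repeating pattern
        tbl = build_hash_table(frame)
        print(match_block(frame[16:24, 16:24], frame, tbl))   # -> (0, 0)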

  14. Research on compression and improvement of vertex chain code

    Science.gov (United States)

    Yu, Guofang; Zhang, Yujie

    2009-10-01

    Combined with Huffman coding theory, the code 2, which has the highest occurrence probability and continuation frequency, is represented by the single binary digit 0; the combinations of 1 and 3, which have the next-highest occurrence probability and continuation frequency, are represented by the two binary digits 10, with the corresponding frequency code attached to these two kinds of code (the length of the frequency code can be assigned beforehand or adapted automatically); and the codes 1 and 3, which have the lowest occurrence probability and continuation frequency, are represented by the binary strings 110 and 111, respectively. Relative encoding efficiency and decoding efficiency are added to the current performance evaluation system for chain codes. The new chain code is compared with a current chain code using a test system programmed in VC++; the results show that the basic performance of the new chain code is significantly improved, and the performance advantage grows with the size of the graphics.
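
    A small sketch of the variable-length mapping described above, under simplifying assumptions: the attached run-length "frequency code" is omitted, and the 1-and-3 combination is parsed greedily as the pair "13", which is an assumption of this illustration rather than the paper's exact rule.

        # Huffman-style table: shorter codewords for more probable symbols.
        TABLE = {"2": "0", "13": "10", "1": "110", "3": "111"}

        def encode(chain):
            out, i = [], 0
            while i < len(chain):
                if chain[i:i + 2] == "13":       # combined high-probability pair
                    sym, i = "13", i + 2
                else:
                    sym, i = chain[i], i + 1
                out.append(TABLE[sym])
            return "".join(out)

        print(encode("2221322212"))   # mostly 2s -> mostly single-bit codewords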

  15. Research on universal combinatorial coding.

    Science.gov (United States)

    Lu, Jun; Zhang, Zhuo; Mo, Juan

    2014-01-01

    The concept of universal combinatorial coding is proposed. Relations exist, to a greater or lesser degree, among many coding methods, which suggests that a universal coding method objectively exists and can serve as a bridge connecting many coding methods. Universal combinatorial coding is lossless and is based on combinatorics theory. Its combinational and exhaustive properties make it closely related to existing coding methods. Universal combinatorial coding does not depend on the probability statistics of the information source, and its characteristics span the three branches of coding. This paper analyzes the relationship between universal combinatorial coding and a variety of coding methods and investigates several application technologies of this coding method. In addition, the efficiency of universal combinatorial coding is analyzed theoretically. The multiple characteristics and applications of universal combinatorial coding are unique among existing coding methods. Universal combinatorial coding has both theoretical research and practical application value.

  16. Correcting for telluric absorption: Methods, case studies, and release of the TelFit code

    Energy Technology Data Exchange (ETDEWEB)

    Gullikson, Kevin; Kraus, Adam [Department of Astronomy, University of Texas, 2515 Speedway, Stop C1400, Austin, TX 78712 (United States); Dodson-Robinson, Sarah [Department of Physics and Astronomy, 217 Sharp Lab, Newark, DE 19716 (United States)

    2014-09-01

    Ground-based astronomical spectra are contaminated by the Earth's atmosphere to varying degrees in all spectral regions. We present a Python code that can accurately fit a model to the telluric absorption spectrum present in astronomical data, with residuals of ∼3%-5% of the continuum for moderately strong lines. We demonstrate the quality of the correction by fitting the telluric spectrum in a nearly featureless A0V star, HIP 20264, as well as to a series of dwarf M star spectra near the 819 nm sodium doublet. We directly compare the results to an empirical telluric correction of HIP 20264 and find that our model-fitting procedure is at least as good and sometimes more accurate. The telluric correction code, which we make freely available to the astronomical community, can be used as a replacement for telluric standard star observations for many purposes.

  17. GTNEUT: A code for the calculation of neutral particle transport in plasmas based on the Transmission and Escape Probability method

    Science.gov (United States)

    Mandrekas, John

    2004-08-01

    GTNEUT is a two-dimensional code for the calculation of the transport of neutral particles in fusion plasmas. It is based on the Transmission and Escape Probabilities (TEP) method and can be considered a computationally efficient alternative to traditional Monte Carlo methods. The code has been benchmarked extensively against Monte Carlo and has been used to model the distribution of neutrals in fusion experiments. Program summary: Title of program: GTNEUT; Catalogue identifier: ADTX; Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTX; Computer for which the program is designed and others on which it has been tested: the program was developed on a SUN Ultra 10 workstation and has been tested on other Unix workstations and PCs; Operating systems or monitors under which the program has been tested: Solaris 8, 9, HP-UX 11i, Linux Red Hat v8.0, Windows NT/2000/XP; Programming language used: Fortran 77; Memory required to execute with typical data: 6 219 388 bytes; No. of bits in a word: 32; No. of processors used: 1; Has the code been vectorized or parallelized?: no; No. of bytes in distributed program, including test data, etc.: 300 709; No. of lines in distributed program, including test data, etc.: 17 365; Distribution format: compressed tar gzip file; Keywords: neutral transport in plasmas, escape probability methods. Nature of physical problem: this code calculates the transport of neutral particles in thermonuclear plasmas in two-dimensional geometric configurations. Method of solution: the code is based on the Transmission and Escape Probability (TEP) methodology [1], which is part of the family of integral transport methods for neutral particles and neutrons. The resulting linear system of equations is solved by standard direct linear system solvers (sparse and non-sparse versions are included). Restrictions on the complexity of the problem: The current version of the code can

  18. The Digital Encryption Method of the Webpage Code

    Institute of Scientific and Technical Information of China (English)

    瞿波

    2013-01-01

    This paper introduces an encryption method that uses JavaScript functions to transform webpage source code into numeric code, i.e., the digital encryption method for webpage code. The method guarantees that the webpage still displays normally in the browser while quite ingeniously protecting the page's source code, and thus has considerable practical value. After explaining the principle of the encryption method in detail, the paper presents the source program of the method.

  19. A Method to Assign Spread Codes Based on Passive RFID Communication for Energy Harvesting Wireless Sensors Using Spread Spectrum Transmission

    Directory of Open Access Journals (Sweden)

    Ken Takahashi

    2015-08-01

    Full Text Available Considerable research has been conducted on systems that collect real-world information by using numerous energy-harvesting wireless sensors. The sensors need to be tiny, cheap, and consume ultra-low energy. However, such sensors have some functional limits, including being restricted to transmission-only wireless communication. Therefore, when more than one sensor simultaneously transmits information in these systems, the receiver may not be able to demodulate if the sensors cannot accommodate multiple access. To solve this problem, a number of proposals have been made based on spread spectrum technologies for resistance to interference. In this paper, we point out some problems regarding the application of such sensors and explain the assignment of spread codes based on passive radio frequency identification (RFID) communication. During spread code assignment the system cannot work, so an efficient assignment method is desirable. We consider two assignment methods and assess them in terms of total assignment time through an experiment. The results show the total assignment time in the case of Electronic Product Code (EPC) Global Class-1 Generation-2, an international standard for wireless protocols, and the relationship between the ratio of the time taken by the read/write command and the ratio of the total assignment times of the two methods. This implies that more efficient methods can be obtained by considering the time ratio of the read/write command.

  20. Method for computing self-consistent solution in a gun code

    Science.gov (United States)

    Nelson, Eric M

    2014-09-23

    Complex gun code computations can be made to converge more quickly through the selection of one or more relaxation parameters. An eigenvalue analysis is applied to the error residuals to identify two error eigenvalues, each associated with its own error residual. Relaxation values can be selected based on these eigenvalues so that the error residuals associated with each are alternately reduced in successive iterations. In some examples, relaxation values that would be unstable if used alone can be used.

  1. Source Code Plagiarism Detection Method Using Protégé Built Ontologies

    OpenAIRE

    Ion SMEUREANU; Bogdan IANCU

    2013-01-01

    Software plagiarism is a growing and serious problem that affects computer science universities in particular and the quality of education in general. More and more students tend to copy their thesis's software from older theses or internet databases. Checking source codes manually to detect whether they are similar or the same is a laborious and time-consuming job, maybe even impossible due to the existence of large digital repositories. Ontology is a way of describing a document's semantics, so it ...

  2. Euler Technology Assessment program for preliminary aircraft design employing SPLITFLOW code with Cartesian unstructured grid method

    Science.gov (United States)

    Finley, Dennis B.

    1995-01-01

    This report documents results from the Euler Technology Assessment program. The objective was to evaluate the efficacy of Euler computational fluid dynamics (CFD) codes for use in preliminary aircraft design. Both the accuracy of the predictions and the rapidity of calculations were to be assessed. This portion of the study was conducted by Lockheed Fort Worth Company, using a recently developed in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages for this study, including ease of volume grid generation and reduced number of cells compared to other grid schemes. SPLITFLOW also includes grid adaptation of the volume grid during the solution convergence to resolve high-gradient flow regions. This proved beneficial in resolving the large vortical structures in the flow for several configurations examined in the present study. The SPLITFLOW code predictions of the configuration forces and moments are shown to be adequate for preliminary design analysis, including predictions of sideslip effects and the effects of geometry variations at low and high angles of attack. The time required to generate the results from initial surface definition is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.

  3. Portable implementation of implicit methods for the UEDGE and BOUT codes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Rognlien, T D; Xu, X Q

    1999-02-17

    A description is given of the parallelization algorithms and results for two codes used extensively to model edge plasmas in magnetic fusion energy devices. The codes are UEDGE, which calculates two-dimensional plasma and neutral gas profiles, and BOUT, which calculates three-dimensional plasma turbulence using experimental or UEDGE profiles. Both codes describe the plasma behavior using fluid equations. A domain decomposition model is used for parallelization by dividing the global spatial simulation region into a set of domains. This approach allows the use of two recently developed LLNL Newton-Krylov numerical solvers, PVODE and KINSOL. Results show an order of magnitude speed-up in execution time for the plasma equations with UEDGE. A problem identified for UEDGE is the solution of the fluid gas equations on a highly anisotropic mesh. The speed-up of BOUT is closer to two orders of magnitude, especially if one includes the initial improvement from switching to the fully implicit Newton-Krylov solver. The turbulent transport coefficients obtained from BOUT guide the use of anomalous transport models within UEDGE, with the eventual goal of a self-consistent coupling.

  4. Study on fault diagnosis method for nuclear power plant based on hadamard error-correcting output code

    Science.gov (United States)

    Mu, Y.; Sheng, G. M.; Sun, P. N.

    2017-05-01

    Real-time fault diagnosis technology for nuclear power plants (NPPs) is of great significance for improving the safety and economy of the reactor. Failure samples from nuclear power plants are difficult to obtain, and the support vector machine is an effective algorithm for such small-sample problems. An NPP is a very complex system, so many types of failure can in fact occur. The ECOC matrix is constructed from a Hadamard error-correcting code, the decoding method is the Hamming distance method, and the base models are established with the lib-SVM algorithm. The results show that this method can diagnose NPP faults effectively.
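
    A minimal sketch of Hadamard-based error-correcting output codes (ECOC) with Hamming-distance decoding, the construction named above; the per-bit classifier outputs are simulated here rather than produced by trained SVMs.

        import numpy as np
        from scipy.linalg import hadamard

        H = hadamard(8)
        # Drop the all-ones row: 7 classes, each with an 8-bit codeword.
        codebook = (H[1:, :] > 0).astype(int)

        def decode(bits):
            # Choose the class whose codeword is nearest in Hamming distance.
            return int(np.argmin(np.abs(codebook - bits).sum(axis=1)))

        noisy = codebook[3].copy()
        noisy[0] ^= 1                      # one base classifier errs
        print(decode(noisy))               # -> 3: the single error is corrected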

  5. An Investigation of the Methods of Logicalizing the Code-Checking System for Architectural Design Review in New Taipei City

    Directory of Open Access Journals (Sweden)

    Wei-I Lee

    2016-12-01

    Full Text Available The New Taipei City Government developed a Code-checking System (CCS) using Building Information Modeling (BIM) technology to facilitate architectural design review in 2014. This system was intended to solve problems caused by cognitive gaps between designer and reviewer in the design review process. Besides the information technology itself, the most important issue for the system's development has been the logicalization of literal building codes. Therefore, to enhance the reliability and performance of the CCS, this study uses the Fuzzy Delphi Method (FDM), on the basis of design thinking and communication theory, to investigate the semantic differences and cognitive gaps among participants in the design review process and to propose a direction for system development. Our empirical results lead us to recommend grouping, multi-stage screening, and weighted assisted logicalization of non-quantitative building codes to improve the operability of the CCS. Furthermore, the CCS should integrate an Expert Evaluation System (EES) to evaluate design value under qualitative building codes.

  6. A Method to Assess Robustness of GPS C/A Code in Presence of CW Interferences

    Directory of Open Access Journals (Sweden)

    Beatrice Motella

    2010-01-01

    Full Text Available Navigation/positioning platforms integrated with wireless communication systems are being used in a rapidly growing number of new applications. The mutual benefits they can obtain from each other are intrinsically related to the interoperability level and to a properly designed coexistence. In this paper a new family of curves, called the Interference Error Envelope (IEE), is used to assess the impact of possible interference due to other systems (e.g., communications) transmitting in bandwidths close to Global Navigation Satellite System (GNSS) signals. The focus is on the analysis of the robustness of the GPS C/A code against Continuous Wave (CW) interference.

  7. Research on Precoding Method in Raptor Code

    Institute of Scientific and Technical Information of China (English)

    孟庆春; 王晓京

    2007-01-01

    Building on an introduction to LT codes, this paper further discusses Raptor codes. Precoding is the core technique adopted by Raptor codes; it overcomes the drawback that the decoding cost of LT codes is not fixed. In view of this, the paper analyzes the multi-layer parity-check precoding technique and, on that basis, proposes an improved method based on RS codes. The method has advantages such as a high decoding rate and is well suited to solving the security problems of network transmission.

  8. Efficient image coding method based on adaptive Gabor discrete cosine transforms

    Science.gov (United States)

    Wang, Hang; Yan, Hong

    1993-01-01

    The Gabor transform is very useful for image compression, but its implementation is very complicated and time consuming because the Gabor elementary functions are not mutually orthogonal. An efficient algorithm that combines the successive overrelaxation iteration and the look-up table techniques can be used to carry out the Gabor transform. The performance of the Gabor transform can be improved by using a modified transform, a Gabor discrete cosine transform (DCT). We present an adaptive Gabor DCT image coding algorithm. Experimental results show that a better performance can be achieved with the adaptive Gabor DCT than with the Gabor DCT.

  9. A fast blind recognition method for RS codes

    Institute of Scientific and Technical Information of China (English)

    戚林; 郝士琦; 王磊; 王勇

    2011-01-01

    A fast blind recognition method for RS codes is proposed. The method exploits the cyclic shift property of the equivalent binary block codes of RS codes. By applying the Euclidean algorithm to an RS codeword and its cyclically shifted codeword, the greatest common divisor is obtained. From the correlation properties of the exponents of the greatest common divisor, the code length is estimated and erroneous codewords are quickly eliminated. The primitive polynomial and generator polynomial are then recognized using the Galois Field Fourier Transform (GFFT). Simulation results show that the proposed algorithm has low complexity and a small computational load, and that the recognition probability for RS codes is above 90% at a bit error rate of 1×10⁻³.

  10. Comparison of direct and quasi-static methods for neutron kinetic calculations with the EDF R and D COCAGNE code

    Energy Technology Data Exchange (ETDEWEB)

    Girardi, E.; Guerin, P. [Electricite de France - RandD, 1 av. du General de Gaulle, 92141, Clamart (France); Dulla, S.; Nervo, M.; Ravetto, P. [Dipartimento di Energetica, Politecnico di Torino, 24, c.so Duca degli Abruzzi, 10129, Torino (Italy)

    2012-07-01

    Quasi-Static (QS) methods are quite popular in the reactor physics community, and they exhibit two main advantages. First, these methods overcome both the limits of the Point Kinetics (PK) approach and the computational cost issues related to the direct discretization of the time-dependent neutron transport equation. Second, QS methods can be implemented in such a way that they can easily be coupled to very different external spatial solvers. In this paper, the results of the coupling between the QS methods developed by Politecnico di Torino and the EDF R and D core code COCAGNE are presented. The goal of these activities is to evaluate the performance of QS methods (in terms of computational cost and precision) with respect to the direct kinetic solver (e.g., the {theta}-scheme) already available in COCAGNE. Additionally, they allow an extensive cross-validation of different kinetic models (QS and direct methods) to be performed. (authors)

  11. Comparison of methods for auto-coding causation of injury narratives.

    Science.gov (United States)

    Bertke, S J; Meyers, A R; Wurzelbacher, S J; Measure, A; Lampl, M P; Robins, D

    2016-03-01

    Manually reading free-text narratives in large databases to identify the cause of an injury can be very time consuming, and recently there has been much work on automating this process. In particular, variations of the naïve Bayes model have been used to successfully auto-code free-text narratives describing the event/exposure leading to the injury in workers' compensation claims. This paper compares the naïve Bayes model with an alternative logistic model and finds that the new model outperforms the naïve Bayes model. Further modest improvements were found through the addition of sequences of keywords to the models, as opposed to consideration of single keywords only. The programs and weights used in this paper are available upon request to researchers without a training set who wish to automatically assign event codes to large data sets of text narratives. The utility of sharing this program was tested on an outside set of injury narratives provided by the Bureau of Labor Statistics, with promising results.
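
    A toy multinomial naïve Bayes classifier of the kind compared above; the narratives and event codes are invented stand-ins for the real training data.

        import math
        from collections import Counter, defaultdict

        train = [
            ("fell from ladder while painting", "FALL"),
            ("slipped on wet floor and fell", "FALL"),
            ("caught hand in press machine", "CAUGHT_IN"),
            ("finger caught between rollers", "CAUGHT_IN"),
        ]

        prior = Counter(code for _, code in train)
        word_counts = defaultdict(Counter)
        for text, code in train:
            word_counts[code].update(text.split())
        vocab = {w for c in word_counts.values() for w in c}

        def classify(text):
            def logpost(code):
                lp = math.log(prior[code] / len(train))
                total = sum(word_counts[code].values())
                for w in text.split():
                    # Laplace smoothing over the shared vocabulary
                    lp += math.log((word_counts[code][w] + 1)
                                   / (total + len(vocab)))
                return lp
            return max(prior, key=logpost)

        print(classify("worker fell off a ladder"))   # -> FALL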

  12. Phase transfer function based method to alleviate image artifacts in wavefront coding imaging system

    Science.gov (United States)

    Mo, Xutao; Wang, Jinjiang

    2013-09-01

    The wavefront coding technique can extend the depth of field (DOF) of an incoherent imaging system. Several rectangularly separable phase masks (cubic, exponential, logarithmic, sinusoidal, rational, et al.) have been proposed and discussed, because they can extend the DOF up to ten times that of an ordinary imaging system. However, research has shown that the images are damaged by artifacts, which usually come from differences between the non-linear phase transfer function (PTF) used in the image restoration filter and the PTF of the real imaging condition. In order to alleviate image artifacts in imaging systems with wavefront coding, an optimization model based on the PTF is proposed to make the PTF invariant with defocus. An image restoration filter based on the average PTF over the designed depth of field is then introduced along with the PTF-based optimization. The combination of the proposed optimization and image restoration alleviates the artifacts, as confirmed by imaging simulation of a spoke target. The cubic phase mask (CPM) and exponential phase mask (EPM) are discussed as examples.

  13. Development of breached pin performance analysis code SAFFRON (System of Analyzing Failed Fuel under Reactor Operation by Numerical method)

    Energy Technology Data Exchange (ETDEWEB)

    Ukai, Shigeharu [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center

    1995-03-01

    On the assumption of fuel pin failure, the breached pin performance analysis code SAFFRON was developed to evaluate fuel pin behavior, in relation to the delayed neutron signal response, during operation beyond cladding failure. The following characteristic behaviors of a breached fuel pin are modeled with a 3-dimensional finite element method: pellet swelling due to the fuel-sodium reaction, fuel temperature change, and the resulting extension of the cladding breach and release of delayed neutron precursors into the coolant. In particular, a practical numerical algorithm for the finite element method was originally developed in order to solve the 3-dimensional non-linear contact problem between the pellet, swollen by the fuel-sodium reaction, and the breached cladding. (author).

  14. Method for Allocating Walsh Codes by Complete Group Information Walsh Code in CDMA Cellular System

    Institute of Scientific and Technical Information of China (English)

    Waleej Haider; Seema Ansari; Muhammad Nouman Durrani

    2009-01-01

    A method for allocating Walsh codes by group in a CDMA (Code Division Multiple Access) cellular system is disclosed. The proposed system provides a method for grouping, allocating, removing, and detecting the minimum-traffic group so as to minimize the time needed to allocate a call or transmitted data to an idle Walsh code, thereby improving the performance of the system and reducing the time required to set up the call. The new concept of CGIWC is presented to solve the allocation of calls or data to, and their removal from, the Walsh codes. Preferably, these steps are performed by a BCS (Base station Call control Processor) at a CDMA base station. Moreover, a comparison with previous work is shown in support of our related work. At the end, future directions in which the related work can be employed are highlighted.
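
    As background for the allocation scheme (the grouping and bookkeeping of the disclosed method are not reproduced here), a short sketch of where the Walsh codes themselves come from: the rows of a Sylvester-Hadamard matrix are mutually orthogonal, which is what lets each call ride its own code without interfering.

        import numpy as np

        def walsh_codes(n):              # n must be a power of two
            H = np.array([[1]])
            while H.shape[0] < n:
                H = np.block([[H, H], [H, -H]])
            return H

        W = walsh_codes(8)
        print(W @ W.T)                   # 8*I: distinct codes correlate to zero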

  15. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    Science.gov (United States)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
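
    A small numeric sketch of the double-difference idea (the two "bands" below are invented): an adjacent-delta within each data set followed by a cross-delta between them leaves small, near-zero residues that entropy-code cheaply, and the operation inverts exactly.

        import numpy as np

        band_a = np.array([10, 12, 15, 19, 24], dtype=np.int64)
        band_b = np.array([11, 13, 17, 20, 26], dtype=np.int64)

        adj_a = np.diff(band_a, prepend=band_a[0])   # adjacent-delta, set 1
        adj_b = np.diff(band_b, prepend=band_b[0])   # adjacent-delta, set 2
        double_diff = adj_b - adj_a                  # cross-delta of the deltas
        print(double_diff)                           # small values near zero

        # Post-decoding inverse: band_b is recovered from band_a, the first
        # sample of band_b, and the double-difference data set.
        rec_b = band_b[0] + np.cumsum(double_diff + adj_a)
        print(np.array_equal(rec_b, band_b))         # -> True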

  16. Analysis methods of safe Coulomb-excitation experiments with radioactive ion beams using the GOSIA code

    Energy Technology Data Exchange (ETDEWEB)

    Zielinska, M. [CEA Saclay, IRFU/SPhN, Gif-sur-Yvette (France); Gaffney, L.P. [KU Leuven, Instituut voor Kern- en Stralingsfysica, Leuven (Belgium); University of the West of Scotland, School of Engineering, Paisley (United Kingdom); Wrzosek-Lipska, K. [KU Leuven, Instituut voor Kern- en Stralingsfysica, Leuven (Belgium); University of Warsaw, Heavy Ion Laboratory, Warsaw (Poland); Clement, E. [GANIL, Caen Cedex (France); Grahn, T.; Pakarinen, J. [University of Jyvaskylae, Department of Physics, Jyvaskylae (Finland); University of Helsinki, Helsinki Institute of Physics, Helsinki (Finland); Kesteloot, N. [KU Leuven, Instituut voor Kern- en Stralingsfysica, Leuven (Belgium); SCK-CEN, Belgian Nuclear Research Centre, Mol (Belgium); Napiorkowski, P. [University of Warsaw, Heavy Ion Laboratory, Warsaw (Poland); Duppen, P. van [KU Leuven, Instituut voor Kern- en Stralingsfysica, Leuven (Belgium); Warr, N. [Technische Universitaet Darmstadt, Institut fuer Kernphysik, Darmstadt (Germany)

    2016-04-15

    With the recent advances in radioactive ion beam technology, Coulomb excitation at safe energies becomes an important experimental tool in nuclear-structure physics. The usefulness of the technique to extract key information on the electromagnetic properties of nuclei has been demonstrated since the 1960s with stable beam and target combinations. New challenges present themselves when studying exotic nuclei with this technique, including dealing with low statistics or number of data points, absolute and relative normalisation of the measured cross-sections and a lack of complementary experimental data, such as excited-state lifetimes and branching ratios. This paper addresses some of these common issues and presents analysis techniques to extract transition strengths and quadrupole moments utilising the least-squares fit code, GOSIA. (orig.)

  17. A novel Morse code-inspired method for multiclass motor imagery brain-computer interface (BCI) design.

    Science.gov (United States)

    Jiang, Jun; Zhou, Zongtan; Yin, Erwei; Yu, Yang; Liu, Yadong; Hu, Dewen

    2015-11-01

    Motor imagery (MI)-based brain-computer interfaces (BCIs) allow disabled individuals to control external devices voluntarily, helping to restore lost motor functions. However, the number of control commands available in MI-based BCIs remains limited, restricting the usability of BCI systems in control applications involving multiple degrees of freedom (DOF), such as control of a robot arm. To address this problem, we developed a novel Morse code-inspired method for MI-based BCI design to increase the number of output commands. Using this method, brain activities are modulated by sequences of MI (sMI) tasks, which are constructed by alternately imagining movements of the left or right hand or no motion. The codes of the sMI task were detected from EEG signals and mapped to special commands. According to permutation theory, a length-N sMI task allows 2 × (2^N − 1) possible commands with the left and right MI tasks under self-paced conditions. To verify its feasibility, the new method was used to construct a six-class BCI system to control the arm of a humanoid robot. Four subjects participated in our experiment and the average accuracy on the six-class sMI tasks was 89.4%. The Cohen's kappa coefficient and the throughput of our BCI paradigm are 0.88 ± 0.060 and 23.5 bits per minute (bpm), respectively. Furthermore, all of the subjects could operate an actual three-joint robot arm to grasp an object in around 49.1 s using our approach. These promising results suggest that the Morse code-inspired method could be used in the design of BCIs for multi-DOF control. Copyright © 2015 Elsevier Ltd. All rights reserved.
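
    The command count quoted above is easy to verify by enumeration (L and R stand for left- and right-hand imagery; self-paced termination is what makes every sequence length from 1 to N usable):

        from itertools import product

        def smi_commands(n):
            cmds = []
            for length in range(1, n + 1):
                cmds.extend(product("LR", repeat=length))
            return cmds

        for n in (1, 2, 3):
            print(n, len(smi_commands(n)), 2 * (2 ** n - 1))   # counts agree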

  18. Java Code Generation Method of CIM Model

    Institute of Scientific and Technical Information of China (English)

    余永忠; 王永才

    2013-01-01

    This paper introduces a method for transforming a CIM model into Java code. The CIM model models the objects of an electric power enterprise and is the basis for CIM applications; transforming the CIM model into Java code meets practical needs. The paper briefly introduces the CIM model and the EMF framework, and describes how Java code is generated by first converting the CIM model into an EMF model, providing a reference for the practical application of the CIM model.

  19. Benchmarking of the dose planning method (DPM) Monte Carlo code using electron beams from a racetrack microtron.

    Science.gov (United States)

    Chetty, Indrin J; Moran, Jean M; McShan, Daniel L; Fraass, Benedick A; Wilderman, Scott J; Bielajew, Alex F

    2002-06-01

    A comprehensive set of measurements and calculations has been conducted to investigate the accuracy of the Dose Planning Method (DPM) Monte Carlo code for dose calculations from 10 and 50 MeV scanned electron beams produced from a racetrack microtron. Central axis depth dose measurements and a series of profile scans at various depths were acquired in a water phantom using a Scanditronix type RK ion chamber. Source spatial distributions for the Monte Carlo calculations were reconstructed from in-air ion chamber measurements carried out across the two-dimensional beam profile at 100 cm downstream from the source. The in-air spatial distributions were found to have full width at half maximum of 4.7 and 1.3 cm, at 100 cm from the source, for the 10 and 50 MeV beams, respectively. Energy spectra for the 10 and 50 MeV beams were determined by simulating the components of the microtron treatment head using the code MCNP4B. DPM calculations are on average within +/- 2% agreement with measurement for all depth dose and profile comparisons conducted in this study. The accuracy of the DPM code illustrated in this work suggests that DPM may be used as a valuable tool for electron beam dose calculations.

  20. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminace correction and optimized prediction

    Science.gov (United States)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing natural, real scenes as we see them in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, since more information must be displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the correlation between the two images. This is done by designing an efficient transform that reduces the redundancy in the stereo image pair. The approach was inspired by the Lifting Scheme (LS). The novelty of our work is that the prediction step has been replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and lossy coding. Experimental results show improvements in terms of performance and complexity compared to recently proposed methods.

  1. Application of wavelet filtering and Barker-coded pulse compression hybrid method to air-coupled ultrasonic testing

    Science.gov (United States)

    Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping

    2014-10-01

    The air-coupled ultrasonic testing (ACUT) technique has been viewed as a viable solution for defect detection in the advanced composites used in the aerospace and aviation industries. However, the giant mismatch of acoustic impedance at the air-solid interface makes the transmission efficiency of ultrasound low and leads to a poor signal-to-noise ratio (SNR) in the received signal. The use of signal-processing techniques in non-destructive testing is therefore highly valuable. This paper presents a hybrid method combining wavelet filtering and phase-coded pulse compression to improve the SNR and output power of the received signal. The wavelet transform is used to filter insignificant components from the noisy ultrasonic signal, and pulse compression based on a cross-correlation algorithm is used to increase the power of the correlated signal. For reasonable parameter selection, different wavelet families (Daubechies, Symlet, and Coiflet) and decomposition levels of the discrete wavelet transform are analyzed, and different Barker codes (5-13 bits) are analyzed to acquire a higher main-to-side-lobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrate that the proposed method is very efficient in improving the SNR and signal strength, and it appears to be a very promising tool for evaluating the integrity of composite materials with high ultrasound attenuation using ACUT.
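
    A compact sketch of the Barker-coded pulse-compression stage (waveform, noise level, and echo position are invented): correlating the received trace with the known 13-bit Barker sequence compresses the echo into a sharp peak well above the noise.

        import numpy as np

        barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)

        rng = np.random.default_rng(0)
        trace = rng.normal(0, 0.5, 300)        # noisy received signal
        trace[120:133] += 1.0 * barker13       # buried echo at sample 120

        compressed = np.correlate(trace, barker13, mode="valid")
        print(int(np.argmax(np.abs(compressed))))   # -> 120: echo located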

  2. Measuring the implementation of codes of conduct. An assessment method based on a process approach of the responsible organisation

    NARCIS (Netherlands)

    Nijhof, André; Cludts, Stephan; Fisscher, Olaf; Laan, Albertus

    2003-01-01

    More and more organisations formulate a code of conduct in order to stimulate responsible behaviour among their members. Much time and energy is usually spent fixing the content of the code but many organisations get stuck in the challenge of implementing and maintaining the code. The code then turn

  3. Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels

    OpenAIRE

    Arikan, Erdal

    2008-01-01

    A method is proposed, called channel polarization, to construct code sequences that achieve the symmetric capacity $I(W)$ of any given binary-input discrete memoryless channel (B-DMC) $W$. The symmetric capacity is the highest rate achievable subject to using the input letters of the channel with equal probability. Channel polarization refers to the fact that it is possible to synthesize, out of $N$ independent copies of a given B-DMC $W$, a second set of $N$ binary-input channels $\{W_N^{(i)...
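
    A tiny numeric sketch of the polarization effect for the binary erasure channel (the BEC recursion below is the standard textbook special case, not the paper's general construction): each kernel step turns two channels with erasure probability e into a worse one (2e − e²) and a better one (e²), and recursion drives almost every synthesized channel toward perfect or useless.

        def polarize(erasures):
            out = []
            for e in erasures:
                out += [2 * e - e * e, e * e]   # minus / plus channel of the kernel
            return out

        chans = [0.5]
        for _ in range(10):                     # N = 2^10 synthesized channels
            chans = polarize(chans)
        print(sum(c < 1e-3 for c in chans) / len(chans))  # fraction near-perfect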

  4. Motion estimation using low-band-shift method for wavelet-based moving-picture coding.

    Science.gov (United States)

    Park, H W; Kim, H S

    2000-01-01

    The discrete wavelet transform (DWT) has several advantages of multiresolution analysis and subband decomposition, which has been successfully used in image processing. However, the shift-variant property is intrinsic due to the decimation process of the wavelet transform, and it makes the wavelet-domain motion estimation and compensation inefficient. To overcome the shift-variant property, a low-band-shift method is proposed and a motion estimation and compensation method in the wavelet-domain is presented. The proposed method has a superior performance to the conventional motion estimation methods in terms of the mean absolute difference (MAD) as well as the subjective quality. The proposed method can be a model method for the motion estimation in wavelet-domain just like the full-search block matching in the spatial domain.

  5. A Simple Method for Static Load Balancing of Parallel FDTD Codes

    DEFF Research Database (Denmark)

    Franek, Ondrej

    2016-01-01

    A static method for balancing computational loads in parallel implementations of the finite-difference time-domain method is presented. The procedure is fairly straightforward and computationally inexpensive, thus providing an attractive alternative to optimization techniques. The method is described for partitioning in a single mesh dimension, but it is shown that it can be adapted for 2D and 3D partitioning in an approximate way, with good results. It is applicable to both homogeneous and heterogeneous parallel architectures, and can also be used for balancing memory on distributed-memory architectures.
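
    A minimal sketch of the kind of static 1D partitioning involved (per-cell costs are invented; the paper's actual weighting model is not reproduced): cut the mesh so each rank's accumulated cost lands near the global average.

        def partition(costs, workers):
            # Return cut indices so each partition's cost is near total/workers.
            total = sum(costs)
            target = total / workers
            bounds, acc, next_cut = [], 0.0, target
            for i, c in enumerate(costs):
                acc += c
                if acc >= next_cut and len(bounds) < workers - 1:
                    bounds.append(i + 1)        # cut after cell i
                    next_cut += target
            return bounds

        costs = [1.0] * 60 + [2.5] * 40         # heterogeneous cell costs
        print(partition(costs, 4))              # cut indices for 4 ranks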

  6. Improved method for predicting the peak signal-to-noise ratio quality of decoded images in fractal image coding

    Science.gov (United States)

    Wang, Qiang; Bi, Sheng

    2017-01-01

    To predict the peak signal-to-noise ratio (PSNR) quality of decoded images in fractal image coding more efficiently and accurately, an improved method is proposed. After some derivations and analyses, we find that the linear correlation coefficients between coded range blocks and their respective best-matched domain blocks can determine the dynamic range of their collage errors, which can also provide the minimum and the maximum of the accumulated collage error (ACE) of uncoded range blocks. Moreover, the dynamic range of the actual percentage of accumulated collage error (APACE), APACEmin to APACEmax, can be determined as well. When APACEmin reaches a large value, such as 90%, APACEmin to APACEmax will be limited in a small range and APACE can be computed approximately. Furthermore, with ACE and the approximate APACE, the ACE of all range blocks and the average collage error (ACER) can be obtained. Finally, with the logarithmic relationship between ACER and the PSNR quality of decoded images, the PSNR quality of decoded images can be predicted directly. Experiments show that compared with the previous similar method, the proposed method can predict the PSNR quality of decoded images more accurately and needs less computation time simultaneously.

  7. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk, and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk, and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible, and secure services that can carry a multitude of signal types (such as voice, data, and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the

  8. A Novel Multi-Focus Image Fusion Method Based on Stochastic Coordinate Coding and Local Density Peaks Clustering

    Directory of Open Access Journals (Sweden)

    Zhiqin Zhu

    2016-11-01

    Full Text Available The multi-focus image fusion method is used in image processing to generate all-focus images with a large depth of field (DOF) from the original multi-focus images. Different approaches have been used in the spatial and transform domains to fuse multi-focus images. As one of the most popular image processing methods, dictionary-learning-based sparse representation achieves great performance in multi-focus image fusion. Most existing dictionary-learning-based multi-focus image fusion methods directly use the whole source images for dictionary learning; however, this incurs a high error rate and a high computation cost in the dictionary learning process. This paper proposes a novel stochastic-coordinate-coding-based image fusion framework integrated with local density peaks clustering. The proposed multi-focus image fusion method consists of three steps. First, the source images are split into small image patches, which are then classified into a few groups by local density peaks clustering. Next, the grouped image patches are used for sub-dictionary learning by stochastic coordinate coding, and the trained sub-dictionaries are combined into a dictionary for sparse representation. Finally, the simultaneous orthogonal matching pursuit (SOMP) algorithm is used to carry out sparse representation, and the obtained sparse coefficients are fused following the max L1-norm rule. The fused coefficients are inversely transformed to an image using the learned dictionary. The results and analyses of comparison experiments demonstrate that the fused images of the proposed method have higher quality than those of existing state-of-the-art methods.

  9. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  10. Research on an IPv6 Path Reconstruction Algorithm Based on Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    胡清钟; 张斌

    2013-01-01

    Packet marking is a commonly used IP traceback technique: path information is marked into a dedicated field of the IP header, and the attack path can be reconstructed from the marks carried in received packets, allowing the source of an attack to be traced. Because the marking space is limited, each packet carries only partial information, so several marked packets are usually needed to reconstruct one attack path; path-reconstruction algorithms are therefore complex and suffer from low efficiency and accuracy. To address this problem, a Huffman-coding-based path reconstruction algorithm is proposed, which marks the link information associated with the previous-hop router into the marking field as a Huffman code, without storing marking information at intermediate nodes. The algorithm is designed for IPv6 networks and can accurately reconstruct an attack path from a single marked packet. Experimental results show that the proposed algorithm reconstructs paths quickly with high efficiency and accuracy.

  11. Compression Technology Based on Huffman Coding in Java

    Institute of Scientific and Technical Information of China (English)

    陈旭辉; 范肖南; 巩天宁

    2008-01-01

    At present, two kinds of lossless compression are in wide use: phrase-based (dictionary) compression and coding-based compression. This paper describes a file-compression utility implemented in the Java programming language using the Huffman algorithm, an instance of coding-based compression.

  12. An Algorithm for Text Information Hiding Based on Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    戴祖旭; 洪帆; 董洁

    2007-01-01

    A natural-language sentence can be transformed into a part-of-speech tag string, i.e., a sentence pattern. This paper proposes an information-hiding algorithm based on Huffman coding of sentence patterns: a Huffman code is constructed from the distribution of sentence patterns, and the secret message is decoded into a sequence of sentence patterns. The positions of these patterns in the cover text serve as the key, and the secret message is extracted by Huffman-compressing the sentence patterns. A formula for the hiding capacity is given. The algorithm does not modify the cover text.

  13. Design of an Experiment Teaching Platform for Huffman Coding Based on MATLAB

    Institute of Scientific and Technical Information of China (English)

    李荣

    2015-01-01

    To support the calculations involved in the experimental teaching of Huffman coding, a simple and practical experiment teaching platform was designed and developed using the MATLAB graphical user interface. The platform combines theory with experiment and provides an effective tool for the experimental teaching of Huffman coding.

  14. Comparing Two Ways of Implementing Huffman Coding with the STL

    Institute of Scientific and Technical Information of China (English)

    孙宏; 章小莉; 赵越

    2010-01-01

    As a lossless compression method, Huffman coding is widely used in modern communications, multimedia technology and other fields. It is therefore of practical interest to implement the Huffman coding algorithm with the C++ Standard Template Library (STL). This paper discusses two implementations, one based on the STL vector container and one based on the STL heap operations, compares the performance of the two implementations, and points out issues to note when using STL resources.
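
    The heap-based approach can be sketched in a few lines; here is a rough Python analogue (using the standard heapq module rather than the C++ STL the paper discusses):

        import heapq
        from itertools import count

        def huffman_code(freqs):
            """Build a Huffman code {symbol: bitstring} from {symbol: frequency}."""
            tick = count()  # tie-breaker so the heap never compares tree nodes
            heap = [(f, next(tick), (sym, None, None)) for sym, f in freqs.items()]
            heapq.heapify(heap)
            while len(heap) > 1:
                f1, _, left = heapq.heappop(heap)    # merge the two lightest nodes
                f2, _, right = heapq.heappop(heap)
                heapq.heappush(heap, (f1 + f2, next(tick), (None, left, right)))
            code = {}
            def walk(node, prefix):
                sym, left, right = node
                if sym is not None:
                    code[sym] = prefix or "0"        # single-symbol edge case
                else:
                    walk(left, prefix + "0")
                    walk(right, prefix + "1")
            walk(heap[0][2], "")
            return code

        print(huffman_code({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))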

  15. A DSP Lossless Image Compression System Based on Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    邹文辉

    2014-01-01

    Today's society is an era of big data with a huge volume of information; images and videos are everywhere from the moment we open our eyes. People depend on images more and more and demand both high fidelity and small storage, which places higher requirements on image compression. The system described here is built on the TMS320DM6437 platform and uses Huffman coding to achieve lossless image compression, reaching a compression ratio of 1.77.

  16. Efficient Huffman-Codes-based Symmetric-key Cryptography

    Institute of Scientific and Technical Information of China (English)

    魏茜; 龙冬阳

    2010-01-01

    The need to store and transmit large volumes of data over today's networks has drawn increasing attention to research combining data compression with encryption. Although a Huffman-coded sequence is extremely hard to decipher when the symbols' probability mass function (PMF) is kept secret, the PMF used as the key is insecure and difficult to store and transmit, so the approach is rarely practical. To solve this problem, this paper proposes a highly secure one-time-pad symmetric cryptosystem based on Huffman coding. The scheme generates keys with a polynomial-time Huffman tree reconstruction algorithm and finite-field interpolation, keeps the key very short, and remains hard to break even if part of the key is compromised. The paper proves the scheme's effectiveness and security and gives an application example.

  17. A New Data Compression Algorithm Based on Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    何昭青

    2008-01-01

    This paper explores a new approach to file compression: the file is viewed as a binary stream of 0s and 1s, a fixed number of bits is defined as a "word", and the file becomes a stream of such words. The occurrence probabilities of the distinct words are counted, and the stream is then compressed with the Huffman algorithm. The compression of various file types under different word lengths is discussed, and experimental results are given for each case.
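
    The word-forming step is the only unusual part of the scheme; a minimal sketch, assuming k-bit words whose frequencies are then fed to an ordinary Huffman coder (the function name is hypothetical):

        from collections import Counter

        def word_frequencies(data: bytes, k: int) -> Counter:
            """Group the bit stream of `data` into k-bit words and count them."""
            bits = "".join(f"{byte:08b}" for byte in data)
            bits = bits[: len(bits) - len(bits) % k]    # drop the ragged tail
            return Counter(bits[i:i + k] for i in range(0, len(bits), k))

        freqs = word_frequencies(b"example payload", k=4)
        # `freqs` can now drive any Huffman-code constructor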

  18. LOB Data Exchange Based on Huffman Coding and XML

    Institute of Scientific and Technical Information of China (English)

    贾长云; 朱跃龙; 朱敏

    2006-01-01

    XML, as a standard format for heterogeneous data exchange, is widely used in data-exchange platforms. Because of their huge size, multimedia data are usually stored in databases as large-object (LOB) data, so heterogeneous data exchange inevitably involves exchanging LOB data. This paper reviews the principle of Huffman coding, proposes an XML-based method that uses Huffman coding to exchange LOB data, and designs a corresponding implementation model, providing a useful reference for implementing LOB data exchange between heterogeneous databases.

  19. An MP3 Steganography Algorithm Based on Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    高海英

    2007-01-01

    Considering the coding characteristics of MP3 audio, an audio steganography algorithm based on Huffman codeword substitution is proposed. Compared with previous MP3 steganography algorithms, it embeds the hidden message directly into the Huffman codewords of the MP3 frame data stream without partial decoding, offering high transparency, large embedding capacity and low computational cost. Experiments analyze the algorithm's transparency, embedding capacity and the statistical properties of the codewords.

  20. Research on Image Compression and Decompression Based on Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    饶兴

    2011-01-01

    Based on the characteristics of BMP images, a Huffman-coding-based compression method is proposed. Compression and decompression programs are designed in two variants, one coding the RGB channels jointly and one coding them separately; several images are then compressed and decompressed, and the experimental results are analyzed.

  1. Demo Animation Design of the Huffman Coding Process Based on Flash

    Institute of Scientific and Technical Information of China (English)

    魏三强

    2013-01-01

    Huffman coding is an important topic in data compression, and it is worth teaching with the best modern instructional tools. A demonstration animation courseware produced with Flash and its ActionScript programming builds a new visual culture and achieves a fairly realistic simulation of the Huffman coding process. Vivid, intuitive and easy to learn from, it helps improve the efficiency of both teaching and learning the Huffman coding topic.

  2. An Efficient Coding and Decoding Algorithm Based on a Generalized Canonical Huffman Tree

    Institute of Scientific and Technical Information of China (English)

    郭建光; 张卫杰; 杨健; 安文韬; 熊涛

    2009-01-01

    To reduce the time and space consumed in encoding and thus suit real-time processing, an efficient data compression algorithm based on a generalized canonical Huffman tree is proposed. The algorithm uses the level order and the probability-table order to guarantee unique encoding and decoding, replaces searching with move-and-sort operations, builds an index table to simplify sorting, and incorporates the idea of balanced code lengths. A matching decoding algorithm is derived from the same idea. Tests on real data show that, compared with the traditional Huffman algorithm, the algorithm improves time and space efficiency and produces more balanced codewords.
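
    Canonical Huffman codes are the standard ingredient behind such schemes: once the codeword lengths are fixed, the codewords follow deterministically from the (length, symbol) order, so only the lengths need to be stored or transmitted. A minimal sketch of the usual assignment rule:

        def canonical_codes(lengths):
            """Assign canonical Huffman codewords given {symbol: code length}."""
            code, prev_len, out = 0, 0, {}
            for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
                code <<= (length - prev_len)   # shift left when the length grows
                out[sym] = format(code, f"0{length}b")
                code += 1                      # consecutive codes simply increment
                prev_len = length
            return out

        print(canonical_codes({"a": 1, "b": 3, "c": 3, "d": 2}))
        # {'a': '0', 'd': '10', 'b': '110', 'c': '111'}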

  3. A Method for Compressing Files Using Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    潘玮华

    2010-01-01

    This paper introduces the idea of and method for compressing files with Huffman coding, describes in detail the design of the classes used and the concrete design of compression and decompression, and gives a complete program written in C++.

  4. Method for Face Identification with the Facial Action Coding System (FACS) Based on Eigenvalue Decomposition

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2012-12-01

    Full Text Available A method for face identification based on eigenvalue decomposition, together with tracing trajectories in the eigenspace after the decomposition, is proposed. The proposed method accommodates person-to-person differences arising from faces showing different emotions; using the well-known action-unit approach, it admits faces in different emotional states. Experimental results show that recognition performance depends on the number of targeted people: the face identification rate is 80% for four targeted people, while 100% is achieved for two.
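
    A minimal sketch of the eigen-decomposition step, assuming face images flattened into rows of a data matrix (the abstract gives no implementation details, so shapes and names here are illustrative):

        import numpy as np

        def eigenspace(faces, k):
            """Project flattened face images (n_samples x n_pixels) onto the
            top-k eigenvectors of their covariance matrix ("eigenfaces")."""
            mean = faces.mean(axis=0)
            centered = faces - mean
            cov = centered.T @ centered / len(faces)
            vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
            basis = vecs[:, ::-1][:, :k]           # keep the top-k eigenvectors
            return mean, basis, centered @ basis   # trajectories live in this space

        rng = np.random.default_rng(0)
        mean, basis, coords = eigenspace(rng.normal(size=(20, 64)), k=5)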

  5. Parallel implementation of a dynamic unstructured chimera method in the DLR finite volume TAU-code

    Energy Technology Data Exchange (ETDEWEB)

    Madrane, A.; Raichle, A.; Stuermer, A. [German Aerospace Center, DLR, Numerical Methods, Inst. of Aerodynamics and Flow Technology, Braunschweig (Germany)]. E-mail: aziz.madrane@dlr.de

    2004-07-01

    Aerodynamic problems involving moving geometries have many applications, including store separation, a high-speed train entering a tunnel, simulation of full helicopter configurations, and fast maneuverability. The overset grid method offers an option for calculating such cases. The solution process uses a grid system that discretizes the problem domain with separately generated but overlapping unstructured grids that update and exchange boundary information through interpolation. However, such computations are complicated and time consuming. Parallel computing offers a very effective way to improve productivity in computational fluid dynamics (CFD). The purpose of this study is therefore to develop an efficient parallel computation algorithm for analyzing the flowfield of complex geometries using the overset grid method. The strategy adopted in parallelizing the overset grid method, including the use of data structures and communication, is described. Numerical results are presented to demonstrate the efficiency of the resulting parallel overset grid method. (author)

  6. Fast minimum-redundancy prefix coding for real-time space data compression

    Science.gov (United States)

    Huang, Bormin

    2007-09-01

    The minimum-redundancy prefix-free code problem is to determine an array l = {l_1, ..., l_n} of n integer codeword lengths, given an array f = {f_1, ..., f_n} of n symbol occurrence frequencies, such that the Kraft-McMillan inequality 2^(-l_1) + ... + 2^(-l_n) <= 1 holds and the total number of coded bits f_1*l_1 + ... + f_n*l_n is minimized. Previous minimum-redundancy prefix-free coding based on Huffman's greedy algorithm solves this problem in O(n) time if the input array f is sorted, but in O(n log n) time if f is unsorted. In this paper a fast algorithm is proposed to solve the problem in linear time when f is unsorted. It is suitable for real-time applications in satellite communication and consumer electronics. We also develop its VLSI architecture, which consists of four modules: the frequency table builder, the codeword length table builder, the codeword table builder, and the input-to-codeword mapper.
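
    For already-sorted frequencies, the classic two-queue merge is the standard O(n) building block. A minimal sketch (this illustrates that known baseline, not the paper's algorithm for unsorted input; the depth bookkeeping here is kept simple rather than strictly linear):

        from collections import deque

        def codeword_lengths(sorted_freqs):
            """Huffman codeword lengths for non-decreasing frequencies,
            via the two-queue merge of leaves and internal nodes."""
            leaves = deque((f, [i]) for i, f in enumerate(sorted_freqs))
            merged = deque()
            depth = [0] * len(sorted_freqs)

            def pop_min():
                if not merged or (leaves and leaves[0][0] <= merged[0][0]):
                    return leaves.popleft()
                return merged.popleft()

            while len(leaves) + len(merged) > 1:
                f1, s1 = pop_min()
                f2, s2 = pop_min()
                for i in s1 + s2:        # every symbol under the merge gets deeper
                    depth[i] += 1
                merged.append((f1 + f2, s1 + s2))
            return depth

        print(codeword_lengths([5, 9, 12, 13, 16, 45]))   # -> [4, 4, 3, 3, 3, 1]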

  7. User manual for version 4.3 of the Tripoli-4 Monte-Carlo method particle transport computer code; Notice d'utilisation du code Tripoli-4, version 4.3: code de transport de particules par la methode de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Both, J.P.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B

    2003-07-01

    This manual relates to version 4.3 of the TRIPOLI-4 code. TRIPOLI-4 is a computer code simulating the transport of neutrons, photons, electrons and positrons. It can be used for radiation shielding calculations (long-distance propagation with flux attenuation in non-multiplying media) and neutronics calculations (fissile media, critical or sub-critical). This makes it possible to calculate k{sub eff} (for criticality), fluxes, currents, reaction rates and multi-group cross-sections. TRIPOLI-4 is a three-dimensional code that uses the Monte-Carlo method. It allows a point-wise description of cross-sections in energy as well as multi-group homogenized cross-sections, and features two modes of geometrical representation: surface-based and combinatorial. The code uses cross-section libraries in ENDF/B format (such as JEF2-2, ENDF/B-VI and JENDL) for point-wise cross-sections, and in APOTRIM format (from the APOLLO2 code) or a format specific to TRIPOLI-4 for multi-group descriptions. (authors)

  8. Use of an Accurate DNS Particulate Flow Method to Supply and Validate Boundary Conditions for the MFIX Code

    Energy Technology Data Exchange (ETDEWEB)

    Zhi-Gang Feng

    2012-05-31

    The simulation of particulate flows for industrial applications often requires the use of two-fluid models, where the solid particles are considered as a separate continuous phase. One of the underlying uncertainties in the use of two-fluid models in multiphase computations comes from the boundary condition of the solid phase. Typically, the gas or liquid boundary condition at a solid wall is the so-called no-slip condition, which has been widely accepted to be valid for single-phase fluid dynamics provided that the Knudsen number is low. However, the boundary condition for the solid phase is not well understood, and the no-slip condition at a solid boundary is not a valid assumption for it. Instead, several researchers advocate a slip condition as a more appropriate boundary condition. However, the question of selecting an exact slip length or slip velocity coefficient is still unanswered. Experimental or numerical simulation data are needed in order to determine the slip boundary condition that is applicable to a two-fluid model. The goal of this project is to improve the performance and accuracy of the boundary conditions used in two-fluid models such as the MFIX code, which is frequently used in multiphase flow simulations. The specific objectives of the project are to use first principles embedded in a validated Direct Numerical Simulation particulate flow program, which uses the Immersed Boundary method (DNS-IB) and the Direct Forcing scheme, in order to establish, modify and validate needed energy and momentum boundary conditions for the MFIX code. To achieve these objectives, we have developed a highly efficient DNS code and conducted numerical simulations to investigate the particle-wall and particle-particle interactions in particulate flows. Most of our research findings have been reported in major conferences and archived journals, which are listed in Section 7 of this report. In this report, we will present a

  9. Clipping and Coding Audio Files: A Research Method to Enable Participant Voice

    Directory of Open Access Journals (Sweden)

    Susan Crichton

    2005-09-01

    Full Text Available Qualitative researchers have long used ethnographic methods to make sense of complex human activities and experiences. Their blessing is that through them, researchers can collect a wealth of raw data. Their challenge is that they require the researcher to find patterns and organize the various themes and concepts that emerge during the analysis stage into a coherent narrative that a reader can follow. In this article, the authors introduce a technology-enhanced data collection and analysis method based on clipped audio files. They suggest not only that the use of appropriate software and hardware can help in this process but, in fact, that their use can honor the participants' voices, retaining the original three-dimensional recording well past the data collection stage.

  10. Ultraspectral sounder data compression using the non-exhaustive Tunstall coding

    Science.gov (United States)

    Wei, Shih-Chieh; Huang, Bormin

    2008-08-01

    With its bulky volume, ultraspectral sounder data might still suffer a few bits of error after channel coding; it is therefore beneficial to incorporate some mechanism for error containment in the source coding. The Tunstall code is a variable-to-fixed length code which can reduce the error propagation encountered in fixed-to-variable length codes like Huffman and arithmetic codes. The original Tunstall code uses an exhaustive parse tree in which internal nodes extend every symbol in branching, which can assign precious codewords to less probable parse strings. Based on an infinitely extended parse tree, a modified Tunstall code is proposed which grows an optimal non-exhaustive parse tree by assigning complete codewords only to the top-probability nodes in the infinite tree. Comparison is made among the original exhaustive Tunstall code, our modified non-exhaustive Tunstall code, the CCSDS Rice code, and JPEG2000 in terms of compression ratio and percent error rate using the ultraspectral sounder data.
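
    For reference, the exhaustive construction that the paper modifies can be sketched as follows, assuming a memoryless source (the non-exhaustive variant prunes this tree differently):

        import heapq

        def tunstall(probs, max_words):
            """Exhaustive Tunstall dictionary: repeatedly expand the most
            probable parse string by every source symbol."""
            words = dict(probs)
            heap = [(-p, w) for w, p in words.items()]
            heapq.heapify(heap)
            while len(words) + len(probs) - 1 <= max_words:
                negp, w = heapq.heappop(heap)    # most probable parse string
                del words[w]                     # it becomes an internal node...
                for s, ps in probs.items():      # ...extended by every symbol
                    words[w + s] = -negp * ps
                    heapq.heappush(heap, (negp * ps, w + s))
            return words                         # parse strings -> probabilities

        print(sorted(tunstall({"a": 0.7, "b": 0.3}, max_words=4)))
        # ['aaa', 'aab', 'ab', 'b']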

  11. MAXED, a computer code for the deconvolution of multisphere neutron spectrometer data using the maximum entropy method

    Energy Technology Data Exchange (ETDEWEB)

    Reginatto, M.; Goldhagen, P.

    1998-06-01

    The problem of analyzing data from a multisphere neutron spectrometer to infer the energy spectrum of the incident neutrons is discussed. The main features of the code MAXED, a computer program developed to apply the maximum entropy principle to the deconvolution (unfolding) of multisphere neutron spectrometer data, are described, and the use of the code is illustrated with an example. A user's guide for the code MAXED is included in an appendix. The code is available from the authors upon request.

  12. Systems and methods to control multiple peripherals with a single-peripheral application code

    Science.gov (United States)

    Ransom, Ray M.

    2013-06-11

    Methods and apparatus are provided for enhancing the BIOS of a hardware peripheral device to manage multiple peripheral devices simultaneously without modifying the application software of the peripheral device. The apparatus comprises a logic control unit and a memory in communication with the logic control unit. The memory is partitioned into a plurality of ranges, each range comprising one or more blocks of memory, one range being associated with each instance of the peripheral application and one range being reserved for storage of a data pointer related to each peripheral application of the plurality. The logic control unit is configured to operate multiple instances of the control application by duplicating one instance of the peripheral application for each peripheral device of the plurality and partitioning a memory device into partitions comprising one or more blocks of memory, one partition being associated with each instance of the peripheral application. The method then reserves a range of memory addresses for storage of a data pointer related to each peripheral device of the plurality, and initializes each of the plurality of peripheral devices.

  13. A color-code based method for the interpretation of plantar pressure measurements in clinical gait analysis.

    Science.gov (United States)

    Deschamps, Kevin; Staes, Filip; Desmet, Dirk; Roosen, Philip; Matricali, Giovanni Arnoldo; Keijsers, Noel; Nobels, Frank; Tits, Jos; Bruyninckx, Herman

    2015-03-01

    Comparing plantar pressure measurements (PPM) of a patient following an intervention, or between a reference group and a patient group, is common practice in clinical gait analysis. However, this process is often time consuming and complex, and commercially available software often lacks powerful visualization and interpretation tools. In this paper, we propose a simple method for displaying pixel-level PPM deviations relative to a so-called reference PPM pattern. The novel method contains 3 distinct stages: (1) normalization of the pedobarographic fields (for foot length and width), (2) a pixel-level z-score based calculation, and (3) color coding of the normalized pedobarographic fields. The methodological steps of the novel method are precisely described and clinical output is illustrated. The advantages of the novel method cover several domains. Its strongest advantage is that it provides a straightforward visual interpretation of PPM without decreasing the resolution. A second advantage is that it may guide the selection of a local mapping technique (data reduction technique). Finally, it may easily be used as an education tool during therapist-patient interaction.
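
    A minimal sketch of the pixel-level z-score stage, assuming size-normalized pressure fields and a stack of reference recordings (the paper's exact color scale is not reproduced; thresholds below are illustrative):

        import numpy as np

        def zscore_color_codes(patient, reference_stack):
            """Pixel-wise z-scores of a patient's plantar pressure field against
            a reference group, binned into color codes for display."""
            mu = reference_stack.mean(axis=0)
            sigma = reference_stack.std(axis=0, ddof=1)
            z = (patient - mu) / np.where(sigma > 0, sigma, np.inf)
            # e.g. 0: |z| < 1 (normal), 1: 1 <= |z| < 2, 2: |z| >= 2 (deviant)
            return np.digitize(np.abs(z), bins=[1.0, 2.0])

        rng = np.random.default_rng(1)
        codes = zscore_color_codes(rng.normal(size=(10, 8)),
                                   rng.normal(size=(30, 10, 8)))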

  14. Modeling Methods for the Main Switch of High Pulsed-Power Facilities Based on Transmission Line Code

    Science.gov (United States)

    Hu, Yixiang; Zeng, Jiangtao; Sun, Fengju; Wei, Hao; Yin, Jiahui; Cong, Peitian; Qiu, Aici

    2014-09-01

    Based on the transmission line code (TLCODE), a circuit model is developed here for analyses of main switches in the high pulsed-power facilities. With the structure of the ZR main switch as an example, a circuit model topology of the switch is proposed, and in particular, calculation methods of the dynamic inductance and resistance of the switching arc are described. Moreover, a set of closed equations used for calculations of various node voltages are theoretically derived and numerically discretized. Based on these discrete equations and the Matlab program, a simulation procedure is established for analyses of the ZR main switch. Voltages and currents at different key points are obtained, and comparisons are made with those of a PSpice L-C model. The comparison results show that these two models are perfectly in accord with each other with discrepancy less than 0.1%, which verifies the effectiveness of the TLCODE model to a certain extent.

  15. A QR Code Correction Method with Fast Localization

    Institute of Scientific and Technical Information of China (English)

    王雄华; 张昕; 朱同林

    2015-01-01

    Traditional QR code correction algorithms suffer from low correction rates and heavy computation when images are captured under poor lighting or at varying shooting angles. A correction algorithm based on image features is proposed. The barcode image is binarized; during line scanning, a redundant-point elimination step accurately locates the upper-left, upper-right and lower-left vertices; the fourth vertex is then found quickly from interval sampling of black boundary pixels together with a slope-deviation tolerance test; finally, inverse perspective transformation corrects the geometric distortion. The algorithm is robust to lighting interference and achieves a high recognition rate, locating and correcting images taken under a variety of lighting conditions and shooting angles. Experimental results show that the algorithm effectively improves the QR code recognition rate and meets real-time requirements.
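
    The final rectification step can be sketched with OpenCV (not named in the abstract), assuming the four vertices have already been located; the coordinates below are placeholders:

        import cv2
        import numpy as np

        def rectify(image, corners, size=280):
            """Warp a QR code to a square, given its four corners ordered
            upper-left, upper-right, lower-right, lower-left."""
            src = np.float32(corners)
            dst = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
            H = cv2.getPerspectiveTransform(src, dst)   # inverse-perspective map
            return cv2.warpPerspective(image, H, (size, size))

        # corrected = rectify(img, [(12, 30), (250, 22), (262, 240), (18, 255)])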

  16. A New Aspergillus fumigatus Typing Method Based on Hypervariable Tandem Repeats Located within Exons of Surface Protein Coding Genes (TRESP).

    Science.gov (United States)

    Garcia-Rubio, Rocio; Gil, Horacio; Monteiro, Maria Candida; Pelaez, Teresa; Mellado, Emilia

    2016-01-01

    Aspergillus fumigatus is a saprotrophic mold fungus ubiquitously found in the environment and is the most common species causing invasive aspergillosis in immunocompromised individuals. For A. fumigatus genotyping, the short tandem repeat method (STRAf) is widely accepted as the first choice. However, difficulties associated with PCR product size and required technology have encouraged the development of novel typing techniques. In this study, a new genotyping method based on hypervariable tandem repeats within exons of surface protein coding genes (TRESP) was designed. A. fumigatus isolates were characterized by PCR amplification and sequencing with a panel of three TRESP encoding genes: cell surface protein A; MP-2 antigenic galactomannan protein; and hypothetical protein with a CFEM domain. The allele sequence repeats of each of the three targets were combined to assign a specific genotype. For the evaluation of this method, 126 unrelated A. fumigatus strains were analyzed and 96 different genotypes were identified, showing a high level of discrimination [Simpson's index of diversity (D) 0.994]. In addition, 49 azole resistant strains were analyzed identifying 26 genotypes and showing a lower D value (0.890) among them. This value could indicate that these resistant strains are closely related and share a common origin, although more studies are needed to confirm this hypothesis. In summary, a novel genotyping method for A. fumigatus has been developed which is reproducible, easy to perform, highly discriminatory and could be especially useful for studying outbreaks.

  18. Comparison of a Label-Free Quantitative Proteomic Method Based on Peptide Ion Current Area to the Isotope Coded Affinity Tag Method

    Directory of Open Access Journals (Sweden)

    Young Ah Goo

    2008-01-01

    Full Text Available Recently, several research groups have published methods for determining proteomic expression profiles by mass spectrometry without the use of exogenously added stable isotopes or stable isotope dilution theory. These so-called label-free methods have the advantage of allowing data on each sample to be acquired independently from all other samples, to which they can later be compared in silico for the purpose of measuring changes in protein expression between various biological states. We developed label-free software based on direct measurement of peptide ion current area (PICA) and compared it to two other methods, a simpler label-free method known as spectral counting and the isotope coded affinity tag (ICAT) method. Data analysis by these methods of a standard mixture containing proteins of known, but varying, concentrations showed that they performed similarly, with a mean squared error of 0.09. Additionally, complex bacterial protein mixtures spiked with known concentrations of standard proteins were analyzed using the PICA label-free method. The results indicated that the PICA method detected all levels of spiked standard proteins at the 90% confidence level in this complex biological sample. This finding confirms that label-free methods based on direct measurement of the area under a single ion current trace perform as well as the standard ICAT method. Given that label-free methods allow experimental designs well beyond pair-wise comparison, label-free methods such as our PICA method are well suited to proteomic expression profiling of the large numbers of samples needed in clinical analysis.

  19. A development and integration of database code-system with a compilation of comparator, k0 and absolute methods for INAA using microsoft access

    Science.gov (United States)

    Hoh, Siew Sin; Rapie, Nurul Nadiah; Lim, Edwin Suh Wen; Tan, Chun Yuan; Yavar, Alireza; Sarmani, Sukiman; Majid, Amran Ab.; Khoo, Kok Siong

    2013-05-01

    Instrumental Neutron Activation Analysis (INAA) is often used to determine and calculate the elemental concentrations of a sample at The National University of Malaysia (UKM), typically in the Nuclear Science Programme, Faculty of Science and Technology. The objective of this study was to develop a database code-system based on Microsoft Access 2010 which could help INAA users choose among the comparator method, the k0-method and the absolute method for calculating the elemental concentrations of a sample. This study also integrated k0data, Com-INAA, k0Concent, k0-Westcott and Abs-INAA to execute and complete the ECC-UKM database code-system. After the integration, a study was conducted to test the effectiveness of the ECC-UKM database code-system by comparing the concentrations obtained from the experiments and from the code-systems. 'Triple Bare Monitor' Zr-Au and Cr-Mo-Au were used in the k0Concent, k0-Westcott and Abs-INAA code-systems as monitors to determine the thermal to epithermal neutron flux ratio (f). Calculations involved in determining the concentration were the net peak area (Np), measurement time (tm), irradiation time (tirr), k-factor (k), thermal to epithermal neutron flux ratio (f), epithermal neutron flux distribution parameter (α) and detection efficiency (ɛp). For the Com-INAA code-system, the certified reference material IAEA-375 Soil was used to calculate the concentrations of elements in a sample; other CRMs and SRMs were also used in this database code-system. Later, a verification process examined the effectiveness of the Abs-INAA code-system by comparing sample concentrations between the code-system and the experiment. The concentration values obtained from the ECC-UKM database code-system showed good accuracy.

  20. Study on Methods for Improving the Compression Ratio of the 4-Direction Freeman Chain Code

    Institute of Scientific and Technical Information of China (English)

    李灵华; 刘勇奎

    2013-01-01

    Building on existing methods based on the Freeman direction chain code, methods for improving the compression ratio of the 4-direction Freeman chain code are studied in depth through extensive experiments. Approaches examined include redefining the meaning of the code values and Huffman-coding them, and further applying arithmetic coding to the most frequent code values; these are compared and analyzed over a large number of experiments. A new method based on the 4-direction Freeman chain code is proposed: the arithmetic-encoding variable-length relative 4-direction Freeman chain code (AVRF4). Experimental results show that its compression ratio is 26% better than the 8-direction Freeman chain code and 15% better than the original 4-direction Freeman chain code.

  1. The impact of conventional dietary intake data coding methods on foods typically consumed by low-income African-American and White urban populations.

    Science.gov (United States)

    Mason, Marc A; Fanelli Kuczmarski, Marie; Allegro, Deanne; Zonderman, Alan B; Evans, Michele K

    2015-08-01

    Analysing dietary data to capture how individuals typically consume foods depends on the coding variables used. Individual foods consumed simultaneously, like coffee with milk, are given codes to identify these combinations. Our literature review revealed a lack of discussion about using combination codes in analysis. The present study identified foods consumed at mealtimes and by race when combination codes were or were not utilized. Duplicate analysis methods were performed on separate data sets. The original data set consisted of all foods reported, each coded as if consumed individually. The revised data set was derived from the original by first isolating coded foods consumed as individual items from foods consumed simultaneously and assigning a code to designate a combination. Foods assigned a combination code, like pancakes with syrup, were aggregated and associated with a food group defined by the major food component (i.e. pancakes), and then appended to the isolated coded foods. The study population was drawn from the Healthy Aging in Neighborhoods of Diversity across the Life Span study: African-American and White adults with two dietary recalls (n = 2177). Differences existed in the lists of foods most frequently consumed by mealtime and race when comparing results based on the original and revised data sets. African Americans reported consumption of sausage/luncheon meat and poultry, while ready-to-eat cereals and cakes/doughnuts/pastries were reported by Whites. Use of combination codes provided a more accurate representation of how foods were consumed by the populations. This information is beneficial when creating interventions and exploring diet-health relationships.

  2. Comparison of dose estimates using the buildup-factor method and a Baryon transport code (BRYNTRN) with Monte Carlo results

    Science.gov (United States)

    Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.

    1990-01-01

    Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various thicknesses of shields and various types of proton spectra. The results are found to be in reasonable agreement but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. Future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual components of doses, such as those from the secondaries and heavy particle recoils, are obtained between BRYNTRN and Monte Carlo results.

  3. An improved method for identification of small non-coding RNAs in bacteria using support vector machine

    Science.gov (United States)

    Barman, Ranjan Kumar; Mukhopadhyay, Anirban; Das, Santasabuj

    2017-04-01

    Bacterial small non-coding RNAs (sRNAs) are not translated into proteins but act as functional RNAs. They are involved in diverse biological processes like virulence, stress response and quorum sensing. Several high-throughput techniques have enabled identification of sRNAs in bacteria, but experimental detection remains a challenge and is grossly incomplete for most species. Thus, there is a need to develop computational tools to predict bacterial sRNAs. Here, we propose a computational method to identify sRNAs in bacteria using a support vector machine (SVM) classifier. The primary sequence and secondary structure features of experimentally validated sRNAs of Salmonella Typhimurium LT2 (SLT2) were used to build the optimal SVM model. We found that a tri-nucleotide composition feature of sRNAs achieved an accuracy of 88.35% for SLT2. We also validated the SVM model on the experimentally detected sRNAs of E. coli and Salmonella Typhi; the model robustly attained accuracies of 81.25% and 88.82% for E. coli K-12 and S. Typhi Ty2, respectively. We confirmed that this method significantly improves the identification of sRNAs in bacteria. Furthermore, we used a sliding-window-based method and identified sRNAs from the complete genomes of SLT2, S. Typhi Ty2 and E. coli K-12 with sensitivities of 89.09%, 83.33% and 67.39%, respectively.
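
    A minimal sketch of the feature-plus-classifier setup with scikit-learn, assuming labeled RNA sequences (the sequences and parameters below are placeholders, not the paper's data):

        from itertools import product
        from sklearn.svm import SVC

        TRIMERS = ["".join(p) for p in product("ACGU", repeat=3)]

        def trinucleotide_composition(seq):
            """64-dimensional normalized vector of overlapping trinucleotide counts."""
            counts = {t: 0 for t in TRIMERS}
            for i in range(len(seq) - 2):
                tri = seq[i:i + 3]
                if tri in counts:
                    counts[tri] += 1
            total = max(sum(counts.values()), 1)
            return [counts[t] / total for t in TRIMERS]

        seqs = ["AUGCUAGCUAGGAUCC", "GGGCGCGCAUUAGCA", "AUAUAUAUGCGC", "CCGGAAUUCCGG"]
        labels = [1, 1, 0, 0]                     # 1 = sRNA, 0 = negative example
        X = [trinucleotide_composition(s) for s in seqs]
        clf = SVC(kernel="rbf").fit(X, labels)
        print(clf.predict([trinucleotide_composition("AUGCUAGGAUCCUAG")]))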

  4. Novel methods for the molecular discrimination of Fasciola spp. on the basis of nuclear protein-coding genes.

    Science.gov (United States)

    Shoriki, Takuya; Ichikawa-Seki, Madoka; Suganuma, Keisuke; Naito, Ikunori; Hayashi, Kei; Nakao, Minoru; Aita, Junya; Mohanta, Uday Kumar; Inoue, Noboru; Murakami, Kenji; Itagaki, Tadashi

    2016-06-01

    Fasciolosis is an economically important disease of livestock caused by Fasciola hepatica, Fasciola gigantica, and aspermic Fasciola flukes. The aspermic Fasciola flukes have been discriminated morphologically from the two other species by the absence of sperm in their seminal vesicles. To date, the molecular discrimination of F. hepatica and F. gigantica has relied on the nucleotide sequences of the internal transcribed spacer 1 (ITS1) region. However, ITS1 genotypes of aspermic Fasciola flukes cannot be clearly differentiated from those of F. hepatica and F. gigantica. Therefore, more precise and robust methods are required to discriminate Fasciola spp. In this study, we developed PCR restriction fragment length polymorphism and multiplex PCR methods to discriminate F. hepatica, F. gigantica, and aspermic Fasciola flukes on the basis of the nuclear protein-coding genes, phosphoenolpyruvate carboxykinase and DNA polymerase delta, which are single locus genes in most eukaryotes. All aspermic Fasciola flukes used in this study had mixed fragment pattern of F. hepatica and F. gigantica for both of these genes, suggesting that the flukes are descended through hybridization between the two species. These molecular methods will facilitate the identification of F. hepatica, F. gigantica, and aspermic Fasciola flukes, and will also prove useful in etiological studies of fasciolosis. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. Application of the Huffman Tree Construction Principle and its Mathematical Proof

    Institute of Scientific and Technical Information of China (English)

    江忠

    2016-01-01

    The Huffman tree, also known as the optimal binary tree, is the binary tree with the minimum weighted path length for a given set of weights. The weighted path length of a tree is the sum, over all leaf nodes, of each leaf's weight multiplied by its path length to the root (if the root is at level 0, a leaf's path length equals its level). For a binary tree built from n weights Wi (i = 1, 2, ..., n), with n corresponding leaves whose path lengths are Li (i = 1, 2, ..., n), the weighted path length is WPL = W1*L1 + W2*L2 + W3*L3 + ... + Wn*Ln. It can be proved that the Huffman tree attains the smallest WPL.
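
    The quantity in question is easy to compute once leaf depths are known; a minimal sketch with illustrative weights (the first call uses depths a Huffman tree produces, the second a fixed-length 3-bit assignment, whose WPL is larger):

        def wpl(weights, depths):
            """Weighted path length: sum of leaf weight times leaf depth."""
            return sum(w * l for w, l in zip(weights, depths))

        print(wpl([5, 9, 12, 13, 16, 45], [4, 4, 3, 3, 3, 1]))   # Huffman: 224
        print(wpl([5, 9, 12, 13, 16, 45], [3, 3, 3, 3, 3, 3]))   # fixed-length: 300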

  6. A new method for evaluating compliance with industry self-regulation codes governing the content of alcohol advertising.

    Science.gov (United States)

    Babor, Thomas F; Xuan, Ziming; Damon, Donna

    2013-10-01

    This study evaluated the use of a modified Delphi technique in combination with a previously developed alcohol advertising rating procedure to detect content violations of the U.S. Beer Institute Code. A related aim was to estimate the minimum number of raters needed to obtain reliable evaluations of code violations in television commercials. Six alcohol ads selected for their likelihood of having code violations were rated by community and expert participants (N = 286). Quantitative rating scales were used to measure the content of alcohol advertisements based on alcohol industry self-regulatory guidelines. The community group participants represented vulnerability characteristics that industry codes were designed to protect (e.g., age). The Delphi technique facilitates consensus development around code violations in alcohol ad content and may enhance the ability of regulatory agencies to monitor the content of alcoholic beverage advertising when combined with psychometric-based rating procedures. Copyright © 2013 by the Research Society on Alcoholism.

  7. A Bipartite Network-based Method for Prediction of Long Non-coding RNA–protein Interactions

    Directory of Open Access Journals (Sweden)

    Mengqu Ge

    2016-02-01

    Full Text Available As one large class of non-coding RNAs (ncRNAs), long ncRNAs (lncRNAs) have gained considerable attention in recent years. Mutations and dysfunction of lncRNAs have been implicated in human disorders. Many lncRNAs exert their effects through interactions with the corresponding RNA-binding proteins. Several computational approaches have been developed, but only few are able to perform the prediction of these interactions from a network-based point of view. Here, we introduce a computational method named lncRNA–protein bipartite network inference (LPBNI). LPBNI aims to identify potential lncRNA–interacting proteins, by making full use of the known lncRNA–protein interactions. Leave-one-out cross validation (LOOCV) test shows that LPBNI significantly outperforms other network-based methods, including random walk (RWR) and protein-based collaborative filtering (ProCF). Furthermore, a case study was performed to demonstrate the performance of LPBNI using real data in predicting potential lncRNA–interacting proteins.

  9. A QR Code Recognition Method Based on Correlation Matching

    Institute of Scientific and Technical Information of China (English)

    熊用; 汪鲁才; 艾琼龙

    2011-01-01

    QR code recognition is the key technology in QR code applications. Hough transformation, surface-fitting background removal and control-point transformation are the basic image-preprocessing methods in QR code recognition. An improved QR code recognition method based on correlation matching is proposed, addressing the low recognition rate that remains after image preprocessing. An improved adaptive threshold method based on surface fitting segments the QR code image; Hough transformation and control-point transformation correct the geometric distortion of the image; a template is then correlated with the QR code image, and thresholding the resulting coherence-coefficient image yields the sampling grid. Simulation results show that the proposed method greatly improves QR code recognition efficiency and effectiveness.

  10. Nested Quantum Error Correction Codes

    CERN Document Server

    Wang, Zhuo; Fan, Hen; Vedral, Vlatko

    2009-01-01

    The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, limited methods are available for constructing new quantum error correction codes from old ones. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching for several short quantum codes with certain properties. Our method works for codes of all lengths and distances, and is quite efficient for constructing optimal or near-optimal codes. The two main known methods for constructing new codes from old in quantum error-correction theory, concatenating and pasting, can be understood in the framework of nested quantum error correction codes.

  11. Dynamic Server-Based KML Code Generator Method for Level-of-Detail Traversal of Geospatial Data

    Science.gov (United States)

    Baxes, Gregory; Mixon, Brian; Linger, TIm

    2013-01-01

    Web-based geospatial client applications such as Google Earth and NASA World Wind must listen to data requests, access appropriate stored data, and compile a data response to the requesting client application. This process occurs repeatedly to support multiple client requests and application instances. Newer Web-based geospatial clients also provide user-interactive functionality that is dependent on fast and efficient server responses. With massively large datasets, server-client interaction can become severely impeded because the server must determine the best way to assemble data to meet the client application's request. In client applications such as Google Earth, the user interactively wanders through the data using visually guided panning and zooming actions. With these actions, the client application continually issues data requests to the server without knowledge of the server's data structure or extraction/assembly paradigm. A method for efficiently controlling the networked access of a Web-based geospatial browser to server-based datasets, in particular massively sized datasets, has been developed. The method specifically uses the Keyhole Markup Language (KML), an Open Geospatial Consortium (OGC) standard used by Google Earth and other KML-compliant geospatial client applications. The innovation is based on establishing a dynamic cascading KML strategy that is initiated by a KML launch file provided by a data server host to a Google Earth or similar KML-compliant geospatial client application user. Upon execution, the launch KML code issues a request for image data covering an initial geographic region. The server responds with the requested data along with subsequent dynamically generated KML code that directs the client application to make follow-on requests for higher level-of-detail (LOD) imagery to replace the initial imagery as the user navigates into the dataset. The approach provides an efficient data traversal path and mechanism that can be

  12. Development of an aeroelastic code based on three-dimensional viscous–inviscid method for wind turbine computations

    DEFF Research Database (Denmark)

    Sessarego, Matias; Ramos García, Néstor; Sørensen, Jens Nørkær

    2017-01-01

    Aerodynamic and structural-dynamic performance analyses of modern wind turbines are routinely carried out in the wind energy field using computational tools known as aeroelastic codes. Most aeroelastic codes use the blade element momentum (BEM) technique to model the rotor aerodynamics and a modal

  13. Web-MCQ: a set of methods and freely available open source code for administering online multiple choice question assessments.

    Science.gov (United States)

    Hewson, Claire

    2007-08-01

    E-learning approaches have received increasing attention in recent years. Accordingly, a number of tools have become available to assist the nonexpert computer user in constructing and managing virtual learning environments, and implementing computer-based and/or online procedures to support pedagogy. Both commercial and free packages are now available, with new developments emerging periodically. Commercial products have the advantage of being comprehensive and reliable, but tend to require substantial financial investment and are not always transparent to use. They may also restrict pedagogical choices due to their predetermined ranges of functionality. With these issues in mind, several authors have argued for the pedagogical benefits of developing freely available, open source e-learning resources, which can be shared and further developed within a community of educational practitioners. The present paper supports this objective by presenting a set of methods, along with supporting freely available, downloadable, open source programming code, to allow administration of online multiple choice question assessments to students.

  14. Ultraspectral sounder data compression using error-detecting reversible variable-length coding

    Science.gov (United States)

    Huang, Bormin; Ahuja, Alok; Huang, Hung-Lung; Schmit, Timothy J.; Heymann, Roger W.

    2005-08-01

    Nonreversible variable-length codes (e.g. Huffman coding, Golomb-Rice coding, and arithmetic coding) have been used in source coding to achieve efficient compression. However, a single bit error during noisy transmission can cause many codewords to be misinterpreted by the decoder. In recent years, increasing attention has been given to the design of reversible variable-length codes (RVLCs) for better data transmission in error-prone environments. RVLCs allow instantaneous decoding in both directions, which affords better detection of bit errors due to synchronization losses over a noisy channel. RVLCs have been adopted in emerging video coding standards--H.263+ and MPEG-4--to enhance their error-resilience capabilities. Given the large volume of three-dimensional data that will be generated by future space-borne ultraspectral sounders (e.g. IASI, CrIS, and HES), the use of error-robust data compression techniques will be beneficial to satellite data transmission. In this paper, we investigate a reversible variable-length code for ultraspectral sounder data compression, and present numerical experiments on error propagation for the ultraspectral sounder data. The results show that the RVLC provides significantly better error containment than JPEG2000 Part 2.
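
    An RVLC must be instantaneously decodable from either end, i.e., both prefix-free and suffix-free. A minimal sketch of that check (the codebook below is a toy example, not one from the cited standards):

        def is_reversible_vlc(codewords):
            """True if the set is prefix-free and suffix-free, so decoding can
            proceed instantaneously in both directions."""
            def fix_free(words):
                return not any(a != b and b.startswith(a)
                               for a in words for b in words)
            return fix_free(codewords) and fix_free([w[::-1] for w in codewords])

        print(is_reversible_vlc(["00", "11", "010", "101"]))   # True
        print(is_reversible_vlc(["0", "01", "11"]))            # False: "0" prefixes "01"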

  15. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. The problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies this hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.

  16. Tripoli-3: monte Carlo transport code for neutral particles - version 3.5 - users manual; Tripoli-3: code de transport des particules neutres par la methode de monte carlo - version 3.5 - manuel d'utilisation

    Energy Technology Data Exchange (ETDEWEB)

    Vergnaud, Th.; Nimal, J.C.; Chiron, M

    2001-07-01

    The TRIPOLI-3 code applies the Monte Carlo method to neutron, gamma-ray and coupled neutron/gamma-ray transport calculations in three-dimensional geometries, either in steady-state conditions or with a time dependence. It can be used to study problems with high flux attenuation between the source zone and the result zone (shielding configurations or source-driven sub-critical systems, with fission taken into account), as well as problems with low flux attenuation (neutronics calculations -- in a fuel lattice cell, for example -- where fission is taken into account, usually with calculation of the effective multiplication factor, fine-structure studies, numerical experiments to investigate method approximations, etc.). TRIPOLI-3 has been operational since 1995 and is the version of the TRIPOLI code that follows on from TRIPOLI-2; it can be used on SUN, RISC600 and HP workstations and on PCs under the Linux or Windows/NT operating systems. The code uses nuclear data libraries generated with the THEMIS/NJOY system; the current libraries were derived from ENDF/B6 and JEF2. There is also a response function library based on a number of evaluations, notably the dosimetry libraries IRDF/85 and IRDF/90, as well as evaluations from JEF2. The treatment of particle transport is the same in version 3.5 as in version 3.4 of the TRIPOLI code, but version 3.5 is more convenient for preparing the input data and reading the output. A French version of the user's manual exists. (authors)

  17. Defeating the coding monsters.

    Science.gov (United States)

    Colt, Ross

    2007-02-01

    Accuracy in coding is rapidly becoming a required skill for military health care providers. Clinic staffing, equipment purchase decisions, and even reimbursement will soon be based on the coding data that we provide. Learning the complicated myriad of rules to code accurately can seem overwhelming. However, the majority of clinic visits in a typical outpatient clinic generally fall into two major evaluation and management codes, 99213 and 99214. If health care providers can learn the rules required to code a 99214 visit, then this will provide a 90% solution that can enable them to accurately code the majority of their clinic visits. This article demonstrates a step-by-step method to code a 99214 visit, by viewing each of the three requirements as a monster to be defeated.

  18. JND measurements and wavelet-based image coding

    Science.gov (United States)

    Shen, Day-Fann; Yan, Loon-Shan

    1998-06-01

    Two major issues in image coding are the effective incorporation of human visual system (HVS) properties and an effective objective quality measure (OQM) for evaluating image quality. In this paper, we treat the two issues in an integrated fashion. We build a JND model based on measurements of the just-noticeable-difference (JND) property of the HVS. We found that JND depends not only on the background intensity but is also a function of both spatial frequency and pattern direction. The wavelet transform, with its excellent simultaneous space/frequency resolution, is the natural setting in which to apply the JND model. We mathematically derive an OQM called JND_PSNR that is based on the JND property and wavelet-decomposed subbands. JND_PSNR is more consistent with human perception and is recommended as an alternative to the PSNR or SNR. With JND_PSNR in mind, we proceed to propose a wavelet- and JND-based codec called JZW. JZW quantizes coefficients in each subband with a step size appropriate to the subband's importance to human perception. Many characteristics of JZW are discussed, its performance is evaluated and compared with other well-known algorithms such as EZW, SPIHT and TCCVQ. Our algorithm has a 1-1.5 dB gain over SPIHT even when we use simple Huffman coding rather than the more efficient adaptive arithmetic coding.

  19. MAP decoding of variable length codes over noisy channels

    Science.gov (United States)

    Yao, Lei; Cao, Lei; Chen, Chang Wen

    2005-10-01

    In this paper, we discuss maximum a posteriori probability (MAP) decoding of variable length codes (VLCs) and propose a novel decoding scheme for Huffman VLC coded data in the presence of noise. First, we provide simulation results of VLC MAP decoding and highlight some features that have not yet been discussed in existing work. We show that the improvement of MAP decoding over conventional VLC decoding comes mostly from the memory information in the source, and give some observations regarding the advantage of soft VLC MAP decoding over hard VLC MAP decoding when an AWGN channel is considered. Second, recognizing that the difficulty in VLC MAP decoding is the lack of synchronization between the symbol sequence and the coded bit sequence, which makes parsing from the latter to the former extremely complex, we propose a new MAP decoding algorithm that integrates the information of self-synchronization strings (SSSs), an important feature of the codeword structure, into conventional MAP decoding. A consistent performance improvement and decoding complexity reduction over conventional VLC MAP decoding can be achieved with the new scheme.

  20. Computationally efficient sub-band coding of ECG signals.

    Science.gov (United States)

    Husøy, J H; Gjerde, T

    1996-03-01

    A data compression technique is presented for discrete-time electrocardiogram (ECG) signals. The compression system is based on sub-band coding, a technique traditionally used for compressing speech and images. The sub-band coder employs quadrature mirror filter (QMF) banks with up to 32 critically sampled sub-bands. Both finite impulse response (FIR) and the more computationally efficient infinite impulse response (IIR) filter banks are considered as candidates in a complete ECG coding system. The sub-bands are thresholded, quantized using uniform quantizers and run-length coded. The output of the run-length coder is further compressed by a Huffman coder. Extensive simulations indicate that 16 sub-bands are a suitable choice for this application. Furthermore, IIR filter banks are preferable due to their superior computational efficiency. We conclude that the present scheme, which is suitable for real-time implementation on a PC, can provide compression ratios between 5 and 15 without loss of clinical information.
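
    The per-sub-band stage described above (thresholding, uniform quantization, run-length coding) can be sketched as follows; the Huffman back end is omitted and the parameter values are illustrative:

        import numpy as np

        def encode_subband(x, threshold, step):
            """Threshold small coefficients, quantize uniformly, run-length code."""
            q = np.where(np.abs(x) < threshold, 0, np.round(x / step)).astype(int)
            runs = []                              # (value, run length) pairs
            for v in q:
                if runs and runs[-1][0] == v:
                    runs[-1][1] += 1
                else:
                    runs.append([v, 1])
            return runs                            # symbols for a Huffman coder

        print(encode_subband(np.array([0.01, 0.02, 1.3, 1.28, -0.9, 0.0]), 0.1, 0.25))
        # -> [[0, 2], [5, 2], [-4, 1], [0, 1]]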

  1. A Single Loop Vectorization Method Based on Assembly Code

    Institute of Scientific and Technical Information of China (English)

    陆洪毅; 戴葵; 王志英

    2003-01-01

    Through loop vectorization on the instruction sequence, the vector capability provided by hardware can be fully utilized. This paper analyzes the RISC instruction set and presents a single-loop vectorization method based on assembly code; it can efficiently detect single loops in an instruction sequence and vectorize them.

  2. Adapting the coping in deliberation (CODE) framework: A multi-method approach in the context of familial ovarian cancer risk management

    NARCIS (Netherlands)

    Witt, J.; Elwyn, G.; Wood, F.; Rogers, M.T.; Menon, U.; Brain, K.

    2014-01-01

    OBJECTIVE: To test whether the coping in deliberation (CODE) framework can be adapted to a specific preference-sensitive medical decision: risk-reducing bilateral salpingo-oophorectomy (RRSO) in women at increased risk of ovarian cancer. METHODS: We performed a systematic literature search to

  3. Soft and Joint Source-Channel Decoding of Quasi-Arithmetic Codes

    Science.gov (United States)

    Guionnet, Thomas; Guillemot, Christine

    2004-12-01

    The issue of robust and joint source-channel decoding of quasi-arithmetic codes is addressed. Quasi-arithmetic coding is a reduced-precision and reduced-complexity implementation of arithmetic coding that amounts to approximating the distribution of the source. The approximation of the source distribution introduces redundancy that can be exploited for robust decoding in the presence of transmission errors. Hence, this approximation controls both the trade-off between compression efficiency and complexity and, at the same time, the redundancy (excess rate) introduced by this suboptimality. This paper first provides a state model of a quasi-arithmetic coder and decoder for binary and M-ary sources. The design of an error-resilient soft decoding algorithm follows quite naturally. The compression efficiency of quasi-arithmetic codes makes it possible to add extra redundancy in the form of markers designed specifically to prevent desynchronization. The algorithm is directly amenable to iterative source-channel decoding in the spirit of serial turbo codes. The coding and decoding algorithms have been tested for a wide range of channel signal-to-noise ratios (SNRs). Experimental results reveal improved symbol error rate (SER) and SNR performance against Huffman and optimal arithmetic codes.

  4. Reusing the legacy code based on the method of LC-WS

    Institute of Scientific and Technical Information of China (English)

    赵媛; 周立军; 宦婧

    2016-01-01

    A large amount of legacy code exists in old systems. This paper presents LC-WS, a method for wrapping, deploying, and reusing legacy code by publishing it as Web Services that callers can invoke. With LC-WS, large amounts of legacy code can be reused in an information integration platform at low cost, which both shortens the development cycle and lowers development risk. Its practical application in an existing information integration platform demonstrates that the method is feasible.

  5. Development and Validation of a Three-Dimensional Diffusion Code Based on a High Order Nodal Expansion Method for Hexagonal-z Geometry

    Directory of Open Access Journals (Sweden)

    Daogang Lu

    2016-01-01

    Full Text Available A three-dimensional, multigroup diffusion code based on a high-order nodal expansion method for hexagonal-z geometry (HNHEX) was developed to perform neutronic analysis of hexagonal-z geometry. In this method, the one-dimensional radial and axial spatial fluxes of each node and energy group are expanded as quadratic and fourth-order polynomials, respectively; both approximations have second-order accuracy. Moment weighting is used to obtain the high-order expansion coefficients of the polynomials for the one-dimensional radial and axial spatial fluxes. The partially integrated radial and axial leakages are both approximated by quadratic polynomials. The coarse-mesh rebalance method with asymptotic source extrapolation is applied to accelerate the calculation. The code is used to calculate the effective multiplication factor, neutron flux distribution, and power distribution. Numerical calculations for the three-dimensional SNR and VVER-440 benchmark problems demonstrate the accuracy of the code. In addition, the results show that the accuracy of the code is improved by applying a quadratic approximation for the partially integrated axial leakage and a fourth-order approximation for the one-dimensional axial spatial flux, in comparison to a flat approximation for the partially integrated axial leakage and a quadratic approximation for the one-dimensional axial spatial flux.

  6. Life With and Without Coding: Two Methods for Early-Stage Data Analysis in Qualitative Research Aiming at Causal Explanations

    NARCIS (Netherlands)

    Gläser, Jochen; Laudel, Grit

    2013-01-01

    Qualitative research aimed at "mechanismic" explanations poses specific challenges to qualitative data analysis because it must integrate existing theory with patterns identified in the data. We explore the utilization of two methods—coding and qualitative content analysis—for the first steps in the

  8. Holographic codes

    CERN Document Server

    Latorre, Jose I

    2015-01-01

    There exists a remarkable four-qutrit state that is absolutely maximally entangled in all its partitions. Employing this state, we construct a tensor network that delivers a holographic many-body state, the H-code, where the physical properties of the boundary determine those of the bulk. This H-code is made of an even superposition of states whose relative Hamming distances are exponentially large with the size of the boundary. This property makes H-codes natural states for a quantum memory. H-codes exist on tori of definite sizes and are classified into three different sectors characterized by the sum of their qutrits on cycles wrapped through the boundaries of the system. We construct a parent Hamiltonian for the H-code, which is highly nonlocal, and finally we compute the topological entanglement entropy of the H-code.

  9. Research on a Channel Coding Simulation Method Based on a UAV Data Link

    Institute of Scientific and Technical Information of China (English)

    郭淑霞; 刘冰; 高颖; 黄国栋

    2011-01-01

    Channel coding is an important approach to improving communication reliability. To satisfy the reliability requirements of an unmanned aerial vehicle (UAV) data link when transmitting telemetry and remote-control commands, this paper studies a channel coding simulation method for the UAV data link. Through key technologies including channel-coded bit-stream generation, transmission channel model loading, real-time driving of microwave instruments, and multithreaded programming with thread synchronization, the method simulates the encoding and decoding of convolutional codes, Turbo codes and LDPC codes at different code rates in a microwave anechoic chamber. Simulation results show that the method validates the channel encoding and decoding scheme of the UAV data link and brings the data link system's bit error rate below 10^-5, meeting the high-reliability transmission requirements of the UAV data link.

  10. Sharing code

    OpenAIRE

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  11. Review of Josephson Waveform Synthesis and Possibility of New Operation Method by Multibit Delta-Sigma Modulation and Thermometer Code for Its Further Advancement

    Science.gov (United States)

    Kaneko, Nobu-hisa; Maruyama, Michitaka; Urano, Chiharu; Kiryu, Shogo

    2012-01-01

    A method of AC waveform synthesis with quantum-mechanical accuracy has been developed on the basis of the Josephson effect at national metrology institutes, not only for its scientific interest but also for its potential benefit to industry. In this paper, we review the development of Josephson arbitrary waveform synthesizers based on the two types of Josephson junction array and their distinctive driving methods. We also discuss a new operation technique with multibit delta-sigma modulation and a thermometer code, which may enable the generation of glitch-free waveforms with high voltage levels. A Josephson junction array for this method has equally weighted branches that are operated by thermometer-coded bias current sources with multibit delta-sigma conversion.
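
    For illustration, a minimal Python sketch of the digital side of such a scheme, assuming a first-order error-feedback delta-sigma modulator whose multibit output codes are mapped to thermometer-coded branch enables. The modulator order and scaling are assumptions, not the authors' design.

        def delta_sigma_multibit(samples, levels):
            # first-order error-feedback delta-sigma with a multibit quantizer;
            # samples are normalized to [0, 1], output codes lie in 0..levels
            acc, codes = 0.0, []
            for x in samples:
                acc += x * levels
                q = max(0, min(levels, round(acc)))
                acc -= q
                codes.append(q)
            return codes

        def thermometer(code, width):
            # 'code' equally weighted branches switched on, e.g. 3 of 7 -> 1110000
            return [1] * code + [0] * (width - code)

        # usage: enables = [thermometer(q, 15) for q in delta_sigma_multibit(wave, 15)]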

  12. Development of Galerkin Finite Element Method Three-dimensional Computational Code for the Multigroup Neutron Diffusion Equation with Unstructured Tetrahedron Elements

    Directory of Open Access Journals (Sweden)

    Seyed Abolfazl Hosseini

    2016-02-01

    Full Text Available In the present paper, the development of a three-dimensional (3D) computational code based on the Galerkin finite element method (GFEM) for solving the multigroup forward/adjoint diffusion equation in both rectangular and hexagonal geometries is reported. Linear approximation of shape functions in the GFEM with unstructured tetrahedron elements is used in the calculation. Both criticality and fixed-source calculations may be performed using the developed GFEM-3D computational code. An acceptable level of accuracy at a low computational cost is the main advantage of applying the unstructured tetrahedron elements. The unstructured tetrahedron elements generated with Gambit software are used in the GFEM-3D computational code through a developed interface. The forward/adjoint multiplication factor, forward/adjoint flux distribution, and power distribution in the reactor core are calculated using the power iteration method. Criticality calculations are benchmarked against the valid solution of the neutron diffusion equation for the International Atomic Energy Agency (IAEA) 3D and Water-Water Energetic Reactor (VVER) 1000 reactor cores. In addition, validation of the calculations against the P1 approximation of transport theory is investigated in relation to the liquid metal fast breeder reactor benchmark problem. The neutron fixed-source calculations are benchmarked through a comparison with the results obtained from similar computational codes. Finally, an analysis of the sensitivity of the calculations to the number of elements is performed.

  13. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    , Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms’ interfaces. These are important...

  14. Lossless compression of medical images using Hilbert scan

    Science.gov (United States)

    Sun, Ziguang; Li, Chungui; Liu, Hao; Zhang, Zengfang

    2007-12-01

    The effectiveness of the Hilbert scan in lossless medical image compression is discussed. In our method, after coding of intensities, the pixels in a medical image are decorrelated with differential pulse code modulation (DPCM); the error image is then rearranged using a Hilbert scan, and finally we apply five coding schemes: Huffman coding, RLE, LZW coding, arithmetic coding, and RLE followed by Huffman coding. The experiments show that DPCM followed by a Hilbert scan and compression with the arithmetic coding scheme gives the best compression result, and also indicate that the Hilbert scan can enhance pixel locality and increase the compression ratio effectively.
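
    A minimal Python sketch of the reordering step, using the standard index-to-coordinate mapping for a Hilbert curve. This illustrates the idea rather than the authors' code, and it assumes a square image whose side is a power of two.

        import numpy as np

        def hilbert_d2xy(n, d):
            # map index d along an n x n Hilbert curve (n a power of two) to (x, y)
            x = y = 0
            s, t = 1, d
            while s < n:
                rx = 1 & (t // 2)
                ry = 1 & (t ^ rx)
                if ry == 0:                 # rotate the quadrant
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                x += s * rx
                y += s * ry
                t //= 4
                s *= 2
            return x, y

        def dpcm_hilbert(image):
            # first-order DPCM residuals taken along the Hilbert scan order
            n = image.shape[0]
            scan = np.array([image[y, x] for x, y in
                             (hilbert_d2xy(n, d) for d in range(n * n))], dtype=int)
            return np.diff(scan, prepend=0)   # entropy-code these (Huffman, RLE, ...)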

  15. Huffman Coding Used in Compression of the 1-bit Code Stream of a Beamformer Based on a Sigma-delta ADC

    Institute of Scientific and Technical Information of China (English)

    韩雪梅; 彭虎; 杜宏伟; 陈强; 冯焕清

    2005-01-01

    A beamformer based on an oversampling Sigma-delta ADC performs high-quality focusing delay-and-sum directly on the phase information carried in the 1-bit stream produced by the ADC. However, the rate of this 1-bit stream is extremely high, so in general it cannot be sent over a USB interface to a computer for beamforming and subsequent processing; it must first be losslessly compressed, i.e., the stream rate must be reduced while the phase information needed for beamforming is preserved. Huffman coding is adopted to compress the high-speed 1-bit stream. The results show that Huffman coding reduces the data volume by more than half, making transmission of the 1-bit stream over a USB interface feasible.

  16. Application of one-dimensional modified Huffman code in meteorological facsimile chart coding

    Institute of Scientific and Technical Information of China (English)

    刘惠敏; 刘繁明; 张琳琳

    2008-01-01

    Meteorological facsimile charts carry a very large amount of information. Compressing them not only allows more images to be stored in a limited space but also effectively reduces transmission time, which helps ships at sea obtain meteorological information promptly and lower weather-related risk. Here, a one-dimensional modified Huffman code is used to compress meteorological facsimile charts, and a table-lookup method is used to decompress them. Experiments show that the method meets the compression-ratio and compression-speed requirements of meteorological facsimile charts and is feasible.
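
    The one-dimensional modified Huffman scheme (as in Group 3 fax) run-length codes alternating white/black runs per scan line, then codes each run with a terminating code (lengths 0-63) plus makeup codes for multiples of 64. A small Python sketch of the run extraction and run splitting, with the actual code tables omitted; names are illustrative.

        from itertools import groupby

        def runs_per_line(line):
            # alternating run lengths for one bilevel scan line; the fax
            # convention is that a line starts with a (possibly empty) white run
            runs = [(px, len(list(g))) for px, g in groupby(line)]
            if runs and runs[0][0] != 0:    # 0 = white, 1 = black
                runs.insert(0, (0, 0))      # leading zero-length white run
            return runs

        def split_run(length):
            # split a run into (makeup, terminating) parts as 1-D MH expects:
            # makeup codes stand for multiples of 64, terminating codes for 0..63
            return (length // 64) * 64, length % 64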

  17. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  18. Polar Codes

    Science.gov (United States)

    2014-12-01

    Polar codes were introduced by E. Arikan in [1]. This report describes the results of the project “More reliable wireless…”, including discussion of forward error correction (FEC), the capacity of the BSC, the capacity of the AWGN channel, and QPSK Gaussian channels.

  19. Transplantation Method of QR Code Decoding Program Based on Embedded Platforms

    Institute of Scientific and Technical Information of China (English)

    杨柏松; 高美凤

    2016-01-01

    A transplantation method for a QR code decoding program on an embedded platform is presented. The UP-NETARM2410-S is selected as the hardware development platform. First, the system hardware composition is given. Then, the development process of the QR code decoding program using Qt-Creator is introduced in detail. In the test phase, the qvfb virtual screen is used to simulate the running state of QR code decoding. Finally, the program is transplanted to the real embedded platform. Test results show that the decoding program runs normally on the embedded platform and correctly decodes QR code information. The proposed transplantation method has reference value for porting QR code decoders to different platforms.

  20. Authorship Attribution of Source Code

    Science.gov (United States)

    Tennyson, Matthew F.

    2013-01-01

    Authorship attribution of source code is the task of deciding who wrote a program, given its source code. Applications include software forensics, plagiarism detection, and determining software ownership. A number of methods for the authorship attribution of source code have been presented in the past. A review of those existing methods is…

  1. N-Square Approach for the Erection of Redundancy Codes

    Directory of Open Access Journals (Sweden)

    G. Srinivas,

    2010-04-01

    Full Text Available This paper addresses data compression, an application of image processing. Several lossy and lossless coding techniques have been developed throughout the last two decades. Although very high compression can be achieved with lossy compression techniques, they cannot recover the original image exactly, while lossless compression techniques can. In applications related to medical imaging, lossless techniques are required, as the loss of information is unacceptable. The objective of image compression is to represent an image with as few bits as possible while preserving the quality required for the given application. In this paper we introduce a new lossless compression technique that further reduces the entropy, and thereby the average number of bits, using non-binary Huffman coding through an N-square approach. Our extensive experimental results demonstrate that the proposed scheme is very competitive, and it addresses the limitation on the value of D in the existing system by proposing the N-square pattern. The newly proposed algorithm provides a good means for lossless image compression.
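
    Non-binary (D-ary) Huffman coding generalizes the binary merge step: the D lowest-weight nodes are merged at a time, after padding the alphabet with zero-weight dummies so that (n - 1) mod (D - 1) = 0. A minimal Python sketch, not tied to the paper's N-square construction:

        import heapq
        from itertools import count

        def dary_huffman(freqs, D=3):
            # code table for a D-ary (non-binary) Huffman code;
            # freqs: dict symbol -> weight; zero-weight dummies pad the
            # alphabet so every merge can take exactly D nodes
            tie = count()
            heap = [(w, next(tie), [s]) for s, w in freqs.items()]
            while (len(heap) - 1) % (D - 1) != 0:
                heap.append((0, next(tie), []))      # padding dummies
            heapq.heapify(heap)
            codes = {s: "" for s in freqs}
            while len(heap) > 1:
                merged, total = [], 0
                for digit in range(D):
                    w, _, syms = heapq.heappop(heap)
                    total += w
                    for s in syms:
                        codes[s] = str(digit) + codes[s]
                    merged += syms
                heapq.heappush(heap, (total, next(tie), merged))
            return codes

    For example, dary_huffman({'a': 5, 'b': 3, 'c': 2, 'd': 1, 'e': 1}) returns the ternary prefix code {'a': '2', 'b': '0', 'c': '12', 'd': '10', 'e': '11'}.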

  2. Research on Storage Redundancy Reduction Method of Fountain Code in WSN

    Institute of Scientific and Technical Information of China (English)

    袁博; 赵旦峰; 钱晋希

    2014-01-01

    The redundant encoded data packets of digital fountain codes, and the memory they require, are large, which degrades the real-time performance of wireless sensor networks (WSNs). To address this, an encoding/decoding system for Luby Transform (LT) codes with an average frame length is designed. A typical topology model is built, data are transmitted using a cascade of network coding and fountain codes, and a compression encoding algorithm is applied to the generator matrix of the average-frame-length LT code. A weighted-average method and a multi-bit packing method are introduced in the WSN hierarchy, which greatly reduce the storage redundancy without damaging the properties of the fountain code. Experimental results show that the system reduces the storage redundancy in the WSN by a factor on the order of 10^3 and improves the encoding and decoding rates in the WSN as well as the data recovery rate of the data center.
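
    For reference, the core of LT encoding: each output packet is the XOR of d randomly chosen source blocks, with d drawn from a soliton degree distribution. A minimal Python sketch using the ideal soliton distribution (practical systems, and presumably this paper, would use the robust soliton):

        import random

        def ideal_soliton_weights(k):
            # P(d=1) = 1/k, P(d) = 1/(d(d-1)) for d = 2..k
            return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

        def lt_encode_packet(blocks, rng=random):
            # one LT-coded packet: XOR of d randomly chosen source blocks
            k = len(blocks)
            d = rng.choices(range(1, k + 1), weights=ideal_soliton_weights(k))[0]
            chosen = rng.sample(range(k), d)
            payload = 0
            for i in chosen:
                payload ^= blocks[i]      # blocks represented as ints here
            return chosen, payload        # the index list lets the decoder peel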

  3. GML data index method based on element interval coding

    Institute of Scientific and Technical Information of China (English)

    於时才; 郭润牛; 吴衍智

    2013-01-01

    To meet the needs of GML data query, a GML indexing method based on extended element interval coding is proposed after analyzing XML document coding techniques and spatial indexing methods. First, an extended interval coding method encodes the elements, attributes, text, and geometric objects in a GML document. Then, based on the element coding algorithm, non-spatial nodes, spatial nodes, and element nodes are separated from the GML document tree to generate an element coding sequence. On this basis, according to node type, a B+ tree index is built on attribute and text nodes to support value queries, and an R tree index is built on geometric-object nodes to support spatial analysis; a query optimization algorithm avoids unnecessary traversal of nodes, further improving query efficiency. Experimental results show that the indexing method based on element interval coding is feasible and efficient.
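
    The idea behind interval (region) coding for tree-structured documents: each node receives a (start, end) pair from a depth-first traversal, and ancestor-descendant relationships reduce to interval containment. A small Python illustration (not the paper's extended scheme); the sample document is hypothetical.

        from itertools import count

        def interval_encode(tree, counter=None, codes=None):
            # assign (start, end) intervals to a nested-dict tree by DFS;
            # A is an ancestor of B iff start_A < start_B and end_B <= end_A
            if counter is None:
                counter, codes = count(1), {}
            for node, children in tree.items():
                start = next(counter)
                interval_encode(children, counter, codes)
                codes[node] = (start, next(counter))
            return codes

        doc = {"gml:FeatureMember": {"gml:Point": {"gml:pos": {}}, "name": {}}}
        codes = interval_encode(doc)
        a, b = codes["gml:FeatureMember"], codes["gml:pos"]
        print(a[0] < b[0] and b[1] <= a[1])   # True: ancestor-descendant holds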

  4. Assessment of shielding analysis methods, codes, and data for spent fuel transport/storage applications. [Radiation dose rates from shielded spent fuels and high-level radioactive waste

    Energy Technology Data Exchange (ETDEWEB)

    Parks, C.V.; Broadhead, B.L.; Hermann, O.W.; Tang, J.S.; Cramer, S.N.; Gauthey, J.C.; Kirk, B.L.; Roussin, R.W.

    1988-07-01

    This report provides a preliminary assessment of the computational tools and existing methods used to obtain radiation dose rates from shielded spent nuclear fuel and high-level radioactive waste (HLW). Particular emphasis is placed on analysis tools and techniques applicable to facilities/equipment designed for the transport or storage of spent nuclear fuel or HLW. Applications to cask transport, storage, and facility handling are considered. The report reviews the analytic techniques for generating appropriate radiation sources, evaluating the radiation transport through the shield, and calculating the dose at a desired point or surface exterior to the shield. Discrete ordinates, Monte Carlo, and point kernel methods for evaluating radiation transport are reviewed, along with existing codes and data that utilize these methods. A literature survey was employed to select a cadre of codes and data libraries to be reviewed. The selection process was based on specific criteria presented in the report. Separate summaries were written for several codes (or family of codes) that provided information on the method of solution, limitations and advantages, availability, data access, ease of use, and known accuracy. For each data library, the summary covers the source of the data, applicability of these data, and known verification efforts. Finally, the report discusses the overall status of spent fuel shielding analysis techniques and attempts to illustrate areas where inaccuracy and/or uncertainty exist. The report notes the advantages and limitations of several analysis procedures and illustrates the importance of using adequate cross-section data sets. Additional work is recommended to enable final selection/validation of analysis tools that will best meet the US Department of Energy's requirements for use in developing a viable HLW management system. 188 refs., 16 figs., 27 tabs.

  5. Continuous-Energy Adjoint Flux and Perturbation Calculation using the Iterated Fission Probability Method in Monte Carlo Code TRIPOLI-4® and Underlying Applications

    Science.gov (United States)

    Truchet, G.; Leconte, P.; Peneliau, Y.; Santamarina, A.; Malvagi, F.

    2014-06-01

    Pile-oscillation experiments are performed in the MINERVE reactor at CEA Cadarache to improve nuclear data accuracy. In order to precisely calculate the small reactivity variations involved in these experiments, a reference calculation needs to be performed. This calculation may be accomplished using the continuous-energy Monte Carlo code TRIPOLI-4® with the eigenvalue difference method. This "direct" method has shown limitations in the evaluation of very small reactivity effects because it needs to reach a very small variance associated with the reactivity in both states. To address this problem, it was decided to implement the exact perturbation theory in TRIPOLI-4® and, consequently, to calculate a continuous-energy adjoint flux. The Iterated Fission Probability (IFP) method was chosen because it has shown great results in other Monte Carlo codes. The IFP method uses a forward calculation to compute the adjoint flux; consequently, it does not rely on complex code modifications but on the physical definition of the adjoint flux as a phase-space neutron importance. In the first part of this paper, the IFP method implemented in TRIPOLI-4® is described. To illustrate the efficiency of the method, several adjoint fluxes are calculated and compared with their equivalents obtained by the deterministic code APOLLO-2. The new implementation can also calculate the angular adjoint flux. In the second part, a procedure to carry out an exact perturbation calculation is described. A single-cell benchmark has been used to test the accuracy of the method against the "direct" estimation of the perturbation. Once again the IFP-based method shows good agreement, for a calculation time far lower than that of the "direct" method. The main advantage of the method is that the relative accuracy of the reactivity variation does not depend on the magnitude of the variation itself, which allows us to calculate very small reactivity perturbations with high precision. Other applications of…

  6. Decoding Xing-Ling codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2002-01-01

    This paper describes an efficient decoding method for a recent construction of good linear codes as well as an extension to the construction. Furthermore, asymptotic properties and list decoding of the codes are discussed.

  7. Bandwidth efficient coding

    CERN Document Server

    Anderson, John B

    2017-01-01

    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  8. Investigation of behavior of scintillator detector of Alborz observatory array using Monte Carlo method with Geant4 code

    Directory of Open Access Journals (Sweden)

    M. Abbasian Motlagh

    2014-04-01

    Full Text Available Owing to their appropriate temporal resolution, scintillator detectors are used in the Alborz observatory. In this work, the behavior of the scintillation detectors for the passage of electrons with different energies and directions was studied using the simulation code GEANT4. Pulse shapes of the scintillation light, and characteristics such as the total number of photons and the rise and fall times of the optical pulses, were computed for the passage of electrons with energies of 10, 100 and 1000 MeV. Variations of the optical pulse characteristics with the incidence angle and location of the electrons were also investigated.

  9. QR code sampling method based on adaptive match

    Institute of Scientific and Technical Information of China (English)

    宋贤媛; 张多英

    2015-01-01

    A QR code image acquired by a camera usually contains some distortion, so before decoding it must be rectified into a standard QR code. This paper analyzes distortion and correction in QR code recognition. Because some distortion inevitably remains after tilt correction and geometric correction, traditional methods cannot sample the QR code accurately. To address this, an adaptive match method is proposed that obtains the effective sampling region of the QR code from the matching rate of adjacent pixel rows (or columns). Experiments show that the method is stable and real-time and can sample QR codes quickly and accurately.

  10. Long-code Signal Waveform Monitoring Method for Navigation Satellites

    Institute of Scientific and Technical Information of China (English)

    刘建成; 王宇; 宫磊; 徐晓燕

    2016-01-01

    Because the signal is weak, obtaining clear signal waveforms of navigation satellites in orbit is one of the difficulties in satellite navigation signal quality monitoring, so a signal waveform monitoring method for navigation satellites in orbit is proposed. Based on the Vernier sampling principle, a large-diameter parabolic antenna is used to collect the in-orbit satellite signal. After initial phase and residual frequency elimination, accumulation, and combination, a clear chip waveform is obtained. For civilian and long-code signals with the same code rate, the PN code phase bias between them can be determined. Using a large-diameter parabolic antenna to track COMPASS satellites, the civilian and long-code chip waveforms of several COMPASS satellites in the B1 band were obtained, along with the PN code phase bias of the satellite signals. The results show that the civilian and long-code chip waveform profiles differ little, but a code phase bias exists between them.

  11. Coding and Decoding Method for Periodic Permutation Color Structured Light

    Institute of Scientific and Technical Information of China (English)

    秦绪佳; 马吉跃; 张勤锋; 郑红波; 徐晓刚

    2014-01-01

    A periodic permutation color structured light coding and decoding method is presented. The method uses the red, green, and blue primary colors for the encoded stripe pattern and treats any three adjacent color stripes as a group, so the stripe order within a group is unique; white stripes then mark the period number to distinguish different coding groups. This achieves a large coding space with few colors, increases noise immunity, and makes decoding easier. For accurate decoding, an adaptive color stripe segmentation method based on an improved Canny edge-detection operator is presented, covering (1) sequential decoding of the color stripes based on the white stripes and (2) decoding of omitted color stripes. Experimental results show that the method has a large coding period, extracts stripes easily, ensures decoding accuracy, and achieves good coding and decoding results.

  12. Quantum codes from linear codes over finite chain rings

    Science.gov (United States)

    Liu, Xiusheng; Liu, Hualu

    2017-10-01

    In this paper, we provide two methods of constructing quantum codes from linear codes over finite chain rings. The first one is derived from the Calderbank-Shor-Steane (CSS) construction applied to self-dual codes over finite chain rings. The second construction is derived from the CSS construction applied to Gray images of linear codes over the finite chain ring $\mathbb{F}_{p^{2m}} + u\mathbb{F}_{p^{2m}}$. Good parameters of quantum codes from cyclic codes over finite chain rings are obtained.
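
    For orientation, the field-level CSS construction that such methods lift to chain rings, in its standard textbook form (not the paper's ring-theoretic version): a dual-containing classical code yields a quantum code.

        % CSS construction from a dual-containing code:
        % if C is a classical [n, k, d] linear code over F_q with
        % C^\perp \subseteq C, then CSS(C, C^\perp) yields a quantum code
        \text{CSS}(C, C^{\perp}):\qquad [[\, n,\ 2k - n,\ \geq d \,]]_q .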

  13. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low…

  14. HEFF - A user's manual and guide for the HEFF code for thermal-mechanical analysis using the boundary-element method; Version 4.1: Yucca Mountain Site Characterization Project

    Energy Technology Data Exchange (ETDEWEB)

    St. John, C.M.; Sanjeevan, K. [Agapito (J.F.T.) and Associates, Inc., Grand Junction, CO (United States)

    1991-12-01

    The HEFF code combines a simple boundary-element method of stress analysis with closed-form solutions for constant or exponentially decaying heat sources in an infinite elastic body to obtain an approximate method for the analysis of underground excavations in a rock mass with heat generation. This manual describes the theoretical basis for the code, the code structure, model preparation, and the steps taken to assure that the code correctly performs its intended functions. The material within the report addresses the Software Quality Assurance Requirements for the Yucca Mountain Site Characterization Project. 13 refs., 26 figs., 14 tabs.

  15. Measurement Method of Source Code Similarity Based on Word

    Institute of Scientific and Technical Information of China (English)

    朱红梅; 孙未; 王鲁; 张亮

    2014-01-01

    To help teachers quickly and accurately identify plagiarism in programming assignments, this paper develops a method for measuring the similarity of source programs. Based on the word-level edit distance between source codes and the length of their longest common subsequence, the similarity of each pair of submitted programs is computed, and a reasonable dynamic threshold determines whether plagiarism exists. Experimental results show that this method can identify similar source programs submitted by students effectively and accurately.
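
    A compact Python sketch of the two measures the method combines: word-level edit distance and longest common subsequence. The way the two scores are normalized and averaged here is an assumption for illustration; the paper's exact combination and dynamic threshold are not reproduced.

        def edit_distance(a, b):
            # word-level Levenshtein distance between token lists a and b
            prev = list(range(len(b) + 1))
            for i, ta in enumerate(a, 1):
                cur = [i]
                for j, tb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1, cur[-1] + 1,
                                   prev[j - 1] + (ta != tb)))
                prev = cur
            return prev[-1]

        def lcs_length(a, b):
            # length of the longest common subsequence of token lists a and b
            prev = [0] * (len(b) + 1)
            for ta in a:
                cur = [0]
                for j, tb in enumerate(b, 1):
                    cur.append(prev[j - 1] + 1 if ta == tb
                               else max(prev[j], cur[-1]))
                prev = cur
            return prev[-1]

        def similarity(src_a, src_b):
            a, b = src_a.split(), src_b.split()   # naive word-level tokenization
            n = max(len(a), len(b)) or 1
            ed = 1 - edit_distance(a, b) / n      # 1 means identical
            lcs = lcs_length(a, b) / n
            return (ed + lcs) / 2                 # simple average of both scores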

  16. Several common code optimization methods for MATLAB

    Institute of Scientific and Technical Information of China (English)

    程宏辉; 刘红飞; 王佳; 孙玉晨; 黄新; 秦康生

    2011-01-01

    Although MATLAB provides a large number of specialized toolboxes, users still frequently need to write their own programs to solve practical engineering problems, so how to optimize program code according to the inherent characteristics of the software deserves attention. This paper describes several common code optimization methods for MATLAB. Long-term engineering practice has shown that these methods are simple and practical and can improve code execution speed effectively.

  17. High-performance lossless and progressive image compression based on an improved integer lifting scheme and the Rice coding algorithm

    Science.gov (United States)

    Jun, Xie Cheng; Su, Yan; Wei, Zhang

    2006-08-01

    In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and image compression with the CDF(2,2) integer wavelet lifting scheme is investigated. Our experiments show that its lossless image compression performance is much better than that of Huffman, Zip, lossless JPEG and RAR, and slightly better than (or equal to) the well-known SPIHT: the lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7% and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, improving its time efficiency by 162%, and the decoder is about 12.3 times faster, raising its time efficiency by about 148%. Instead of requiring the largest number of wavelet transform levels, the algorithm has high coding efficiency when more than three transform levels are used. For source models with distributions similar to the Laplacian, it can improve coding efficiency and realize progressive transmission coding and decoding.
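
    The Rice code itself is simple: for a parameter k, a value is split into a quotient sent in unary and k fixed remainder bits. A minimal Python sketch (signed residuals would first be mapped to nonnegative integers, e.g. by zigzag folding):

        def rice_encode(value, k):
            # unary quotient, then k fixed remainder bits
            q, r = value >> k, value & ((1 << k) - 1)
            rem = format(r, f"0{k}b") if k else ""
            return "1" * q + "0" + rem

        def rice_decode(bits, k):
            q = bits.index("0")               # length of the unary prefix
            r = int(bits[q + 1:q + 1 + k], 2) if k else 0
            return (q << k) | r

        def zigzag(n):
            # fold signed residuals onto nonnegative ints: 0,-1,1,-2 -> 0,1,2,3
            return -2 * n - 1 if n < 0 else 2 * n

        # rice_encode(9, 2) == "11001"; rice_decode("11001", 2) == 9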

  18. Combustion chamber analysis code

    Science.gov (United States)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-05-01

    A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  19. Code Flows : Visualizing Structural Evolution of Source Code

    NARCIS (Netherlands)

    Telea, Alexandru; Auber, David

    2008-01-01

    Understanding detailed changes done to source code is of great importance in software maintenance. We present Code Flows, a method to visualize the evolution of source code geared to the understanding of fine and mid-level scale changes across several file versions. We enhance an existing visual met

  1. A fast tree-based method for estimating column densities in Adaptive Mesh Refinement codes: Influence of UV radiation field on the structure of molecular clouds

    CERN Document Server

    Valdivia, Valeska

    2014-01-01

    Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled, and their interplay influences the physical and chemical properties of the gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims. Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of UV screening on the dynamics and on the statistical properties of molecular clouds. Methods. We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on clump formation. Results. We find that the accuracy of the tree-based method for the extinction is better than 10%, while the …

  2. Modeling and commissioning of a Clinac 600 CD by Monte Carlo method using the BEAMnrc and DOSXYZnrc codes

    Energy Technology Data Exchange (ETDEWEB)

    Junior, Reginaldo G., E-mail: reginaldo.junior@ifmg.edu.br [Instituto Federal de Minas Gerais (IFMG), Formiga, MG (Brazil). Departamento de Engenharia Eletrica; Oliveira, Arno H. de; Sousa, Romulo V., E-mail: arnoheeren@gmail.com, E-mail: romuloverdolin@yahoo.com.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear; Mourao, Arnaldo P., E-mail: apratabhz@gmail.com [Centro Federal de Educacao Tecnologica de Minas Gerais, Belo Horizonte, MG (Brazil)

    2015-07-01

    This paper reports the modeling of a Clinac 600 CD linear accelerator with the BEAMnrc application, derived from the EGSnrc radiation transport code, indicating relevant details of the modeling that traditionally impose difficulties on the process. This accelerator was commissioned by comparing experimental dosimetric data with the computational data obtained with the DOSXYZnrc application. The quantities compared in the dosimetry process were field profiles and percentage depth dose curves obtained in a water phantom with a cubic edge of 30 cm. In all comparisons made, the computational data showed satisfactory precision, and discrepancies with the experimental data did not exceed 3%, proving the effectiveness of the model. Both the accelerator model and the computational dosimetry methodology revealed the need for adjustments that will probably allow obtaining more accurate data than those presented here. These adjustments are mainly associated with improving the resolution of the field profiles, the voxelization of the phantom, and the optimization of computing time. (author)

  3. Looking back on 10 years of the ATLAS Metadata Interface. Reflections on architecture, code design and development methods.

    CERN Document Server

    Fulachier, J; The ATLAS collaboration; Albrand, S; Lambert, F

    2014-01-01

    The “ATLAS Metadata Interface” framework (AMI) has been developed in the context of ATLAS, one of the largest scientific collaborations. AMI can be considered a mature application, since its basic architecture has been maintained for over 10 years. In this paper we briefly describe the architecture and the main uses of the framework within the experiment (TagCollector for release management and Dataset Discovery). These two applications, which share almost 2000 registered users, are superficially quite different; however, much of the code is shared, and they have been developed and maintained over a decade almost entirely by the same team of three people. We discuss how the architectural principles established at the beginning of the project have allowed us both to keep integrating new technologies and to respond to the new metadata use cases that inevitably appear over such a time period.

  4. An observational method to code concussions in the National Hockey League (NHL): the heads-up checklist.

    Science.gov (United States)

    Hutchison, Michael G; Comper, Paul; Meeuwisse, Willem H; Echemendia, Ruben J

    2014-01-01

    Development of effective strategies for preventing concussions is a priority in all sports, including ice hockey. Digital video records of sports events are a rich source of valuable information and are therefore a promising resource for analysing situational factors and injury mechanisms related to concussion. The aim was to determine whether independent raters reliably agree on the antecedent events and mechanisms of injury when using a standardised observational tool known as the heads-up checklist (HUC) to code digital video records of concussions in the National Hockey League (NHL). The study occurred in two phases. In phase 1, four raters (two naïve and two expert) independently viewed and completed HUCs for 25 video records of NHL concussions randomly chosen from the pool of concussion events from the 2006-2007 regular season. Following initial analysis, three additional factors were added to the HUC, resulting in a total of 17 factors of interest. Two expert raters then viewed the remaining concussion events from the 2006-2007 season, as well as all digital video records of concussion events up to 31 December 2009 (n=174). For phase 1, the majority of the factors had a κ value of 0.6 or higher (8 of 15 factors for naïve raters; 11 of 15 factors for expert raters). For phase 2, all factors had a total percent agreement value greater than 0.8 and κ values above 0.65 for the expert raters. The HUC is an objective, reliable tool for coding the antecedent events and mechanisms of concussions in the NHL.

  5. DCT Transform Domain Filtering Code Acquisition Method

    Institute of Scientific and Technical Information of China (English)

    李小捷; 许录平

    2012-01-01

    Focusing on satellite signal acquisition with small time and frequency uncertainty, we propose a novel code acquisition algorithm based on the discrete cosine transform (DCT). First, a set of time-domain correlation vectors is obtained by a partial matched filter (PMF). Then, for every candidate code phase, transform-domain filtering and signal reconstruction are performed. Finally, an energy-based detection is applied. Because the signal and noise produced by the PMF have different time-varying properties, the noise energy is greatly reduced by the filtering while the signal energy suffers almost no loss after reconstruction, thereby increasing the detection probability at the same false alarm probability. Theoretical analysis and simulation results show that the detection algorithm effectively improves the detection probability with low complexity.
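
    A minimal Python sketch of the filtering-and-detection step for one candidate code phase, assuming the PMF outputs are given as a real-valued vector. The fraction of DCT coefficients kept and the quantile rule are illustrative assumptions, not the paper's parameters.

        import numpy as np
        from scipy.fft import dct, idct

        def dct_filter_detect(pmf_out, keep=0.2, threshold=None):
            # pmf_out: real vector of partial-matched-filter outputs
            # (one per coherent block) for one candidate code phase
            c = dct(np.asarray(pmf_out, dtype=float), norm="ortho")
            cut = np.quantile(np.abs(c), 1 - keep)   # keep the strongest 20%
            c[np.abs(c) < cut] = 0.0                 # zero mostly-noise terms
            rebuilt = idct(c, norm="ortho")
            energy = np.sum(rebuilt ** 2)            # energy-based detection
            return energy if threshold is None else energy > threshold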

  6. Fracture Analysis of Vessels. Oak Ridge FAVOR, v06.1, Computer Code: Theory and Implementation of Algorithms, Methods, and Correlations

    Energy Technology Data Exchange (ETDEWEB)

    Williams, P. T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dickson, T. L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Yin, S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2007-12-01

    The current regulations to ensure that nuclear reactor pressure vessels (RPVs) maintain their structural integrity when subjected to transients such as pressurized thermal shock (PTS) events were derived from computational models developed in the early-to-mid 1980s. Since that time, advancements and refinements in the relevant technologies that impact RPV integrity assessment have led to an effort by the NRC to re-evaluate its PTS regulations. Updated computational methodologies have been developed through interactions between experts in the relevant disciplines of thermal hydraulics, probabilistic risk assessment, materials embrittlement, fracture mechanics, and inspection (flaw characterization). Contributors to the development of these methodologies include the NRC staff, their contractors, and representatives from the nuclear industry. These updated methodologies have been integrated into the Fracture Analysis of Vessels -- Oak Ridge (FAVOR, v06.1) computer code developed for the NRC by the Heavy Section Steel Technology (HSST) program at Oak Ridge National Laboratory (ORNL). The FAVOR, v04.1, code represents the baseline NRC-selected applications tool for re-assessing the current PTS regulations. This report is intended to document the technical bases for the assumptions, algorithms, methods, and correlations employed in the development of the FAVOR, v06.1, code.

  7. Fast coeff_token decoding method and new memory architecture design for an efficient H.264/AVC context-based adaptive variable length coding decoder

    Science.gov (United States)

    Moon, Yong Ho; Yoon, Kun Su; Ha, Seok Wun

    2009-12-01

    A fast coeff_token decoding method based on a new memory architecture is proposed to implement an efficient context-based adaptive variable-length coding (CAVLC) decoder. The heavy memory access needed in CAVLC decoding is a significant issue in designing a real system, such as digital multimedia broadcasting players, portable media players, and mobile phones with video, because it results in high power consumption and operational delay. Recently, a new coeff_token variable-length decoding method was suggested to reduce memory accesses. However, it still requires a large portion of the total memory accesses in CAVLC decoding. In this work, an effective memory architecture is designed through careful examination of the codewords in the variable-length code tables. In addition, a novel fast decoding method is proposed to further reduce the memory accesses required for reconstructing the coeff_token element. Only one memory access is used for reconstructing each coeff_token element in the proposed method.

  8. Benchmark of the non-parametric Bayesian deconvolution method implemented in the SINBAD code for X/γ rays spectra processing

    Science.gov (United States)

    Rohée, E.; Coulon, R.; Carrel, F.; Dautremer, T.; Barat, E.; Montagu, T.; Normand, S.; Jammes, C.

    2016-11-01

    Radionuclide identification and quantification are a serious concern for many applications, such as in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High-resolution gamma-ray spectrometry based on high-purity germanium diode detectors is the best solution available for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, difficulties remain in the analysis when full-energy peaks are folded together with a high ratio between their amplitudes, and when the Compton background is much larger than the signal of a single peak. In this context, this study compares a conventional analysis based on the "iterative peak fitting deconvolution" method with a "nonparametric Bayesian deconvolution" approach developed by CEA LIST and implemented in the SINBAD code. The iterative peak fit deconvolution is used in this study as a reference method, largely validated by industrial standards, to unfold complex spectra from HPGe detectors. Complex cases of spectra are studied from IAEA benchmark protocol tests and from measured spectra. The SINBAD code shows promising deconvolution capabilities compared to the conventional method without any expert fine-tuning of parameters.

  9. Implementation of a flexible and scalable particle-in-cell method for massively parallel computations in the mantle convection code ASPECT

    Science.gov (United States)

    Gassmöller, Rene; Bangerth, Wolfgang

    2016-04-01

    Particle-in-cell methods have a long history and many applications in geodynamic modelling of mantle convection, lithospheric deformation and crustal dynamics. They are primarily used to track material information, the strain a material has undergone, the pressure-temperature history a certain material region has experienced, or the amount of volatiles or partial melt present in a region. However, their efficient parallel implementation - in particular combined with adaptive finite-element meshes - is complicated due to the complex communication patterns and frequent reassignment of particles to cells. Consequently, many current scientific software packages accomplish this efficient implementation by specifically designing particle methods for a single purpose, like the advection of scalar material properties that do not evolve over time (e.g., for chemical heterogeneities). Design choices for particle integration, data storage, and parallel communication are then optimized for this single purpose, making the code relatively rigid to changing requirements. Here, we present the implementation of a flexible, scalable and efficient particle-in-cell method for massively parallel finite-element codes with adaptively changing meshes. Using a modular plugin structure, we allow maximum flexibility of the generation of particles, the carried tracer properties, the advection and output algorithms, and the projection of properties to the finite-element mesh. We present scaling tests ranging up to tens of thousands of cores and tens of billions of particles. Additionally, we discuss efficient load-balancing strategies for particles in adaptive meshes with their strengths and weaknesses, local particle-transfer between parallel subdomains utilizing existing communication patterns from the finite element mesh, and the use of established parallel output algorithms like the HDF5 library. Finally, we show some relevant particle application cases, compare our implementation to a

  10. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    … alternatives to mainstream development, from performances of the live-coding scene to the organizational forms of commons-based peer production; the democratic promise of social media and their paradoxical role in suppressing political expression; and the market's emptying out of possibilities for free … development, Speaking Code unfolds an argument to undermine the distinctions between criticism and practice, and to emphasize the aesthetic and political aspects of software studies. Not reducible to its functional aspects, program code mirrors the instability inherent in the relationship of speech … expression in the public realm. The book's line of argument defends language against its invasion by economics, arguing that speech continues to underscore the human condition, however paradoxical this may seem in an era of pervasive computing.

  11. A Fast and Effective Localization Method of Quick Response Code

    Institute of Scientific and Technical Information of China (English)

    王景中; 贺磊

    2015-01-01

    为解决在复杂背景下,由于QR码无法定位而导致的识别率较低的问题,提出了一种新的QR条码定位方法。考虑到QR码的结构特征,先对QR码进行轮廓定位,确定QR码可能所在的区域,然后对QR码进行精确定位。 QR码轮廓定位是用Hough变换检测近似正方形的区域,然后合并嵌套的正方形区域,最后进行区域调整。精确定位的过程利用了KMP算法的思想,提高了寻找满足特定比例线段的速度,从而提高了精确定位的速度。实验结果表明,相比于传统的QR码定位的方法,该方法可以准确快速地定位QR条码,整体的识别速度和识别率都有了较大的提高,同时具有很高的实用价值。%To solve the low recognition rate of QR code under complex background caused by the invalid localization,propose a new ap-proach for QR code localization. Taking the structure of QR code into account,the first step is contour localization that determines the possible regions of QR code and the second step is accurate localization. Contour localization applies Hough transform to detect regions approximate to square,then merge those squares which are nested and made region adjustment at last. The thought of KMP algorithm is used in the process of accurate localization to enhance the speed of finding the special ratio line,improving the speed of localization. The results of experiments show that this method is able to locate the QR code fast and precisely and the speed of recognition as well as recog-nition rate are greatly improved compared with the conventional method and has high practical value as well.

  12. Statistical methods for the analysis of safety margins by BE + U codes

    Energy Technology Data Exchange (ETDEWEB)

    Villamizar, M.; Martorell, S.; Villanueva, J. F.; Carlos, S.; Sanchez, A.; Pelayo, F.; Mendizabal, R.; Sol, I.

    2012-11-01

    Statistical methods for the analysis of safety margins through BE+U codes: this paper presents statistical analysis tools (PLS, PCS, variance decomposition) for understanding the relationships between the input variables (defined by the distribution functions of the thermal-hydraulic model parameters) and an output variable, e.g., the PCT. The objective is to identify the input variables that most affect the output variables. In addition, it is possible to quantify the contribution of the uncertainty of each input variable to the uncertainty of the results. The application case addresses a large-break LOCA in a PWR. (Author) 16 refs.

  13. MO-F-CAMPUS-I-04: Characterization of Fan Beam Coded Aperture Coherent Scatter Spectral Imaging Methods for Differentiation of Normal and Neoplastic Breast Structures

    Energy Technology Data Exchange (ETDEWEB)

    Morris, R; Albanese, K; Lakshmanan, M; Greenberg, J; Kapadia, A [Duke University Medical Center, Durham, NC, Carl E Ravin Advanced Imaging Laboratories, Durham, NC (United States)

    2015-06-15

    Purpose: This study intends to characterize the spectral and spatial resolution limits of various fan beam geometries for differentiation of normal and neoplastic breast structures via coded aperture coherent scatter spectral imaging techniques. In previous studies, pencil beam raster scanning methods using coherent scatter computed tomography and selected volume tomography have yielded excellent results for tumor discrimination. However, these methods do not readily conform to clinical constraints, primarily because of prolonged scan times and excessive dose to the patient. Here, we refine a fan beam coded aperture coherent scatter imaging system to characterize the tradeoffs between dose, scan time and image quality for breast tumor discrimination. Methods: An X-ray tube (125 kVp, 400 mAs) illuminated the sample with collimated fan beams of varying widths (3 mm to 25 mm). Scatter data were collected via two linear-array energy-sensitive detectors oriented parallel and perpendicular to the beam plane. An iterative reconstruction algorithm yields images of the sample's spatial distribution and the respective spectral data for each location. To model in-vivo tumor analysis, surgically resected breast tumor samples were used in conjunction with lard, which has a form factor comparable to adipose (fat) tissue. Results: Quantitative analysis with the current setup geometry indicated optimal performance for beams up to 10 mm wide, with wider beams producing poorer spatial resolution. Scan time for a fixed volume was reduced by a factor of 6 when scanned with a 10 mm fan beam compared to a 1.5 mm pencil beam. Conclusion: The study demonstrates that fan beam coherent scatter spectral imaging can differentiate normal and neoplastic breast tissues while reducing dose and scan times and sufficiently preserving spectral and spatial resolution. Future work to alter the coded aperture and detector geometries could potentially allow the use of even wider fans, thereby making coded

  14. Discrete probability models and methods probability on graphs and trees, Markov chains and random fields, entropy and coding

    CERN Document Server

    Brémaud, Pierre

    2017-01-01

    The emphasis in this book is placed on general models (Markov chains, random fields, random graphs), universal methods (the probabilistic method, the coupling method, the Stein-Chen method, martingale methods, the method of types) and versatile tools (Chernoff's bound, Hoeffding's inequality, Holley's inequality) whose domain of application extends far beyond the present text. Although the examples treated in the book relate to possible applications in the communication and computing sciences, in operations research and in physics, this book is in the first instance concerned with theory. The level of the book is that of a beginning graduate course. It is self-contained, the prerequisites consisting merely of basic calculus (series) and basic linear algebra (matrices). The reader is not assumed to be trained in probability since the first chapters give in considerable detail the background necessary to understand the rest of the book.

  15. Different imaging methods in the comparative assessment of vascular lesions: color-coded duplex sonography, laser Doppler perfusion imaging, and infrared thermography

    Science.gov (United States)

    Urban, Peter; Philipp, Carsten M.; Weinberg, Lutz; Berlien, Hans-Peter

    1997-12-01

    The aim of the study was the comparative investigation of cutaneous and subcutaneous vascular lesions. By means of color-coded duplex sonography (CCDS), laser Doppler perfusion imaging (LDPI) and infrared thermography (IT), we examined hemangiomas, vascular malformations and port-wine stains to obtain evidence about depth, perfusion and vascularity. LDPI is a helpful method for gaining an impression of the capillary part of vascular lesions and the course of superficial vessels. CCDS has disadvantages in detecting superficial perfusion, but connections to deeper vascularization can be examined precisely; in some cases it is the only method for visualizing vascular malformations. IT gives additional hints on low-blood-flow areas or indicates arteriovenous shunts. Only the combination of all imaging methods allows a complete assessment, not only for planning but also for controlling the laser treatment of vascular lesions.

  16. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic component codes given as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe the codes succinctly using Gröbner bases.

  17. Report number codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  18. Dyadic wavelet for image coding implementation on a Xilinx MicroBlaze processor: application to neutron radiography.

    Science.gov (United States)

    Saadi, Slami; Touiza, Maamar; Kharfi, Fayçal; Guessoum, Abderrezak

    2013-12-01

    In this work, we present a mixed software/hardware implementation of a 2-D signal encoder/decoder using the dyadic discrete wavelet transform (DWT) based on quadrature mirror filters (QMF) and Mallat's fast wavelet algorithm. The design is built and compiled on the embedded development kit EDK 6.3i and the synthesis software ISE 6.3i, which are available with the Xilinx Virtex-II V2MB1000 FPGA. A Huffman coding scheme is used to encode the wavelet coefficients so that they can be transmitted progressively through an Ethernet TCP/IP based connection. The possible reconfiguration can be exploited to attain higher performance. The design will be integrated with the neutron radiography system used with the Es-Salem research reactor.
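
    The pipeline in this record, a 2-D DWT followed by Huffman coding of the coefficients, can be sketched in a few lines of Python. The PyWavelets package stands in for the QMF filter bank, and the uniform quantizer step is an arbitrary choice of ours, not the paper's.

        # 2-D dyadic DWT + Huffman coding of quantized coefficients (illustrative).
        import heapq
        from collections import Counter
        from itertools import count
        import numpy as np
        import pywt

        def huffman_code(symbols):
            """Build a Huffman codebook {symbol: bitstring} from a symbol sequence."""
            freq = Counter(symbols)
            tiebreak = count()                    # avoids comparing dict nodes on frequency ties
            heap = [(f, next(tiebreak), {s: ""}) for s, f in freq.items()]
            heapq.heapify(heap)
            while len(heap) > 1:
                f1, _, c1 = heapq.heappop(heap)
                f2, _, c2 = heapq.heappop(heap)
                merged = {s: "0" + b for s, b in c1.items()}
                merged.update({s: "1" + b for s, b in c2.items()})
                heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
            return heap[0][2]

        image = np.random.rand(64, 64)            # stand-in for a neutron radiograph
        coeffs = pywt.wavedec2(image, "db2", level=3)
        flat, _ = pywt.coeffs_to_array(coeffs)    # flatten the subband structure
        quantized = np.round(flat * 32).astype(int).ravel().tolist()

        book = huffman_code(quantized)
        bitstream = "".join(book[s] for s in quantized)
        # Compare with 16-bit raw storage (an arbitrary reference).
        print(f"{len(quantized) * 16} raw bits -> {len(bitstream)} Huffman-coded bits")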

  19. A highly optimized code for calculating atomic data at neutron star magnetic field strengths using a doubly self-consistent Hartree-Fock-Roothaan method

    Science.gov (United States)

    Schimeczek, C.; Engel, D.; Wunner, G.

    2014-05-01

    account the shielding of the core potential for outer electrons by inner electrons, and an optimal finite-element decomposition of each individual longitudinal wave function. These measures largely enhance the convergence properties compared to the previous code and lead to speed-ups by factors up to two orders of magnitude compared with the implementation of the Hartree-Fock-Roothaan method used by Engel and Wunner in [D. Engel, G. Wunner, Phys. Rev. A 78, 032515 (2008)].

  20. A method of improving the security of QR code

    Institute of Scientific and Technical Information of China (English)

    张雅奇; 张定会; 江平

    2012-01-01

    The QR code has many advantages, and with its wide application, decoding tools have developed rapidly, so the security of the information it carries has drawn attention. This paper proposes a method in which the sensitive information in a QR code is hashed with SHA-1 and replaced by its message digest. The new QR code consists of the message digest of the sensitive information together with the remaining non-sensitive information of the original QR code. Attackers can hardly obtain the sensitive information through decoding tools. Even if the digest of the sensitive information is intercepted, recovering the original sensitive information from it is computationally infeasible owing to the one-wayness of SHA-1.
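
    A minimal sketch of this digest-substitution idea follows. The third-party `qrcode` package stands in for whatever QR generator the authors used, and the field layout is a made-up example, not the paper's format.

        # Replace a sensitive field by its SHA-1 digest before encoding a QR code.
        import hashlib
        import qrcode

        record = {"name": "public info", "account": "6222-0000-1111-2222"}  # toy data
        sensitive = record["account"]

        digest = hashlib.sha1(sensitive.encode("utf-8")).hexdigest()
        payload = f"name={record['name']};account_digest={digest}"

        img = qrcode.make(payload)        # returns a PIL image of the new QR code
        img.save("protected_qr.png")

        # A verifier who already knows the account number can check it against the code:
        assert hashlib.sha1(sensitive.encode("utf-8")).hexdigest() == digest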

  1. Summary of the Models and Methods for the FEHM Application-A Finite-Element Heat- and Mass-Transfer Code

    Energy Technology Data Exchange (ETDEWEB)

    George A. Zyvoloski; Bruce A. Robinson; Zora V. Dash; Lynn L. Trease

    1997-07-01

    The mathematical models and numerical methods employed by the FEHM application, a finite-element heat- and mass-transfer computer code that can simulate nonisothermal multiphase multi-component flow in porous media, are described. The use of this code is applicable to natural-state studies of geothermal systems and groundwater flow. A primary use of the FEHM application will be to assist in the understanding of flow fields and mass transport in the saturated and unsaturated zones below the proposed Yucca Mountain nuclear waste repository in Nevada. The component models of FEHM are discussed. The first major component, Flow- and Energy-Transport Equations, deals with heat conduction; heat and mass transfer with pressure- and temperature-dependent properties, relative permeabilities and capillary pressures; isothermal air-water transport; and heat and mass transfer with noncondensible gas. The second component, Dual-Porosity and Double-Porosity/Double-Permeability Formulation, is designed for problems dominated by fracture flow. Another component, The Solute-Transport Models, includes both a reactive-transport model that simulates transport of multiple solutes with chemical reaction and a particle-tracking model. Finally, the component, Constitutive Relationships, deals with pressure- and temperature-dependent fluid/air/gas properties, relative permeabilities and capillary pressures, stress dependencies, and reactive and sorbing solutes. Each of these components is discussed in detail, including purpose, assumptions and limitations, derivation, applications, numerical method type, derivation of numerical model, location in the FEHM code flow, numerical stability and accuracy, and alternative approaches to modeling the component.

  2. Method of Turbo Code Based on 3GPP Wireless Standards

    Institute of Scientific and Technical Information of China (English)

    邓恰; 王云飞

    2011-01-01

    Turbo codes are widely applied in broadband wireless communication for their excellent error-correction performance. For the Turbo coding/decoding scheme specified in the 3GPP standard, this paper presents an implementation using the TMS320C6416T chip. The paper focuses on the detailed configuration steps of the TCP (Turbo coprocessor) and analyzes the differences between the TCP coprocessor and a MATLAB algorithm simulation in terms of both time and performance.

  3. On constructing disjoint linear codes

    Institute of Scientific and Technical Information of China (English)

    ZHANG Weiguo; CAI Mian; XIAO Guozhen

    2007-01-01

    To produce highly nonlinear resilient functions, disjoint linear codes were originally proposed by Johansson and Pasalic in IEEE Trans. Inform. Theory, 2003, 49(2): 494-501. In this paper, an effective method for finding a set of such disjoint linear codes is presented. When n ≥ 2k, we can find a set of [n, k] disjoint linear codes with cardinality at least 2. We also describe a result on constructing a set of [n, k] disjoint linear codes with minimum distance at least some fixed positive integer.
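
    In this literature, two linear codes are called disjoint when they share only the zero codeword. The brute-force check below illustrates that definition for small parameters over GF(2); the two generator matrices are toy examples of ours, not the paper's construction.

        # Check that two small binary [6, 2] codes intersect only in the zero word.
        import itertools
        import numpy as np

        def codewords(G):
            """All codewords of the binary linear code with generator matrix G (k x n)."""
            k, _ = G.shape
            return {tuple(np.mod(np.array(m) @ G, 2))
                    for m in itertools.product((0, 1), repeat=k)}

        G1 = np.array([[1, 0, 1, 1, 0, 0],
                       [0, 1, 0, 1, 1, 0]])
        G2 = np.array([[1, 1, 0, 0, 1, 0],
                       [0, 0, 1, 0, 1, 1]])

        common = codewords(G1) & codewords(G2)
        print("disjoint" if common == {(0,) * 6} else f"shared nonzero words: {common}")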

  4. Development of a neutronic code broadcasting 2D and 3D stationary by the finite volume method; Desarrollo de un codigo neutronico de difusion 2D y 3D estacionario por el metodo de volumenes finitos

    Energy Technology Data Exchange (ETDEWEB)

    Bernal Garcia, A.

    2014-07-01

    The objective of this work is the development of a steady-state modal neutron diffusion code in 2D and 3D using the finite volume method, built from free codes and applicable to reactors of any geometry. Currently, the numerical methods most commonly used in diffusion codes give good results on structured meshes, but their application to unstructured meshes is not straightforward and may present convergence and stability problems. The use of unstructured meshes is justified by their easy adaptation to complex geometries and by the development of coupled thermal-hydraulic/neutronic codes, as well as of computational fluid dynamics (CFD) codes, which encourages developing a neutronic code that shares the same mesh as the fluid dynamics codes, a mesh that in general tends to be unstructured. Moreover, refining the mesh and adapting it to complex geometries is a further incentive for learning more about what happens in the reactor core. Finally, the code has been validated with simulations of a homogeneous reactor and a heterogeneous one, in both 2D and 3D. (Author)
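
    For orientation, the sketch below applies a finite-volume discretization to the simplest relative of this problem: one-group, steady neutron diffusion on a 1-D slab with a uniform source and zero-flux edges. It is a toy of ours, not the modal 2D/3D unstructured code described above, and all material data are invented.

        # Finite-volume discretization of -d/dx( D dphi/dx ) + Sigma_a * phi = S.
        import numpy as np

        N, L = 50, 100.0                 # cells, slab width (cm)
        h = L / N
        D, sig_a, S = 1.0, 0.02, 1.0     # diffusion coeff., absorption, uniform source

        A = np.zeros((N, N))
        for i in range(N):
            A[i, i] = sig_a * h
            for j in (i - 1, i + 1):     # internal faces: current -D * (phi_j - phi_i) / h
                if 0 <= j < N:
                    A[i, i] += D / h
                    A[i, j] -= D / h
            if i in (0, N - 1):          # boundary faces: phi = 0 at half-cell distance
                A[i, i] += D / (h / 2)

        phi = np.linalg.solve(A, np.full(N, S * h))
        print(f"peak flux {phi.max():.2f} at x = {(np.argmax(phi) + 0.5) * h:.1f} cm")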

  5. Prediction Method for Protein Secondary Structure Based on Word Frequency Statistics Coding and SVM

    Institute of Scientific and Technical Information of China (English)

    石陆魁; 刘倩倩; 王靖鑫; 张军

    2014-01-01

    In protein secondary structure prediction, the codes produced by existing amino acid coding methods have high dimensionality, and these methods do not exploit the statistical information in the amino acid sequence. To address this, a new coding method based on word frequency statistics is presented, which counts the frequency with which each amino acid appears in an amino acid sequence fragment; coding a fragment with this method yields a 20-dimensional vector. In contrast to other coding methods, the resulting codes have lower dimensionality and make full use of the influence of all amino acids in the fragment on the target amino acid. In experiments, four coding methods combined with support vector machines were compared against a BP neural network. The results show that combining the word frequency statistics coding with an SVM greatly improves the prediction accuracy of protein secondary structure and is far superior to the other methods.
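
    The encoding itself is simple to reproduce: a 20-dimensional vector of per-residue frequencies fed to an SVM. The sketch below uses scikit-learn; the fragments and labels are made up, since the paper's data set is not given here.

        # 20-dim word-frequency encoding of sequence fragments + an SVM classifier.
        import numpy as np
        from sklearn.svm import SVC

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

        def freq_vector(fragment: str) -> np.ndarray:
            """Relative frequency of each of the 20 amino acids in the fragment."""
            v = np.array([fragment.count(a) for a in AMINO_ACIDS], dtype=float)
            return v / max(len(fragment), 1)

        fragments = ["AAAALLLEEK", "VVVIIFFYWL", "GGPPSSTNQD", "AALLEEKKRA"]
        labels = ["H", "E", "C", "H"]            # made-up secondary-structure labels

        X = np.array([freq_vector(f) for f in fragments])
        clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
        print(clf.predict([freq_vector("AALLLEEKKK")]))   # likely 'H' for this toy set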

  6. System and method for investigating sub-surface features of a rock formation with acoustic sources generating coded signals

    Energy Technology Data Exchange (ETDEWEB)

    Vu, Cung Khac; Nihei, Kurt; Johnson, Paul A; Guyer, Robert; Ten Cate, James A; Le Bas, Pierre-Yves; Larmat, Carene S

    2014-12-30

    A system and a method for investigating rock formations include generating, by a first acoustic source, a first acoustic signal comprising a first plurality of pulses, each pulse including a first modulated signal at a central frequency, and generating, by a second acoustic source, a second acoustic signal comprising a second plurality of pulses. A receiver arranged within the borehole receives a detected signal that includes a signal generated by a non-linear mixing process from the first and second acoustic signals in a non-linear mixing zone within the intersection volume. The method also includes processing the received signal to extract the signal generated by the non-linear mixing process over noise or over signals generated by a linear interaction process, or both.
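
    As a generic illustration of the last step, the sketch below extracts a weak non-linear mixing product at the difference frequency f1 - f2 from a noisy received signal with a band-pass filter. This is textbook signal processing, not the patented processing chain; all frequencies and amplitudes are invented.

        # Extract the difference-frequency component produced by non-linear mixing.
        import numpy as np
        from scipy.signal import butter, filtfilt

        fs = 100_000.0                        # sampling rate (Hz)
        t = np.arange(0, 0.05, 1 / fs)
        f1, f2 = 30_000.0, 22_000.0           # the two source center frequencies

        linear = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)
        mixed = 0.01 * np.sin(2*np.pi*(f1 - f2)*t)      # weak non-linear mixing product
        received = linear + mixed + 0.01*np.random.default_rng(1).normal(size=t.size)

        fd = f1 - f2                           # 8 kHz difference frequency
        b, a = butter(4, [0.8*fd/(fs/2), 1.2*fd/(fs/2)], btype="band")
        extracted = filtfilt(b, a, received)
        # The mixing product has RMS 0.01/sqrt(2), i.e. about 0.0071.
        print(f"extracted RMS ~ {np.sqrt(np.mean(extracted**2)):.4f}")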

  8. Quality of recording of diabetes in the UK: how does the GP's method of coding clinical data affect incidence estimates? Cross-sectional study using the CPRD database

    Science.gov (United States)

    Tate, A Rosemary; Dungey, Sheena; Glew, Simon; Beloff, Natalia; Williams, Rachael; Williams, Tim

    2017-01-01

    Objective To assess the effect of coding quality on estimates of the incidence of diabetes in the UK between 1995 and 2014. Design A cross-sectional analysis examining diabetes coding from 1995 to 2014 and how the choice of codes (diagnosis codes vs codes which suggest diagnosis) and quality of coding affect estimated incidence. Setting Routine primary care data from 684 practices contributing to the UK Clinical Practice Research Datalink (data contributed from Vision (INPS) practices). Main outcome measure Incidence rates of diabetes and how they are affected by (1) GP coding and (2) excluding ‘poor’ quality practices with at least 10% incident patients inaccurately coded between 2004 and 2014. Results Incidence rates and accuracy of coding varied widely between practices and the trends differed according to selected category of code. If diagnosis codes were used, the incidence of type 2 increased sharply until 2004 (when the UK Quality Outcomes Framework was introduced), and then flattened off, until 2009, after which they decreased. If non-diagnosis codes were included, the numbers continued to increase until 2012. Although coding quality improved over time, 15% of the 666 practices that contributed data between 2004 and 2014 were labelled ‘poor’ quality. When these practices were dropped from the analyses, the downward trend in the incidence of type 2 after 2009 became less marked and incidence rates were higher. Conclusions In contrast to some previous reports, diabetes incidence (based on diagnostic codes) appears not to have increased since 2004 in the UK. Choice of codes can make a significant difference to incidence estimates, as can quality of recording. Codes and data quality should be checked when assessing incidence rates using GP data. PMID:28122831

  9. Establishing and evaluating bar-code technology in blood sampling system: a model based on a human-centered design method.

    Science.gov (United States)

    Chou, Shin-Shang; Yan, Hsiu-Fang; Huang, Hsiu-Ya; Tseng, Kuan-Jui; Kuo, Shu-Chen

    2012-01-01

    This study used a human-centered design method to develop bar-code technology for the blood sampling process. Information gathered through multilevel analysis guided the construction of the bar-code system to verify patient identity, simplify the work process, and prevent medical errors. A Technology Acceptance Model questionnaire was developed to assess the effectiveness of the system, and data on patient identification and sample errors were collected daily. The average score of the 8 items on users' perceived ease of use was 25.21 (3.72), of the 9 items on users' perceived usefulness 28.53 (5.00), and of the 14 items on task-technology fit 52.24 (7.09). The rates of patient identification errors and of samples with cancelled orders dropped to zero; however, a new type of error emerged after the system was deployed, concerning the position of the barcode stickers on the sample tubes. Overall, more than half of the nurses (62.5%) were willing to use the new system.

  10. A VB Source Code Plagiarism Detection Method Based on N-gram

    Institute of Scientific and Technical Information of China (English)

    吴斐; 唐雁; 补嘉

    2012-01-01

    With the rapid development of information networks and the widespread use of electronic text, text plagiarism has become more serious. To effectively curb plagiarism in VB program code, a VB source code plagiarism detection method based on N-grams is proposed, in which N-grams represent the VB code files to improve detection accuracy, and parallel computing based on the Fork-Join framework is used to improve the efficiency of the algorithm. Comparative experiments with the MOSS system show that the N-gram-based VB source code plagiarism detection method achieves higher detection accuracy than MOSS and is able to handle large-scale data.
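
    The core idea, representing each source file as a set of N-grams and comparing the sets, can be sketched as follows. Tokenization here is naive whitespace splitting and the similarity measure is the Jaccard coefficient; both are common choices of ours for illustration, not necessarily the paper's.

        # N-gram representation of source code + Jaccard similarity.
        def ngrams(source: str, n: int = 3) -> set[tuple[str, ...]]:
            tokens = source.split()
            return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

        def jaccard(a: set, b: set) -> float:
            return len(a & b) / len(a | b) if a | b else 0.0

        original = "Dim total As Integer For i = 1 To 10 total = total + i Next i"
        suspect  = "Dim sum As Integer For j = 1 To 10 sum = sum + j Next j"

        score = jaccard(ngrams(original), ngrams(suspect))
        print(f"3-gram Jaccard similarity: {score:.2f}")   # identifier renaming lowers it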

  11. MINOS: a nodal method; approximation by mixed dual finite elements in the Cronos code; La methode nodale de Cronos: MINOS, approximation par des elements mixtes duaux

    Energy Technology Data Exchange (ETDEWEB)

    Lautard, J.J.

    1994-05-01

    This paper presents a new extension of the mixed dual finite element approximation of the diffusion equation in rectangular geometry. The mixed dual formulation has been extended in order to take discontinuity conditions into account. The iterative method is based on an alternating direction method which uses the current as the unknown. This method is fully parallelizable and has very fast convergence properties. Some results for a 3D calculation on the CRAY computer are presented. (author). 6 refs., 8 figs., 4 tabs.

  12. Parallelization methods in 3D fully electromagnetic code NEPTUNE

    Institute of Scientific and Technical Information of China (English)

    陈军; 莫则尧; 董烨; 杨温渊; 董志伟

    2011-01-01

    NEPTUNE is a three-dimensional, fully parallel electromagnetic code for solving electromagnetic problems in high power microwave (HPM) devices with complex geometry. This paper introduces the following three parallelization methods used in the code. For massive computations, a "block-patch" two-level parallel domain decomposition strategy scales the computation to thousands of processor cores. Based on the geometry information, the mesh is reconfigured using adaptive technology to remove invalid grid cells, sharply decreasing the storage requirements and parallel execution time. On the basis of the classical Boris successive over-relaxation (SOR) iteration method, a parallel Poisson solver on irregular domains is provided using red-black ordering and geometry constraints. With these methods, NEPTUNE achieves 51.8% parallel efficiency on 1,024 processor cores when simulating MILO devices.
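
    The red-black ordering mentioned above is what makes SOR parallelizable: points of one color depend only on points of the other color, so each half-sweep can be updated concurrently. Below is a serial Python sketch of the idea on a simple square domain; grid size, relaxation factor and sweep count are arbitrary choices of ours, and NEPTUNE's irregular-domain and geometry-constraint handling is not shown.

        # Red-black SOR for -laplace(u) = 1 on the unit square, u = 0 on the boundary.
        import numpy as np

        n, h, omega = 63, 1.0 / 64, 1.9
        u = np.zeros((n + 2, n + 2))          # interior unknowns plus zero boundary
        f = np.ones((n + 2, n + 2))           # right-hand side

        ii, jj = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")
        for color in [0, 1] * 200:            # 200 red/black sweep pairs
            mask = (ii + jj) % 2 == color     # update only one color per half-sweep
            gs = 0.25 * (u[ii + 1, jj] + u[ii - 1, jj] + u[ii, jj + 1]
                         + u[ii, jj - 1] + h * h * f[ii, jj])
            u[ii, jj] = np.where(mask, (1 - omega) * u[ii, jj] + omega * gs, u[ii, jj])

        print(f"u(1/2, 1/2) ~ {u[32, 32]:.5f}   (exact value is about 0.07367)")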

  13. Steganography Method for Advanced Audio Coding

    Institute of Scientific and Technical Information of China (English)

    王昱洁; 郭立; 王翠平

    2011-01-01

    Based on a study of the AAC coding standard, an information hiding method operating on the small-value region of the quantized MDCT coefficients is proposed, which can embed a large amount of secret information in AAC files. The algorithm first partially decodes the cover AAC file to locate the small-value region according to the codebooks, then obtains a group of quantized coefficients from each codeword, modifies the last quantized coefficient of each group according to fixed rules, and finally re-encodes to obtain the stego AAC file. The secret information can be extracted blindly, and the computational complexity is low. Experimental results show that the algorithm achieves a high embedding capacity and good imperceptibility, and offers some resistance to steganalysis: it withstands common LSB steganalysis methods as well as the additive-noise-based steganalysis method proposed by Harmsen.
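
    A toy version of the embedding rule, hiding one bit per coefficient group by adjusting the parity of the group's last quantized coefficient, is sketched below. Plain integer arrays stand in for AAC's small-value-region MDCT coefficients; no actual AAC bitstream handling is shown, and the parity rule is our reading of "modify the last coefficient according to rules", not the paper's exact rule.

        # Parity-based embedding in the last coefficient of each group (toy model).
        import numpy as np

        def embed(groups: np.ndarray, bits: list[int]) -> np.ndarray:
            out = groups.copy()
            for g, bit in zip(out, bits):
                if g[-1] % 2 != bit:                   # force parity of last coeff = bit
                    g[-1] += 1 if g[-1] <= 0 else -1   # minimal-magnitude change
            return out

        def extract(groups: np.ndarray) -> list[int]:
            return [int(g[-1] % 2) for g in groups]

        rng = np.random.default_rng(0)
        groups = rng.integers(-2, 3, size=(8, 4))      # 8 groups of 4 small coefficients
        message = [1, 0, 1, 1, 0, 0, 1, 0]

        stego = embed(groups, message)
        assert extract(stego) == message               # blind extraction
        print("recovered:", extract(stego))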

  14. Requirements of a Better Secure Program Coding

    Directory of Open Access Journals (Sweden)

    Marius POPA

    2012-01-01

    Full Text Available Secure program coding refers to how to manage the risks caused by security breaches in program source code. The paper reviews the best practices to follow during the software development life cycle for secure software assurance, the methods and techniques used for secure coding assurance, the best-known and most common vulnerabilities caused by a bad coding process, and how the security risks are managed and mitigated. As a tool for better secure program coding, the code review process is presented, together with objective measures for code review assurance and estimation of the effort for code improvement.

  15. NOVEL BIPHASE CODE -INTEGRATED SIDELOBE SUPPRESSION CODE

    Institute of Scientific and Technical Information of China (English)

    Wang Feixue; Ou Gang; Zhuang Zhaowen

    2004-01-01

    A novel kind of binary phase code, named the sidelobe suppression code, is proposed in this paper. It is defined as the code whose corresponding optimal sidelobe suppression filter output has the minimum sidelobes. It is shown that there do exist sidelobe suppression codes better than the conventional optimal codes, the Barker codes. For example, the sidelobe suppression code of length 11 with a filter of length 39 has a sidelobe level up to 17 dB better than that of the Barker code with the same code length and filter length.
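
    The objects discussed above are easy to reproduce for the Barker code: its matched-filter autocorrelation has peak sidelobe 1, and a longer mismatched filter can push the sidelobes further down. The sketch below designs such a length-39 filter by least squares, which minimizes integrated (not peak) sidelobes, so it only approximates the optimal filter referred to in the abstract.

        # Autocorrelation of Barker-11 and a least-squares mismatched filter of length 39.
        import numpy as np

        barker11 = np.array([1, 1, 1, -1, -1, -1, 1, -1, -1, 1, -1], dtype=float)

        acf = np.correlate(barker11, barker11, mode="full")
        print("matched-filter peak sidelobe:", int(np.max(np.abs(np.delete(acf, 10)))))

        # Build the convolution matrix C so that C @ h = conv(code, h), then ask the
        # filter h to reproduce a delta: unit peak at the center lag, zero elsewhere.
        n, m = len(barker11), 39
        C = np.zeros((n + m - 1, m))
        for j in range(m):
            C[j:j + n, j] = barker11
        d = np.zeros(n + m - 1)
        d[(n + m - 1) // 2] = 1.0

        h, *_ = np.linalg.lstsq(C, d, rcond=None)
        out = C @ h
        peak = np.abs(out).max()
        sidelobe = np.abs(np.delete(out, np.argmax(np.abs(out)))).max()
        print(f"mismatched-filter peak-sidelobe ratio: {20*np.log10(sidelobe/peak):.1f} dB")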

  16. A class of Sudan-decodable codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2000-01-01

    In this article, Sudan's algorithm is modified into an efficient method to list-decode a class of codes which can be seen as a generalization of Reed-Solomon codes. The algorithm is specialized into a very efficient method for unique decoding. The code construction can be generalized based on algebraic-geometry codes, and the decoding algorithms are generalized accordingly. Comparisons with Reed-Solomon and Hermitian codes are made.

  17. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product-type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing.

  18. FY08 LDRD Final Report A New Method for Wave Propagation in Elastic Media LDRD Project Tracking Code: 05-ERD-079

    Energy Technology Data Exchange (ETDEWEB)

    Petersson, A

    2009-01-29

    The LDRD project 'A New Method for Wave Propagation in Elastic Media' developed several improvements to the traditional finite difference technique for seismic wave propagation, including a summation-by-parts discretization which is provably stable for arbitrary heterogeneous materials, an accurate treatment of non-planar topography, local mesh refinement, and stable outflow boundary conditions. This project also implemented these techniques in a parallel open source computer code called WPP, and participated in several seismic modeling efforts to simulate ground motion due to earthquakes in Northern California. This research has been documented in six individual publications which are summarized in this report. Of these publications, four are published refereed journal articles, one is an accepted refereed journal article which has not yet been published, and one is a non-refereed software manual. The report concludes with a discussion of future research directions and exit plan.

  19. Superimposed Code Theorectic Analysis of DNA Codes and DNA Computing

    Science.gov (United States)

    2010-03-01

    A. Macula, et al., "Random Coding Bounds for DNA Codes Based on Fibonacci Ensembles of DNA Sequences", 2008 IEEE Proceedings of the International Symposium on Information Theory, pp. 2292-5. ... a combinatorial method of bio-memory design and detection that encodes item or process information as numerical sequences represented in DNA. ComDMem is a

  20. Computational performance of SequenceL coding of the lattice Boltzmann method for multi-particle flow simulations

    Science.gov (United States)

    Başağaoğlu, Hakan; Blount, Justin; Blount, Jarred; Nelson, Bryant; Succi, Sauro; Westhart, Phil M.; Harwell, John R.

    2017-04-01

    This paper reports, for the first time, the computational performance of SequenceL for mesoscale simulations of large numbers of particles in a microfluidic device via the lattice Boltzmann method. The performance of the SequenceL simulations was assessed against optimized serial and parallelized (via OpenMP directives) FORTRAN90 simulations. OpenMP directives were not included in the inter-particle and particle-wall repulsive (steric) interaction calculations, owing to difficulties arising from dependencies between consecutive iterations of the do-loops. The SequenceL simulations, on the other hand, relied on built-in automatic parallelism. Under these conditions, numerical simulations revealed that the parallelized FORTRAN90 outran SequenceL by a factor of 2.5 or more when the number of particles was 100 or less. SequenceL, however, outran the parallelized FORTRAN90 by a factor of 1.3 when the number of particles was 300. Our results show that when the number of particles increased by 30-fold, the computational time of the SequenceL simulations increased linearly by a factor of 1.5, compared to a 3.2-fold increase in the serial and a 7.7-fold increase in the parallelized FORTRAN90 simulations. Considering SequenceL's efficient built-in parallelism, which led to a relatively small increase in computational time with an increased number of particles, it could be a promising programming language for computationally efficient mesoscale simulations of large numbers of particles in microfluidic experiments.