Huffman-based code compression techniques for embedded processors
Bonny, Mohamed Talal
2010-09-01
The size of embedded software is increasing at a rapid pace. It is often challenging and time-consuming to fit the required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique that is based on Huffman Coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique, which improves the final compression ratio by taking advantage of both. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman Coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of reencoding unused bits (we call them reencodable bits) in the instruction format for a specific application to improve the compression ratio. Reencoding those bits can reduce the size of decoding tables by up to 40%. Using this technique, we improve the final compression ratios in comparison to the first technique to 46% and 45% for ARM and MIPS, respectively (including all incurred overhead). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and we have applied each technique to two major embedded processor architectures, namely ARM and MIPS.
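To make the Huffman stage concrete, here is a minimal Python sketch that builds a Huffman code over byte-level "patterns"; the byte alphabet and the toy instruction string are illustrative assumptions, not the paper's splitting or re-encoding method:

```python
# Minimal Huffman code construction over byte-level "patterns" (illustrative;
# the paper's instruction splitting and re-encoding steps are not shown).
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict[int, str]:
    """Map each symbol to a prefix-free codeword ('0'/'1' string)."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate single-symbol source
        return {next(iter(freq)): "0"}
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)                          # tie-breaker keeps tuples comparable
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

data = b"add r0,r0,#1; add r1,r1,#1; sub r2,r2,#4"
code = huffman_code(data)
bits = sum(len(code[s]) for s in data)
print(len(code), bits, 8 * len(data))        # table entries, coded vs. raw bits
```

Each distinct symbol contributes one decoding-table entry, which is why reducing the number of distinct patterns shrinks the table that the paper targets.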
File compression and encryption based on LLS and arithmetic coding
Yu, Changzhi; Li, Hengjian; Wang, Xiyu
2018-03-01
We propose a file compression model based on arithmetic coding. First, the original symbols to be encoded are input to the encoder one by one; we produce a set of chaotic sequences using the logistic and sine chaos system (LLS), and the values of these chaotic sequences randomly modify the upper and lower limits of the current symbol's probability interval. To achieve the purpose of encryption, we modify the upper and lower limits of all character probabilities when encoding each symbol. Experimental results show that the proposed model achieves the purpose of data encryption while achieving almost the same compression efficiency as arithmetic coding.
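A minimal sketch of the chaotic-sequence half of the idea, assuming only the logistic map (the paper's LLS combines logistic and sine maps) and hypothetical key parameters x0 and r:

```python
# Keystream from the logistic map x_{n+1} = r * x_n * (1 - x_n). Only the
# logistic half of LLS is sketched; the key parameters x0 and r are hypothetical.
def logistic_stream(x0: float, r: float, n: int, burn_in: int = 1000) -> list[float]:
    x = x0
    for _ in range(burn_in):          # discard transient, decorrelate from key
        x = r * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

stream = logistic_stream(x0=0.3141592, r=3.99, n=5)
print(stream)   # values in (0,1), used to perturb the coder's interval bounds
```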
Lossless Image Compression Based on Multiple-Tables Arithmetic Coding
Directory of Open Access Journals (Sweden)
Rung-Ching Chen
2009-01-01
This paper presents a lossless image compression method based on multiple-tables arithmetic coding (MTAC) to encode a gray-level image f. First, the MTAC method employs a median edge detector (MED) to reduce the entropy rate of f, exploiting the fact that the gray levels of two adjacent pixels in an image are usually similar. A base-switching transformation approach is then used to reduce the spatial redundancy of the image, since the gray levels of some pixels in an image are more common than those of others. Finally, arithmetic encoding is applied to reduce the coding redundancy of the image. To promote high performance of the arithmetic encoding, the MTAC method first classifies the data and then encodes each cluster of data using a distinct code table. The experimental results show that, in most cases, the MTAC method uses storage space more efficiently than lossless JPEG2000 does.
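The MED predictor referenced above is the standard one from LOCO-I/JPEG-LS; a minimal sketch (a = left, b = above, c = upper-left neighbor):

```python
# Median edge detector (MED) predictor, as in LOCO-I/JPEG-LS:
# a = left neighbor, b = above neighbor, c = upper-left neighbor.
def med_predict(a: int, b: int, c: int) -> int:
    if c >= max(a, b):
        return min(a, b)      # edge above or to the left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c          # smooth region: planar prediction

# Residuals (pixel minus prediction) cluster near zero, lowering the entropy
# rate that the arithmetic coder then exploits.
print(med_predict(100, 102, 101))  # 101 (planar case)
print(med_predict(100, 180, 60))   # 180 (edge case)
```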
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The choice of coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
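A threshold-driven coder selection can be sketched as follows; the 8x8 block size, quantizer step, and threshold are illustrative assumptions, not the paper's parameters:

```python
# Threshold-driven coder selection in miniature: quantize an 8x8 block in the
# DCT domain; if the distortion exceeds a threshold, a real MBC coder would
# switch to (or recurse into) a finer coder.
import numpy as np

N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] = np.sqrt(1 / N)                    # orthonormal DCT-II matrix

def code_block(block, step=24.0, threshold=30.0):
    coeffs = C @ block @ C.T                # 2D DCT
    rec = C.T @ (np.round(coeffs / step) * step) @ C
    mse = ((rec - block) ** 2).mean()
    return mse, mse > threshold             # True -> escalate to a finer coder

rng = np.random.default_rng(1)
flat = np.full((N, N), 128.0)                    # smooth block
busy = 128 + 60 * rng.standard_normal((N, N))    # textured block
print(code_block(flat), code_block(busy))        # smooth passes, texture escalates
```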
Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding
Directory of Open Access Journals (Sweden)
Yongjian Nian
2013-01-01
A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for the distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which lets the proposed algorithm achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced to achieve low bit rates. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.
Multiple Description Coding with Feedback Based Network Compression
DEFF Research Database (Denmark)
Sørensen, Jesper Hemming; Østergaard, Jan; Popovski, Petar
2010-01-01
and an intermediate node, respectively. A trade-off exists between reducing the delay of the feedback by adapting in the vicinity of the receiver and increasing the gain from compression by adapting close to the source. The analysis shows that adaptation in the network provides a better trade-off than adaptation...
Adaptive bit plane quadtree-based block truncation coding for image compression
Li, Shenda; Wang, Jin; Zhu, Qing
2018-04-01
Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low-bit-rate compression, at the cost of lower decoded image quality, especially for images with rich texture. To solve this problem, this paper proposes a quadtree-based block truncation coding algorithm combined with adaptive bit-plane transmission. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is used to optimize the BTC, depending on the MSE loss when the block is encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
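The AMBTC primitive that the quadtree refines encodes each block as a bitmap plus two means; a minimal numpy sketch, with an illustrative 4x4 block:

```python
# AMBTC of a single block: keep a 1-bit-per-pixel bitmap plus the means of the
# pixels above and below the block mean. The 4x4 block is illustrative.
import numpy as np

def ambtc_encode(block):
    m = block.mean()
    bitmap = block >= m
    hi = block[bitmap].mean() if bitmap.any() else m
    lo = block[~bitmap].mean() if (~bitmap).any() else m
    return bitmap, float(lo), float(hi)

def ambtc_decode(bitmap, lo, hi):
    return np.where(bitmap, hi, lo)

block = np.array([[12, 14, 200, 210],
                  [10, 13, 205, 208],
                  [11, 12, 198, 204],
                  [ 9, 15, 202, 206]], dtype=float)
bitmap, lo, hi = ambtc_encode(block)
mse = ((ambtc_decode(bitmap, lo, hi) - block) ** 2).mean()
print(lo, hi, mse)   # the MSE is the quantity that drives quadtree refinement
```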
Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique
Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi
Reducing the power dissipation of LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low-power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) an intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated, shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques enables the decoder to reduce the power dissipation while keeping the decoding throughput. Simulation results show that the proposed architecture improves the power efficiency by up to 52% and 18% compared to decoders based on the overlapped schedule and the rapid-convergence schedule without the proposed techniques, respectively.
International Nuclear Information System (INIS)
Chouakri, S A; Djaafri, O; Taleb-Ahmed, A
2013-01-01
We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at transmission over a telecommunication channel. The proposed ECG compression algorithm is built on the wavelet transform, leading to low/high-frequency component separation; high-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal; and a linear predictive coding filter applied to the wavelet coefficients, producing a lower-variance signal. The latter is coded using Huffman encoding, yielding an optimal coding length in terms of the average number of bits per sample. At the receiver end, assuming an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, the inverse linear predictive coding filter, and the inverse discrete wavelet transform, leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database, including different cardiac anomalies as well as normal ECG signals. The obtained results are evaluated in terms of compression ratio and mean square error, which are around 1:8 and 7%, respectively. Beyond the numerical evaluation, visual inspection demonstrates the high quality of the ECG signal restitution, with the different ECG waves recovered correctly.
A high capacity text steganography scheme based on LZW compression and color coding
Directory of Open Access Journals (Sweden)
Aruna Malik
2017-02-01
In this paper, capacity and security issues of text steganography are addressed by employing the LZW compression technique and a color-coding-based approach. The proposed technique uses the forward mail platform to hide the secret data. The algorithm first compresses the secret data and then hides the compressed secret data in the email addresses and in the cover message of the email. The secret data bits are embedded in the message (or cover text) by coloring it using a color-coding table. Experimental results show that the proposed method not only produces a high embedding capacity but also reduces computational complexity. Moreover, the security of the proposed method is significantly improved by employing stego keys. The superiority of the proposed method has been experimentally verified by comparison with recently developed existing techniques.
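The LZW stage is the textbook algorithm; a minimal sketch (the mapping of the code stream onto colors and email addresses is not shown):

```python
# Textbook LZW compression, the first stage of the scheme.
def lzw_compress(text: str) -> list[int]:
    table = {chr(i): i for i in range(256)}
    w, out, nxt = "", [], 256
    for ch in text:
        if w + ch in table:
            w += ch                    # extend the current phrase
        else:
            out.append(table[w])       # emit code for the longest known phrase
            table[w + ch] = nxt        # learn the new phrase
            nxt += 1
            w = ch
    if w:
        out.append(table[w])
    return out

text = "TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(text)
print(len(codes), "codes for", len(text), "characters")
```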
Compressive Sampling based Image Coding for Resource-deficient Visual Communication.
Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen
2016-04-14
In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled, pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered multiple descriptions of the original image, so the proposed scheme also has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit rates.
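A minimal numpy sketch of the measurement step, with illustrative kernel size, seed, and stand-in image (not the paper's parameters): filter with a random binary kernel, then polyphase-downsample.

```python
# Local random measurements: convolve with a small random binary kernel, then
# polyphase-downsample by 2. Kernel size, seed, and image are illustrative.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8)).astype(float)  # stand-in image
kernel = rng.integers(0, 2, size=(3, 3)).astype(float)   # random binary taps
kernel[1, 1] = 1.0                                       # ensure a nonzero tap
kernel /= kernel.sum()                                   # preserve pixel range

pad = np.pad(image, 1)                                   # zero-padded borders
filtered = sum(pad[i:i + 8, j:j + 8] * kernel[i, j]
               for i in range(3) for j in range(3))
measurements = filtered[::2, ::2]    # down-sampled image of local random sums
print(measurements.shape)            # (4, 4): still an ordinary image for a codec
```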
Kim, Dong-Sun; Kwon, Jin-San
2014-09-18
Research on real-time health systems has received great attention in recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared-multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for the low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross-correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to a single-channel biosignal lossless data compressor.
Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan
2015-10-01
In this paper, we propose a high-performance optical encryption (OE) scheme based on computational ghost imaging (GI) with QR code and compressive sensing (CS) techniques, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as the secret key and are shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image with GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. With the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
Directory of Open Access Journals (Sweden)
Tinghua Zhang
2018-02-01
Coded Aperture Compressive Temporal Imaging (CACTI) can afford low-cost temporal super-resolution (SR), but limits are imposed on reconstruction quality by noise and the compression ratio. To utilize inter-frame redundant information from multiple observations and sparsity in multi-transform domains, a robust reconstruction approach for CACTI based on a maximum a posteriori probability and Markov random field (MAP-MRF) model is proposed. The approach adopts a weighted 3D neighbor system (WNS) and the coordinate descent method to perform joint estimation of model parameters, achieving robust super-resolution reconstruction. The proposed multi-reconstruction algorithm considers both total variation (TV) and the ℓ2,1 norm in the wavelet domain to address the minimization problem of compressive sensing, and solves it using an accelerated generalized alternating projection algorithm. The weighting coefficients for the different regularizations and frames are resolved from the motion characteristics of the pixels. The proposed approach can provide high visual quality in the foreground and background of a scene simultaneously and enhance the fidelity of the reconstruction results. Simulation results verify the efficacy of the new optimization framework and the proposed reconstruction approach.
A Test Data Compression Scheme Based on Irrational Numbers Stored Coding
Directory of Open Access Journals (Sweden)
Hai-feng Wu
2014-01-01
The problem of testing has already become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS) coding, is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting floating-point numbers to irrational numbers is given. Experimental results on several ISCAS 89 benchmarks show that the compression achieved by the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.
Subband Coding Methods for Seismic Data Compression
Kiely, A.; Pollara, F.
1995-01-01
This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
Optical identity authentication technique based on compressive ghost imaging with QR code
Wenjie, Zhan; Leihong, Zhang; Xi, Zeng; Yi, Kang
2018-04-01
With the rapid development of computer technology, information security has attracted more and more attention. It is not only related to the information and property security of individuals and enterprises, but also to the security and social stability of a country. Identity authentication is the first line of defense in information security. In authentication systems, response time and security are the most important factors. An optical authentication technology based on compressive ghost imaging with QR codes is proposed in this paper. The scheme can perform authentication with a small number of samples; therefore, the response time of the algorithm is short. At the same time, the algorithm can resist certain noise attacks, so it offers good security.
Development of compressible density-based steam explosion simulation code ESE-2
International Nuclear Information System (INIS)
Leskovar, M.
2004-01-01
A steam explosion is a fuel-coolant interaction process by which the energy of the corium is transferred to water on a time scale smaller than the time scale for system pressure relief, inducing dynamic loading of the surrounding structures. A strong enough steam explosion in a nuclear power plant could jeopardize the containment integrity and so lead to a direct release of radioactive material to the environment. To help answer open questions regarding steam explosion understanding and modelling, the steam explosion simulation code ESE-2 is being developed. In contrast to the previously developed simulation code ESE-1, where the multiphase flow equations are solved with pressure-based numerical methods (best suited for incompressible flow), ESE-2 uses density-based numerical methods (best suited for compressible flow). Therefore ESE-2 will enable an accurate treatment of the whole steam explosion process, which consists of the premixing, triggering, propagation and expansion phases. The paper describes the basic characteristics of the mathematical model and the numerical solution procedure in ESE-2. The essence of the numerical treatment is that the convective terms in the multiphase flow equations are calculated with the AUSM+ scheme, which is very time-efficient since no field-by-field wave decomposition is needed, using second-order accurate discretization. (author)
Entropy, Coding and Data Compression
Indian Academy of Sciences (India)
S Natarajan. General Article, Resonance – Journal of Science Education, Volume 6, Issue 9, September 2001, pp. 35-45. Permanent link: https://www.ias.ac.in/article/fulltext/reso/006/09/0035-0045
Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder
Glover, Daniel R. (Inventor)
1995-01-01
Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands. Separating the video into subbands allows transmission at low data rates. Once the data is separated into these subbands it can be coded and then decoded by statistical coders such as the Lempel-Ziv based coder.
Maglogiannis, Ilias; Doukas, Charalampos; Kormentzas, George; Pliakas, Thomas
2009-07-01
Most of the commercial medical image viewers do not provide scalability in image compression and/or region of interest (ROI) encoding/decoding. Furthermore, these viewers do not take into consideration the special requirements and needs of a heterogeneous radio setting that is constituted by different access technologies [e.g., general packet radio services (GPRS)/ universal mobile telecommunications system (UMTS), wireless local area network (WLAN), and digital video broadcasting (DVB-H)]. This paper discusses a medical application that contains a viewer for digital imaging and communications in medicine (DICOM) images as a core module. The proposed application enables scalable wavelet-based compression, retrieval, and decompression of DICOM medical images and also supports ROI coding/decoding. Furthermore, the presented application is appropriate for use by mobile devices activating in heterogeneous radio settings. In this context, performance issues regarding the usage of the proposed application in the case of a prototype heterogeneous system setup are also discussed.
An efficient adaptive arithmetic coding image compression technology
International Nuclear Information System (INIS)
Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei
2011-01-01
This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding algorithm. The algorithm combines an adaptive probability model with predictive coding to increase the compression rate while ensuring the quality of the decoded image. Using an adaptive model for each encoded image block dynamically estimates that block's symbol probabilities, and the decoded image block can accurately recover the encoded image from the code book information. The adopted adaptive arithmetic coding algorithm greatly improves the image compression rate, and the results show that it is an effective compression technology.
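A teaching-scale sketch of adaptive arithmetic encoding (floating-point intervals, so only short inputs; production coders use integer renormalization, and the alphabet, input, and Laplace-count model below are illustrative assumptions, not the paper's exact model):

```python
# Teaching-scale adaptive arithmetic encoder: narrow [lo, hi) by each symbol's
# probability slice, updating symbol counts as we go.
from collections import Counter

def encode(symbols: str, alphabet: str) -> float:
    counts = Counter({a: 1 for a in alphabet})   # adaptive model (Laplace counts)
    lo, hi = 0.0, 1.0
    for s in symbols:
        total = sum(counts.values())
        cum = 0
        for a in alphabet:                       # cumulative count below s
            if a == s:
                break
            cum += counts[a]
        span = hi - lo
        hi = lo + span * (cum + counts[s]) / total
        lo = lo + span * cum / total
        counts[s] += 1                           # adapt after each symbol
    return (lo + hi) / 2                         # any number inside [lo, hi)

print(encode("ABRACADABRA", alphabet="ABCDR"))   # one real number = the message
```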
Characterization of coded random access with compressive sensing based multi user detection
DEFF Research Database (Denmark)
Ji, Yalei; Stefanovic, Cedomir; Bockelmann, Carsten
2014-01-01
The emergence of Machine-to-Machine (M2M) communication requires new Medium Access Control (MAC) schemes and physical (PHY) layer concepts to support a massive number of access requests. The concept of coded random access, introduced recently, greatly outperforms other random access methods...... coded random access with CS-MUD on the PHY layer and show very promising results for the resulting protocol....
Yin, Jun; Yang, Yuwang; Wang, Lei
2016-04-01
Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data-gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data-gathering schemes based on NC and CS (Compressed Data Gathering--CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that its value is known before each data-gathering epoch starts; they thus ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of a feedback CDG scheme in which the sink node adaptively queries interested nodes to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes--MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both ocean temperature datasets and a practical network deployment also prove the effectiveness of the proposed feedback CDG scheme.
Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images
Directory of Open Access Journals (Sweden)
Barni Mauro
2007-01-01
This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes and compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that remain to be solved to further improve the performance of DSC-based remote sensing systems.
Compressed Domain Packet Loss Concealment of Sinusoidally Coded Speech
DEFF Research Database (Denmark)
Rødbro, Christoffer A.; Christensen, Mads Græsbøll; Andersen, Søren Vang
2003-01-01
We consider the problem of packet loss concealment for voice over IP (VoIP). The speech signal is compressed at the transmitter using a sinusoidal coding scheme working at 8 kbit/s. At the receiver, packet loss concealment is carried out working directly on the quantized sinusoidal parameters......, based on time-scaling of the packets surrounding the missing ones. Subjective listening tests show promising results indicating the potential of sinusoidal speech coding for VoIP....
Coding Strategies and Implementations of Compressive Sensing
Tsai, Tsung-Han
This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of information about the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplex measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level, while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by hundreds of times. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or
Code compression for VLIW embedded processors
Piccinelli, Emiliano; Sannino, Roberto
2004-04-01
The implementation of processors for embedded systems involves various issues: the main constraints are cost, power dissipation and die area. On the other hand, new terminals perform functions that require more computational flexibility and effort. Long code streams must be loaded into memories, which are expensive and power-consuming, to run on DSPs or CPUs. To overcome this issue, the "SlimCode" proprietary algorithm presented in this paper (patent-pending technology) can reduce the size of the program memory. It runs offline and works directly on the binary code the compiler generates, compressing it and creating a new binary file, about 40% smaller than the original one, to be loaded into the program memory of the processor. The decompression unit is a small ASIC, placed between the memory controller and the system bus of the processor, keeping the internal CPU architecture unchanged: this implies that the methodology is completely transparent to the core. We present comparisons against the state-of-the-art IBM CodePack algorithm, along with an architectural implementation for the ST200 VLIW family core.
A seismic data compression system using subband coding
Kiely, A. B.; Pollara, F.
1995-01-01
This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
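The simplest instance of the subband decorrelation stage is a one-level Haar filter bank; a minimal sketch verifying perfect reconstruction before quantization:

```python
# One-level Haar analysis/synthesis: the simplest subband decorrelator, shown
# here to reconstruct exactly before any quantization is applied.
import numpy as np

def haar_analysis(x):
    x = np.asarray(x, dtype=float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2)   # coarse approximation band
    high = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail band (near zero when smooth)
    return low, high

def haar_synthesis(low, high):
    x = np.empty(2 * len(low))
    x[0::2] = (low + high) / np.sqrt(2)
    x[1::2] = (low - high) / np.sqrt(2)
    return x

x = np.array([4., 6., 10., 12., 8., 6., 5., 5.])
low, high = haar_analysis(x)
print(np.allclose(haar_synthesis(low, high), x))  # True: lossless before quantization
```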
Telemetry advances in data compression and channel coding
Miller, Warner H.; Morakis, James C.; Yeh, Pen-Shu
1990-01-01
Addressed in this paper is the dependence of telecommunication channel, forward error correcting coding and source data compression coding on integrated circuit technology. Emphasis is placed on real time high speed Reed Solomon (RS) decoding using full custom VLSI technology. Performance curves of NASA's standard channel coder and a proposed standard lossless data compression coder are presented.
Bit-Wise Arithmetic Coding For Compression Of Data
Kiely, Aaron
1996-01-01
Bit-wise arithmetic coding is data-compression scheme intended especially for use with uniformly quantized data from source with Gaussian, Laplacian, or similar probability distribution function. Code words of fixed length, and bits treated as being independent. Scheme serves as means of progressive transmission or of overcoming buffer-overflow or rate constraint limitations sometimes arising when data compression used.
Coding For Compression Of Low-Entropy Data
Yeh, Pen-Shu
1994-01-01
Improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from low-information-content source. Method of coding implemented in relatively simple, high-speed arithmetic and logic circuits. Also increases coding efficiency beyond that of established Huffman coding method in that average number of bits per code symbol can be less than 1, which is the lower bound for Huffman code.
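A worked example of why sub-1-bit coding matters: the entropy of a skewed binary source sits far below the 1 bit/symbol floor of any per-symbol Huffman code.

```python
# Entropy of a skewed binary source: well below 1 bit/symbol, the floor for
# per-symbol Huffman coding, which is why run-length or arithmetic-style
# methods win on low-entropy data. p = 0.95 is an illustrative choice.
from math import log2

p = 0.95                                  # probability of the common symbol
H = -p * log2(p) - (1 - p) * log2(1 - p)
print(f"H = {H:.3f} bits/symbol")         # ~0.286 << 1.0
```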
Bit-wise arithmetic coding for data compression
Kiely, A. B.
1994-01-01
This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
Rice, R. F.; Hilbert, E. E. (Inventor)
1976-01-01
A space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.
Video Coding Technique using MPEG Compression Standards
African Journals Online (AJOL)
Akorede
The two-dimensional discrete cosine transform (2-D DCT) is an integral part of video and image compression ... solution for the optimum trade-off by applying rate-distortion theory has been ... Int. J. of the Computer, the Internet and Management.
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
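A miniature illustration of syndrome-source-coding, using the Hamming (7,4) parity-check matrix as an assumed example code: the sparse source word is compressed to its syndrome, and the decoder returns the minimum-weight pre-image.

```python
# Syndrome-source-coding in miniature: a sparse binary word x is compressed to
# its syndrome s = Hx (mod 2); decoding returns the minimum-weight pre-image.
# H is the Hamming (7,4) parity-check matrix, used here as an assumed example.
import numpy as np
from itertools import product

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def compress(x):
    return H @ x % 2                         # 7 source bits -> 3 syndrome bits

def decompress(s):
    candidates = (np.array(x) for x in product((0, 1), repeat=7))
    return min((x for x in candidates if np.array_equal(H @ x % 2, s)), key=sum)

x = np.array([0, 0, 0, 0, 1, 0, 0])          # sparse "error pattern" source word
s = compress(x)
print(s, decompress(s), np.array_equal(decompress(s), x))  # lossless for weight <= 1
```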
Compression and channel-coding algorithms for high-definition television signals
Alparone, Luciano; Benelli, Giuliano; Fabbri, A. F.
1990-09-01
In this paper, results of investigations into the effects of channel errors on the transmission of images compressed by techniques based on the Discrete Cosine Transform (DCT) and Vector Quantization (VQ) are presented. Since compressed images are heavily degraded by noise in the transmission channel, more seriously in the case of VQ-coded images, theoretical studies and simulations are presented in order to define and evaluate this degradation. Some channel coding schemes are proposed in order to protect the information during transmission: Hamming codes (7,4), (15,11) and (31,26) have been used for DCT-compressed images, and more powerful codes such as the Golay (23,12) code for VQ-compressed images. The performance attainable with soft-decoding techniques is also evaluated; better-quality images have been obtained than with classical hard-decoding techniques. All tests have been carried out simulating the transmission of a digital image from an HDTV signal over an AWGN channel with PSK modulation.
Filtering, Coding, and Compression with Malvar Wavelets
1993-12-01
speech coding techniques being investigated by the military (38). Imagery: Space imagery often requires adaptive restoration to deblur out-of-focus... and blurred image, find an estimate of the ideal image using a priori information about the blur, noise, and the ideal image" (12). The research for... recording can be described as the original signal convolved with impulses, which appear as echoes in the seismic event. The term deconvolution indicates
Bi-level image compression with tree coding
DEFF Research Database (Denmark)
Martins, Bo; Forchhammer, Søren
1996-01-01
Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders...... are constructed by this principle. A multi-pass free tree coding scheme produces superior compression results for all test images. A multi-pass fast free template coding scheme produces much better results than JBIG for difficult images, such as halftonings. Rissanen's algorithm `Context' is presented in a new...
Wavelet-based audio embedding and audio/video compression
Mendenhall, Michael J.; Claypoole, Roger L., Jr.
2001-12-01
Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
Biferale, L.; Mantovani, F.; Pivanti, M.; Pozzati, F.; Sbragaglia, M.; Schifano, S.F.; Toschi, F.; Tripiccione, R.
2011-01-01
We develop a Lattice Boltzmann code for computational fluid-dynamics and optimize it for massively parallel systems based on multi-core processors. Our code describes 2D multi-phase compressible flows. We analyze the performance bottlenecks that we find as we gradually expose a larger fraction of
Development of 1D Liner Compression Code for IDL
Shimazu, Akihisa; Slough, John; Pancotti, Anthony
2015-11-01
A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed at each time step in sequence to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table lookup approach. The commercial low-frequency electromagnetic field solver ANSYS Maxwell 3D is used to solve the magnetic field profile for the static liner condition at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with results from the commercial explicit dynamics solver ANSYS Explicit Dynamics and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.
JPEG2000 COMPRESSION CODING USING HUMAN VISUAL SYSTEM MODEL
Institute of Scientific and Technical Information of China (English)
Xiao Jiang; Wu Chengke
2005-01-01
In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new scheme of visual optimization is introduced that modifies the rate-distortion slope. The novelty is that visual weighting is not performed by lifting the coefficients in the wavelet domain, but is implemented through code-stream organization. It retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution progressiveness, good robustness against bit-error spread, and compatibility with lossless compression. Performing better than other methods, it retains the shortest standard code stream and decompression time, and offers VIsual Progressive (VIP) coding.
Peeling Decoding of LDPC Codes with Applications in Compressed Sensing
Directory of Open Access Journals (Sweden)
Weijun Zeng
2016-01-01
We present a new approach for the analysis of iterative peeling-decoding recovery algorithms in the context of Low-Density Parity-Check (LDPC) codes and compressed sensing. The iterative recovery algorithm is particularly interesting for its low measurement cost and low computational complexity. The asymptotic analysis can track the evolution of the fraction of unrecovered signal elements in each iteration, similar to the well-known density evolution analysis for LDPC decoding algorithms. Our analysis shows that there exists a threshold on the density factor: below this threshold the recovery algorithm succeeds; otherwise it fails. Simulation results are provided to verify the agreement between the proposed asymptotic analysis and the recovery algorithm. Compared with existing work on peeling-decoding algorithms, which focuses on the failure probability of the recovery algorithm, our proposed approach gives an accurate evolution of performance for different measurement matrix parameters and is easy to implement. We also show that the peeling-decoding algorithm performs better than other schemes based on LDPC codes.
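A minimal sketch of peeling decoding over an illustrative sparse measurement matrix: repeatedly find a measurement that involves exactly one unknown element, solve for it, and substitute.

```python
# Peeling decoding in miniature. The sparse matrix A and the signal are
# illustrative, not from the paper.
import numpy as np

A = np.array([[0, 1, 1, 1],
              [0, 1, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)   # sparse measurement (check) matrix
x_true = np.array([3.0, 0.0, -2.0, 5.0])
y = A @ x_true                              # measurements

x = np.full(4, np.nan)                      # unrecovered elements are NaN
progress = True
while progress and np.isnan(x).any():
    progress = False
    for m in range(4):
        support = np.flatnonzero(A[m])
        live = [j for j in support if np.isnan(x[j])]
        if len(live) == 1:                  # degree-1 measurement: peel it
            j = live[0]
            x[j] = (y[m] - sum(A[m, k] * x[k] for k in support if k != j)) / A[m, j]
            progress = True

print(x, np.allclose(x, x_true))            # fully peeled -> True
```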
Development of Ultrasonic Pulse Compression Using Golay Codes
International Nuclear Information System (INIS)
Kim, Young H.; Kim, Young Gil; Jeong, Peter
1994-01-01
Conventional ultrasonic flaw detection systems use a large-amplitude narrow pulse to excite a transducer. However, these systems are limited in pulse energy: an excessively large amplitude causes dielectric breakdown of the transducer, and an excessively long pulse decreases the resolution. Using pulse compression, a long pseudorandom signal can be used without sacrificing resolution, by signal correlation. In the present work, the pulse compression technique was implemented in an ultrasonic system. A Golay code was used as the pseudorandom signal, since the pair sum of its autocorrelations has no sidelobes. The equivalent input pulse of the Golay code was derived to analyze the pulse compression system. Throughout the experiments, the pulse compression technique demonstrated improved SNR (signal-to-noise ratio) by reducing the system's white noise, and the experimental data also indicated that the SNR enhancement was proportional to the square root of the code length used. The technique seems to perform particularly well with highly energy-absorbent materials such as polymers, plastics and rubbers.
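Golay complementary pairs can be generated by the standard recursive doubling construction; a minimal sketch that also checks the sidelobe-cancellation property the technique relies on:

```python
# Golay complementary pair via recursive doubling (a, b) -> (a|b, a|-b), plus a
# check that the two autocorrelations sum to zero everywhere off the main lobe.
import numpy as np

def golay_pair(n_doublings: int):
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_doublings):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def autocorr(x):
    return np.correlate(x, x, mode="full")

a, b = golay_pair(4)                   # length-16 complementary pair
s = autocorr(a) + autocorr(b)
print(np.count_nonzero(s))             # 1: only the main lobe survives
print(s[len(a) - 1])                   # main-lobe height = 2 * code length = 32
```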
SNR in ultrasonic pulse compression using Golay codes
International Nuclear Information System (INIS)
Kim, Young Hwan; Kim, Young Gil; Jeong, Peter
1994-01-01
The conventional ultrasonic flaw detection system uses a large-amplitude narrow pulse to excite a transducer; however, these systems are limited in average transmit power. An excessively large amplitude causes dielectric breakdown of the transducer, and an excessively long pulse causes a decrease in resolution. Using pulse compression, a long pseudorandom signal can be used without sacrificing resolution, by signal correlation. In the present work, the pulse compression technique was applied to an ultrasonic system. A Golay code was used as the pseudorandom signal, since the pair sum of its autocorrelations has no sidelobes. The equivalent input pulse of the Golay code was proposed to analyze the pulse compression system. In the experiments, the material type, material thickness and code length were considered. As a result, the pulse compression system considerably reduced the system's white noise, and approximately 30 dB improvement in SNR was obtained over the conventional ultrasonic system. The technique seems to perform particularly well with highly energy-absorbent materials such as polymers, plastics and rubbers.
Tools for signal compression applications to speech and audio coding
Moreau, Nicolas
2013-01-01
This book presents the tools and algorithms required to compress and decompress signals such as speech and music. These algorithms are widely used in mobile phones, DVD players, HDTV sets, etc. In a first, rather theoretical part, the book presents the standard tools used in compression systems: scalar and vector quantization, predictive quantization, transform quantization, and entropy coding; in particular, we show the consistency between these different tools. The second part explains how these tools are used in the latest speech and audio coders. The third part gives Matlab programs simulating t
A statistical–mechanical view on source coding: physical compression and data compression
International Nuclear Information System (INIS)
Merhav, Neri
2011-01-01
We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics
Wavelet-based compression of pathological images for telemedicine applications
Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun
2000-05-01
In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.
SRComp: short read sequence compression using burstsort and Elias omega coding.
Directory of Open Access Journals (Sweden)
Jeremy John Selva
Next-generation sequencing (NGS) technologies permit the rapid production of vast amounts of data at low cost. Economical data storage and transmission hence become an increasingly important challenge for NGS experiments. In this paper, we introduce a new non-reference-based read sequence compression tool called SRComp. It works by first employing a fast string-sorting algorithm called burstsort to sort read sequences in lexicographical order, and then Elias omega-based integer coding to encode the sorted read sequences. SRComp has been benchmarked on four large NGS datasets, where experimental results show that it can run 5-35 times faster than current state-of-the-art read sequence compression tools such as BEETL and SCALCE, while retaining comparable compression efficiency for large collections of short read sequences. SRComp is particularly valuable in applications where compression time is a major concern.
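Elias omega coding of positive integers is the standard recursive length-of-length construction; a minimal sketch:

```python
# Elias omega code for positive integers: recursively prefix each value with
# the binary form of its bit-length minus one, ending with a terminating '0'.
def elias_omega(n: int) -> str:
    assert n >= 1
    code = "0"                         # terminating zero
    while n > 1:
        b = bin(n)[2:]                 # binary representation, MSB first
        code = b + code                # prepend this group
        n = len(b) - 1                 # next group encodes the length - 1
    return code

for n in (1, 2, 3, 16, 100):
    print(n, elias_omega(n))
# e.g. 16 -> 10 100 10000 0 (spaces added here to show the groups)
```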
Non-US data compression and coding research. FASAC Technical Assessment Report
Energy Technology Data Exchange (ETDEWEB)
Gray, R.M.; Cohn, M.; Craver, L.W.; Gersho, A.; Lookabaugh, T.; Pollara, F.; Vetterli, M.
1993-11-01
This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years--outside or inside the United States--there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.
HVS-based medical image compression
Energy Technology Data Exchange (ETDEWEB)
Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)
2005-07-01
Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical images has grown rapidly. However, the commonly used compression methods cannot achieve satisfying results. Methods: In this paper, based on existing experiments and conclusions, the lifting-step approach is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account, and the contrast sensitivity function (CSF) is introduced as the main research issue in the human visual system (HVS); the main design points of the HVS model are then presented. On the basis of multi-resolution analysis of the wavelet transform, the paper applies the HVS, including the CSF characteristics, to the inner correlation-removing transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments are done on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT, with respect to the PSNR metric, is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality, and performs better than SPIHT in the aspects of compression ratio and coding/decoding time.
International Nuclear Information System (INIS)
Moravie, Philippe
1997-01-01
Today, in the domain of digitized satellite imagery, the demand for high-resolution images is increasing considerably. To transmit or to store such images (more than 6000 by 6000 pixels), their data volume must be reduced, which requires real-time image compression techniques. The large amount of computation required by image compression algorithms prohibits the use of common sequential processors, in favor of parallel computers. The study presented here deals with the parallelization of a very efficient image compression scheme based on three techniques: Wavelet Transform (WT), Vector Quantization (VQ) and Entropy Coding (EC). First, we studied and implemented the parallelism of each algorithm, in order to determine the architectural characteristics needed for real-time image compression. Then, we defined eight parallel architectures: 3 for the Mallat algorithm (WT), 3 for Tree-Structured Vector Quantization (VQ) and 2 for Huffman Coding (EC). As our system has to be multi-purpose, we chose 3 global architectures from among the 3x3x2 systems available. Because, for technological reasons, real-time is not reached in all cases (for all compression parameter combinations), we also defined and evaluated two algorithmic optimizations: fixed-point precision and merging entropy coding into vector quantization. As a result, we defined a new multi-purpose multi-SMIMD parallel machine able to compress digitized satellite images in real time. The question of the best-suited architecture for real-time image compression was answered by presenting 3 parallel machines, among which one is multi-purpose, embedded, and might be used for other applications on board. (author)
An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).
Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling
2018-04-17
Aimed at a low-energy consumption of Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides compressive encoder and real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity when compared with traditional methods, but it also provides good objective and subjective reconstruction qualities. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
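A minimal sketch of the two ingredients described above, with an assumed gradient-based rate rule and a pseudo-inverse standing in for the projection matrix learned under the MMSE criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

def block_measurements(block, max_m):
    """Choose the number of CS measurements per block from its gradient
    energy -- a stand-in for the paper's block-based gradient field model."""
    g = np.abs(np.diff(block, axis=0)).sum() + np.abs(np.diff(block, axis=1)).sum()
    m = max(4, min(max_m, int(max_m * g / (g + block.size))))  # heuristic rate
    phi = rng.standard_normal((m, block.size)) / np.sqrt(m)    # measurement matrix
    return phi, phi @ block.ravel()

def linear_decode(phi, y):
    """Real-time linear reconstruction: a pseudo-inverse projection standing
    in for the learned MMSE projection matrix."""
    return np.linalg.pinv(phi) @ y

block = rng.random((16, 16))
phi, y = block_measurements(block, max_m=128)
rec = linear_decode(phi, y).reshape(16, 16)   # cheap enough for an IoT node
```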
Birdsell, D.; Karra, S.; Rajaram, H.
2017-12-01
The governing equations for subsurface flow codes in deformable porous media are derived from the fluid mass balance equation. One class of these codes, which we call general subsurface flow (GSF) codes, does not explicitly track the motion of the solid porous media but does accept general constitutive relations for porosity, density, and fluid flux. Examples of GSF codes include PFLOTRAN, FEHM, STOMP, and TOUGH2. Meanwhile, analytical and numerical solutions based on the groundwater flow equation have assumed forms for porosity, density, and fluid flux. We review the derivation of the groundwater flow equation, which uses the form of Darcy's equation that accounts for the velocity of fluids with respect to solids and defines the soil matrix compressibility accordingly. We then show how GSF codes have a different governing equation if they use the form of Darcy's equation that is written only in terms of fluid velocity. The difference is seen in the porosity change, which is part of the specific storage term in the groundwater flow equation. We propose an alternative definition of soil matrix compressibility to correct for the untracked solid velocity. Simulation results show significantly less error for our new compressibility definition than the traditional compressibility when compared to analytical solutions from the groundwater literature. For example, the error in one calculation for a pumped sandstone aquifer goes from 940 to <70 Pa when the new compressibility is used. Code users and developers need to be aware of assumptions in the governing equations and constitutive relations in subsurface flow codes, and our newly-proposed compressibility function should be incorporated into GSF codes.
High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering
Directory of Open Access Journals (Sweden)
Nelson Eduardo Diaz
2015-09-01
Full Text Available The coded aperture snapshot spectral imaging system (CASSI) is an imaging architecture which senses the three-dimensional information of a scene with two-dimensional (2D) focal plane array (FPA) coded projection measurements. A reconstruction algorithm takes advantage of the sparsity of the compressive measurements to recover the underlying 3D data cube. Traditionally, CASSI uses block-unblock coded apertures (BCA) to spatially modulate the light. In CASSI, the quality of the reconstructed images depends on the design of these coded apertures and the FPA dynamic range. This work presents a new CASSI architecture based on grayscale coded apertures (GCA) which reduce FPA saturation and increase the dynamic range of the reconstructed images. The set of GCA is calculated in a real-time adaptive manner, exploiting the information from the FPA compressive measurements. Extensive simulations show the improvement attained in the quality of the reconstructed images when GCA are employed. In addition, a comparison between traditional coded apertures and GCA is carried out with respect to noise tolerance.
Image Compression Based On Wavelet, Polynomial and Quadtree
Directory of Open Access Journals (Sweden)
Bushra A. SULTAN
2011-01-01
Full Text Available In this paper a simple and fast image compression scheme is proposed. It is based on using the wavelet transform to decompose the image signal and then using polynomial approximation to prune the smooth component of the image band. The architecture of the proposed coding scheme is highly synthetic, where the error produced by the polynomial approximation, in addition to the detail sub-band data, is coded using both quantization and quadtree spatial coding. As a last stage of the encoding process, shift encoding is used as a simple and efficient entropy encoder to compress the outcomes of the previous stage. The test results indicate that the proposed system can produce a promising compression performance while preserving the image quality.
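The polynomial-approximation step can be illustrated with a least-squares surface fit to the smooth (low-pass) band; the quadratic basis below is an assumption for illustration, not the paper's exact model.

```python
import numpy as np

def poly2_fit(band):
    """Least-squares fit of a quadratic surface a+bx+cy+dx^2+exy+fy^2 to a
    smooth wavelet band; only 6 coefficients plus a residual need to be
    coded (quantization + quadtree) instead of the full band."""
    h, w = band.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    A = np.column_stack([np.ones(band.size), x.ravel(), y.ravel(),
                         (x ** 2).ravel(), (x * y).ravel(), (y ** 2).ravel()])
    coef, *_ = np.linalg.lstsq(A, band.ravel(), rcond=None)
    approx = (A @ coef).reshape(h, w)
    return coef, band - approx      # residual goes on to the next coding stage

band = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32)) * 100
coef, residual = poly2_fit(band)    # residual is small for smooth bands
```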
Atomic effect algebras with compression bases
International Nuclear Information System (INIS)
Caragheorgheopol, Dan; Tkadlec, Josef
2011-01-01
Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.
Lossless quantum data compression and variable-length coding
International Nuclear Information System (INIS)
Bostroem, Kim; Felbinger, Timo
2002-01-01
In order to compress quantum messages without loss of information it is necessary to allow the length of the encoded messages to vary. We develop a general framework for variable-length quantum messages in close analogy to the classical case and show that lossless compression is only possible if the message to be compressed is known to the sender. The lossless compression of an ensemble of messages is bounded from below by its von Neumann entropy. We show that it is possible to reduce the number of qubits passing through a quantum channel even below the von Neumann entropy by adding a classical side channel. We give an explicit communication protocol that realizes lossless and instantaneous quantum data compression and apply it to a simple example. This protocol can be used for both online quantum communication and storage of quantum data.
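The von Neumann entropy bound mentioned above is straightforward to compute numerically; a small sketch (the example ensemble is ours, not the authors'):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho): the lower bound, in qubits per letter,
    on lossless compression of the ensemble with density matrix rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # 0 log 0 = 0 by convention
    return float(-(evals * np.log2(evals)).sum())

# Example: an equal mixture of |0> and |+> (non-orthogonal states).
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ketp, ketp)
print(von_neumann_entropy(rho))           # ~0.60 qubits per message letter
```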
Single stock dynamics on high-frequency data: from a compressed coding perspective.
Directory of Open Access Journals (Sweden)
Hsieh Fushing
Full Text Available High-frequency return, trading volume and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and are then coupled together to reveal a single stock's dynamics without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force stimulating both the aggregation of large trading volumes and that of transaction numbers. The state of system-wide synchrony is manifested with very frequent recurrence in the stock dynamics. This data-driven dynamic mechanism is seen to vary correspondingly as the global market transits in and out of contraction-expansion cycles. These results not only elaborate the stock dynamics of interest to a fuller extent, but also contradict some classical theories in finance. Overall this version of stock dynamics is potentially more coherent and realistic, especially when the current financial market is increasingly powered by high-frequency trading via computer algorithms rather than by individual investors.
Channel coding/decoding alternatives for compressed TV data on advanced planetary missions.
Rice, R. F.
1972-01-01
The compatibility of channel coding/decoding schemes with a specific TV compressor developed for advanced planetary missions is considered. Under certain conditions, it is shown that compressed data can be transmitted at approximately the same rate as uncompressed data without any loss in quality. Thus, the full gains of data compression can be achieved in real-time transmission.
Physics Based Modeling of Compressible Turbulence
2016-11-07
AFRL-AFOSR-VA-TR-2016-0345: Physics-Based Modeling of Compressible Turbulence. Parviz Moin, Leland Stanford Junior University, CA. Final report (09/13/2016) on the AFOSR project (FA9550-11-1-0111) entitled: Physics based modeling of compressible turbulence. The period of performance was June 15, 2011 ... [remainder of the SF 298 report documentation page not recoverable]
On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.
Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi
2018-02-01
On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs in both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices are exploited: the deterministic quasi-cyclic array code (QCAC) matrix and the k-sparse random binary matrix (k-SRBM). We demonstrate that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architecture designs are proposed for the QCAC-CS and k-SRBM encoders with reduced area and total power consumption.
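A minimal sketch of a sparse binary measurement matrix of the kind described, with an illustrative construction (not the paper's exact QCAC or SRBM design; the sparsity parameter k is assumed):

```python
import numpy as np

rng = np.random.default_rng(1)

def k_srbm(m, n, k):
    """k-sparse random binary measurement matrix: each column carries only
    k ones, so a hardware encoder needs k additions per input sample
    instead of ~m/2 for a dense Bernoulli matrix."""
    phi = np.zeros((m, n), dtype=np.int8)
    for j in range(n):
        rows = rng.choice(m, size=k, replace=False)  # k random row positions
        phi[rows, j] = 1
    return phi

phi = k_srbm(m=64, n=256, k=4)
x = rng.standard_normal(256)        # a window of neural samples
y = phi @ x                         # CS measurements: sparse additions only
print(phi.sum(axis=0))              # every column has exactly k = 4 nonzeros
```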
Directory of Open Access Journals (Sweden)
Hanxiao Wu
2012-10-01
Full Text Available In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to reduce the high-resolution coded mask requirements and facilitate the storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding motion-target detection and tracking algorithms, working directly on the compressive sampling images, are developed. A mixture of Gaussian distributions is applied in the compressive image space to model the background image and to perform foreground detection. For each motion target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noise templates is sparsely represented. An l1 optimization algorithm is used to solve for the sparse coefficients of the templates. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask can yield better detection results; however, the random Gaussian and Toeplitz phase masks achieve higher-resolution reconstructed images. Our tracking algorithm achieves a real-time speed that is up to 10 times faster than that of the l1 tracker without any optimization.
ESE a 2D compressible multiphase flow code developed for MFCI analysis - code validation
International Nuclear Information System (INIS)
Leskovar, M.; Mavko, B.
1998-01-01
ESE (Evaluation of Steam Explosions) is a general second-order accurate two-dimensional compressible multiphase flow computer code. It has been developed to model the interaction of molten core debris with water during the first premixing stage of a steam explosion. A steam explosion is a physical event which may occur during a severe reactor accident following core meltdown, when the molten fuel comes into contact with the coolant water. Since the exchanges of mass, momentum and energy are regime dependent, different exchange laws have been incorporated in ESE for the major flow regimes. With ESE, a number of premixing experiments performed at Oxford University and at the QUEOS facility at Forschungszentrum Karlsruhe have been simulated. In these premixing experiments, different jets of spheres were injected into a water pool. The ESE validation plan was carefully chosen, starting from very simple, well-defined problems and gradually working up to more complicated ones. The results of the ESE simulations, which were compared to experimental data and also to first-order accurate calculations, are presented in the form of graphs. Most of the ESE results agree qualitatively as well as quantitatively reasonably well with the experimental data, and in general better than the results obtained with the first-order accurate calculation. (author)
Composite Techniques Based Color Image Compression
Directory of Open Access Journals (Sweden)
Zainab Ibrahim Abood
2017-03-01
Full Text Available Compression of color images is now necessary for transmission and storage in databases, since color gives a pleasing and natural appearance to any object, so a color image compression scheme based on three composite techniques is implemented to achieve high compression, no loss in the original image, better performance and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W) and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the least values of bits per pixel (bpp), time (T) and rate distortion R(D). Also, the values of the compression parameters of the color image are nearly the same as the average values of the compression parameters of the three bands of the same image.
Reliability-Based Code Calibration
DEFF Research Database (Denmark)
Faber, M.H.; Sørensen, John Dalsgaard
2003-01-01
The present paper addresses fundamental concepts of reliability based code calibration. First, basic principles of structural reliability theory are introduced and it is shown how the results of FORM based reliability analysis may be related to partial safety factors and characteristic values. Thereafter, the code calibration problem is presented in its principal decision theoretical form and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore, suggested values for acceptable annual failure probabilities are given for ultimate and serviceability limit states. Finally, the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability based code calibration of LRFD based design codes.
WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing
Directory of Open Access Journals (Sweden)
Zhouzhou Liu
2015-01-01
Full Text Available To address wireless-network microseismic monitoring and the problems of low compression ratio and high communication energy consumption, this paper proposes a segmented compression algorithm based on the characteristics of microseismic signals and on the compressive sensing (CS) theory used in the transmission process. The algorithm segments the collected data according to the number of nonzero elements, improving the accuracy of signal reconstruction by reducing the number of combinations of nonzero elements within each segment, while taking advantage of compressive sensing theory to achieve a high compression ratio of the signal. Experimental results show that, with the quantum chaos immune clone refactoring (Q-CSDR) algorithm as the reconstruction algorithm, under the condition of a signal sparsity degree higher than 40 and a compression ratio of more than 0.4, the mean square error is less than 0.01, prolonging the network life by 2 times.
Multispectral data compression through transform coding and block quantization
Ready, P. J.; Wintz, P. A.
1972-01-01
Transform coding and block quantization techniques are applied to multispectral aircraft scanner data, and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
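A compact sketch of Karhunen-Loeve (KLT) block transform coding of the kind studied above, keeping only the highest-variance coefficients; the block size and data are illustrative, and explicit block quantization is omitted for brevity:

```python
import numpy as np

def klt_blocks(blocks, keep):
    """Karhunen-Loeve transform coding of vectorized blocks: estimate the
    block covariance, project onto its leading eigenvectors, and keep only
    the 'keep' highest-variance coefficients per block."""
    mean = blocks.mean(axis=0)
    X = blocks - mean
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    basis = evecs[:, np.argsort(evals)[::-1][:keep]]  # top-variance directions
    coeffs = X @ basis                                # energy-compacting transform
    return coeffs @ basis.T + mean                    # inverse transform

rng = np.random.default_rng(2)
blocks = rng.random((500, 16))        # 500 vectorized 4x4 "multispectral" blocks
rec = klt_blocks(blocks, keep=4)
print(np.mean((rec - blocks) ** 2))   # distortion from the discarded coefficients
```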
System Based Code: Principal Concept
International Nuclear Information System (INIS)
Yasuhide Asada; Masanori Tashimo; Masahiro Ueta
2002-01-01
This paper introduces the concept of the 'System Based Code', which was initially proposed by the authors with the intention of giving the nuclear industry a leap of progress in system reliability, performance improvement, and cost reduction. The concept of the System Based Code is intended to give a theoretical procedure to optimize the reliability of the system by administrating every related engineering requirement throughout the life of the system, from design to decommissioning. (authors)
Fractal Image Compression Based on High Entropy Values Technique
Directory of Open Access Journals (Sweden)
Douaa Younis Abbaas
2018-04-01
Full Text Available Many attempts have been made to improve the encoding stage of fractal image compression (FIC) because it is time consuming. These attempts worked by reducing the size of the search pool for range-domain matching, but most of them led to bad quality or a lower compression ratio of the reconstructed image. This paper aims to present a method to improve the performance of the full search algorithm by combining FIC (lossy compression) with another lossless technique (in this case, entropy coding). The entropy technique reduces the size of the domain pool (i.e., the number of domain blocks) based on the entropy value of each range block and domain block, and the results of the full search algorithm and of the proposed entropy-based algorithm are then compared to see which gives the best results, such as reduced encoding time with acceptable values of both compression quality parameters, C.R (Compression Ratio) and PSNR (Image Quality). The experimental results of the proposed algorithm prove that using the proposed entropy technique reduces the encoding time while keeping the compression ratio and the reconstructed image quality as good as possible.
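The entropy-based domain-pool pruning can be sketched as follows; the histogram bin count and the tolerance are assumptions for illustration:

```python
import numpy as np

def block_entropy(block, bins=16):
    """Shannon entropy of a block's gray-level histogram."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 255))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def prune_domain_pool(range_block, domains, tol=0.5):
    """Keep only domain blocks whose entropy is close to the range block's,
    shrinking the pool before the expensive range-domain matching."""
    e_r = block_entropy(range_block)
    return [d for d in domains if abs(block_entropy(d) - e_r) <= tol]

rng = np.random.default_rng(3)
domains = [rng.integers(0, 256, (8, 8)).astype(float) for _ in range(100)]
range_block = rng.integers(0, 256, (8, 8)).astype(float)
pool = prune_domain_pool(range_block, domains)
print(len(pool), "of", len(domains), "domains survive the entropy filter")
```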
An Implementation Of Elias Delta Code And ElGamal Algorithm In Image Compression And Security
Rachmawati, Dian; Andri Budiman, Mohammad; Saffiera, Cut Amalia
2018-01-01
In data transmission such as transferring an image, confidentiality, integrity, and efficiency of data storage are highly needed. To maintain the confidentiality and integrity of data, one of the techniques used is ElGamal. The strength of this algorithm lies in the difficulty of calculating discrete logarithms in a large prime modulus. ElGamal belongs to the class of asymmetric key algorithms and results in an enlargement of the file size; therefore, data compression is required. Elias Delta Code is one of the compression algorithms that use a delta code table. The image was first compressed using the Elias Delta Code algorithm, and then the result of the compression was encrypted using the ElGamal algorithm. The primality test was implemented using the Agrawal Biswas algorithm. The results showed that the ElGamal method could maintain the confidentiality and integrity of data with MSE and PSNR values of 0 and infinity, respectively. The Elias Delta Code method generated a compression ratio and space savings with average values of 62.49% and 37.51%, respectively.
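The Elias delta code itself is fully specified, so a minimal encoder can be given directly (the compression pipeline around it, and the ElGamal step, are not reproduced here):

```python
def elias_delta(n: int) -> str:
    """Elias delta code of a positive integer n: encode the bit length of n
    with an Elias gamma prefix, then append the bits of n after its
    leading 1."""
    assert n >= 1
    nbits = n.bit_length()                        # N = number of bits in n
    lbits = nbits.bit_length()                    # number of bits in N
    gamma = "0" * (lbits - 1) + bin(nbits)[2:]    # Elias gamma code of N
    return gamma + bin(n)[3:]                     # n without its leading 1

# 1 -> '1', 2 -> '0100', 10 -> '00100010'
for n in (1, 2, 10):
    print(n, elias_delta(n))
```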
Partial Encryption of Entropy-Coded Video Compression Using Coupled Chaotic Maps
Directory of Open Access Journals (Sweden)
Fadi Almasalha
2014-10-01
Full Text Available Due to pervasive communication infrastructures, a plethora of enabling technologies is being developed over mobile and wired networks. Among these, video streaming services over IP are the most challenging in terms of quality, real-time requirements and security. In this paper, we propose a novel scheme to efficiently secure variable length coded (VLC) multimedia bit streams, such as H.264. It is based on codeword error diffusion and variable-size segment shuffling. The codeword diffusion and the shuffling mechanisms are based on random operations from a secure and computationally efficient chaos-based pseudo-random number generator. The proposed scheme is transparent to the end users and can be deployed at any node in the network. It provides different levels of security, with the encrypted data volume fluctuating between 5.5-17%. It works on the compressed bit stream without requiring any decoding. It provides excellent encryption speeds on different platforms, including mobile devices. It is 200% faster and 150% more power efficient when compared with AES software-based full encryption schemes. Regarding security, the scheme is robust to well-known attacks in the literature, such as brute force and known/chosen plaintext attacks.
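A toy sketch of chaos-driven segment shuffling; it uses a single logistic map and fixed-size segments, whereas the paper uses coupled maps, variable-size segments and codeword diffusion, so this is only the flavor of the approach:

```python
import numpy as np

def logistic_keystream(seed: float, n: int, r: float = 3.99):
    """Chaos-based pseudo-random stream from the logistic map
    x <- r * x * (1 - x); seed in (0, 1) acts as the secret key."""
    x, out = seed, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(x)
    return np.array(out)

def shuffle_segments(bitstream: bytes, seg: int, seed: float) -> bytes:
    """Partial 'encryption' by permuting segments of an already compressed
    bitstream with a chaos-driven permutation (inverted with the same key)."""
    segs = [bitstream[i:i + seg] for i in range(0, len(bitstream), seg)]
    order = np.argsort(logistic_keystream(seed, len(segs)))
    return b"".join(segs[i] for i in order)

cipher = shuffle_segments(b"example compressed VLC payload...", seg=4, seed=0.61803)
```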
A New Class of Pulse Compression Codes and Techniques.
1980-03-26
[Figure residue only: block diagrams of transform and inverse-transform digital filter networks driving a Frank code generator with shift registers; no abstract text is recoverable.]
The possibilities of compressed sensing based migration
Aldawood, Ali; Hoteit, Ibrahim; Alkhalifah, Tariq Ali
2013-01-01
Linearized waveform inversion or least-squares migration helps reduce migration artifacts caused by limited acquisition aperture, coarse sampling of sources and receivers, and low subsurface illumination. However, least-squares migration, based on L2-norm minimization of the misfit function, tends to produce a smeared (smoothed) depiction of the true subsurface reflectivity. Assuming that the subsurface reflectivity distribution is a sparse signal, we use a compressed-sensing (Basis Pursuit) algorithm to retrieve this sparse distribution from a small number of linear measurements. We applied a compressed-sensing algorithm to image a synthetic fault model using dense and sparse acquisition geometries. Tests on synthetic data demonstrate the ability of compressed sensing to produce highly resolved migrated images. We also studied the robustness of the Basis Pursuit algorithm in the presence of Gaussian random noise.
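The Basis Pursuit step can be posed as a linear program; a small sketch with an illustrative random sensing setup (not the seismic imaging operators):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1 s.t. Ax = b, rewritten as a linear program via the
    standard split x = u - v with u, v >= 0 (linprog's default bounds)."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])          # A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(4)
n, m, k = 64, 24, 3                    # sparse "reflectivity" toy problem
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = basis_pursuit(A, A @ x_true)
print(np.max(np.abs(x_hat - x_true)))  # near-exact recovery for sparse x
```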
Compression-based inference on graph data
Bloem, P.; van den Bosch, A.; Heskes, T.; van Leeuwen, D.
2013-01-01
We investigate the use of compression-based learning on graph data. General purpose compressors operate on bitstrings or other sequential representations. A single graph can be represented sequentially in many ways, which may influence the performance of sequential compressors. Using Normalized
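The abstract is cut off, but the standard tool in compression-based learning is the Normalized Compression Distance (NCD), which the truncated sentence most likely refers to. A minimal sketch using zlib as the general-purpose compressor (the graph encodings are illustrative):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(.) is the compressed size under a real compressor."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Two sequential encodings (e.g., edge lists) of graphs: similar graphs
# should compress well together and hence score a lower NCD.
g1 = b"0-1 1-2 2-3 3-0 0-2"
g2 = b"0-1 1-2 2-3 3-0 1-3"
print(ncd(g1, g2), ncd(g1, b"completely different payload 12345"))
```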
Password Authentication Based on Fractal Coding Scheme
Directory of Open Access Journals (Sweden)
Nadia M. G. Al-Saidi
2012-01-01
Full Text Available Password authentication is a mechanism used to authenticate user identity over an insecure communication channel. In this paper, a new method to improve the security of password authentication is proposed. It is based on the compression capability of fractal image coding to provide an authorized user secure access to the registration and login process. In the proposed scheme, a hashed password string is generated and encrypted to be captured together with the user identity using text-to-image mechanisms. The advantage of fractal image coding is that the compressed image data can be sent securely through a non-secured communication channel to the server. The verification of the client information against the database system is achieved at the server to authenticate the legal user. The encrypted hashed password in the decoded fractal image is recognized using optical character recognition. The authentication process is performed after a successful verification of the client identity by comparing the decrypted hashed password with the one stored in the database system. The system is analyzed and discussed from the attacker's viewpoint. A security comparison is performed to show that the proposed scheme provides an essential security requirement, while its efficiency makes it easy to apply alone or in hybrid with other security methods. Computer simulation and statistical analysis are presented.
Image Coding Based on Address Vector Quantization.
Feng, Yushu
Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of the different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the Kohonen neural network for codebook design. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in hierarchical VQ, multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but the bit rate is about 1/2 to 1/3 of that of the normal VQ method. In chapter 5, a dynamic finite-state VQ, based on a probability transition matrix to select the best sub-codebook to encode the image, is developed. In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing
Facial Image Compression Based on Structured Codebooks in Overcomplete Domain
Directory of Open Access Journals (Sweden)
Vila-Forcén JE
2006-01-01
Full Text Available We advocate a facial image compression technique in the scope of the distributed source coding framework. The novelty of the proposed approach is twofold: image compression is considered from the position of source coding with side information and, contrary to the existing scenarios where the side information is given explicitly, the side information is created based on a deterministic approximation of the local image features. We consider an image in the overcomplete transform domain as a realization of a random source with a structured codebook of symbols, where each symbol represents a particular edge shape. Due to the partial availability of the side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate the possible gain over the solutions where side information is either unavailable or available only at the decoder. Finally, the paper presents a practical image compression algorithm for facial images based on our concept that demonstrates superior performance in the very-low-bit-rate regime.
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio of less than 1.72 bits/base.
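As a baseline for bit-level DNA coding of the kind described, the sketch below packs bases at 2 bits each; DNABIT-style unique bit codes for repeat and reverse-repeat fragments are what push the rate below 2 bits/base, and are not reproduced here:

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack_dna(seq: str) -> bytes:
    """Pack 4 bases per byte (2 bits/base) -- the naive baseline that
    repeat-aware schemes like DNABIT Compress improve on."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        byte = 0
        for base in chunk:
            byte = (byte << 2) | CODE[base]
        byte <<= 2 * (4 - len(chunk))   # pad the final partial byte
        out.append(byte)
    return bytes(out)

packed = pack_dna("ACGTACGTAC")
print(len(packed), "bytes for 10 bases")   # 3 bytes here; ~2 bits/base asymptotically
```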
Optimal, Reliability-Based Code Calibration
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard
2002-01-01
Reliability based code calibration is considered in this paper. It is described how the results of FORM based reliability analysis may be related to the partial safety factors and characteristic values. The code calibration problem is presented in a decision theoretical form and it is discussed how … of reliability based code calibration of LRFD based design codes.
Schwarz-based algorithms for compressible flows
Energy Technology Data Exchange (ETDEWEB)
Tidriri, M.D. [ICASE, Hampton, VA (United States)
1996-12-31
To compute steady compressible flows one often uses an implicit discretization approach which leads to a large sparse linear system that must be solved at each time step. In the derivation of this system one often uses a defect-correction procedure, in which the left-hand side of the system is discretized with a lower order approximation than that used for the right-hand side. This is due to storage considerations and computational complexity, and also to the fact that the resulting lower order matrix is better conditioned than the higher order matrix. The resulting schemes are only moderately implicit. In the case of structured, body-fitted grids, the linear system can easily be solved using approximate factorization (AF), which is among the most widely used methods for such grids. However, for unstructured grids, such techniques are no longer valid, and the system is solved using direct or iterative techniques. Because of the prohibitive computational costs and large memory requirements for the solution of compressible flows, iterative methods are preferred. In these defect-correction methods, which are implemented in most CFD computer codes, the mismatch in the right and left hand side operators, together with explicit treatment of the boundary conditions, lead to a severely limited CFL number, which results in a slow convergence to steady state aerodynamic solutions. Many authors have tried to replace explicit boundary conditions with implicit ones. Although they clearly demonstrate that high CFL numbers are possible, the reduction in CPU time is not clear cut.
Verification testing of the compression performance of the HEVC screen content coding extensions
Sullivan, Gary J.; Baroncini, Vittorio A.; Yu, Haoping; Joshi, Rajan L.; Liu, Shan; Xiu, Xiaoyu; Xu, Jizheng
2017-09-01
This paper reports on verification testing of the coding performance of the screen content coding (SCC) extensions of the High Efficiency Video Coding (HEVC) standard (Rec. ITU-T H.265 | ISO/IEC 23008-2 MPEG-H Part 2). The coding performance of HEVC screen content model (SCM) reference software is compared with that of the HEVC test model (HM) without the SCC extensions, as well as with the Advanced Video Coding (AVC) joint model (JM) reference software, for both lossy and mathematically lossless compression using All-Intra (AI), Random Access (RA), and Low-delay B (LB) encoding structures and using similar encoding techniques. Video test sequences in 1920×1080 RGB 4:4:4, YCbCr 4:4:4, and YCbCr 4:2:0 colour sampling formats with 8 bits per sample are tested in two categories: "text and graphics with motion" (TGM) and "mixed" content. For lossless coding, the encodings are evaluated in terms of relative bit-rate savings. For lossy compression, subjective testing was conducted at 4 quality levels for each coding case, and the test results are presented through mean opinion score (MOS) curves. The relative coding performance is also evaluated in terms of Bjøntegaard-delta (BD) bit-rate savings for equal PSNR quality. The perceptual tests and objective metric measurements show a very substantial benefit in coding efficiency for the SCC extensions, and provided consistent results with a high degree of confidence. For TGM video, the estimated bit-rate savings ranged from 60-90% relative to the JM and 40-80% relative to the HM, depending on the AI/RA/LB configuration category and colour sampling format.
Compressive sensing based ptychography image encryption
Rawat, Nitin
2015-09-01
A compressive sensing (CS) based ptychography scheme combined with optical image encryption is proposed. The diffraction pattern is recorded through the ptychography technique and further compressed by non-uniform sampling via the CS framework. The system requires much less encrypted data and provides high security. The diffraction pattern, as well as the reduced set of measurements of the encrypted samples, serves as a secret key, which makes intruder attacks more difficult. Furthermore, CS shows that a few linearly projected random samples carry adequate information for decryption with a dramatic volume reduction. Experimental results validate the feasibility and effectiveness of our proposed technique compared with the existing techniques. The retrieved images do not reveal any information about the original images. In addition, the proposed system can be robust even with partial encryption and under brute-force attacks.
Duong, Hieu N.; Snasel, Vaclav
2016-01-01
We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly based on its corresponding n-gram dictionary. We collected 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five grams and achieve dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files with different sizes. The experimental results indicate that our method achieves compression ratio around 90% and outperforms state-of-the-art methods. PMID:27965708
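A toy version of the sliding-window n-gram encoder follows; the dictionaries, the window order (here trigrams down to unigrams), and the integer ids are illustrative stand-ins for the real 12 GB dictionaries and 2-4 byte codes:

```python
# Toy n-gram dictionary encoder in the spirit of the method: slide a window
# over the tokens, preferring the longest n-gram that has a dictionary hit.
DICTS = {
    3: {("chao", "viet", "nam"): 0},                  # placeholder trigram dict
    2: {("xin", "chao"): 0, ("viet", "nam"): 1},      # placeholder bigram dict
    1: {"xin": 0, "chao": 1, "viet": 2, "nam": 3, "toi": 4, "yeu": 5},
}

def encode(words):
    out, i = [], 0
    while i < len(words):
        for n in (3, 2):                              # longest match first
            gram = tuple(words[i:i + n])
            if gram in DICTS[n]:
                out.append((n, DICTS[n][gram]))
                i += n
                break
        else:                                         # fall back to unigrams
            out.append((1, DICTS[1].get(words[i], -1)))
            i += 1
    return out          # each (n, id) pair would be serialized in 2-4 bytes

print(encode("toi yeu viet nam".split()))             # [(1, 4), (1, 5), (2, 1)]
```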
DEFF Research Database (Denmark)
Encina Llamas, Gerard; Harte, James M.; Epp, Bastian
2015-01-01
The compressive nonlinearity of the auditory system is assumed to be an epiphenomenon of a healthy cochlea and, particularly, of outer-hair cell function. Another ability of the healthy auditory system is to enable communication in acoustical environments with high-level background noises … Evaluation of these properties provides information about the health state of the system. It has been shown that a loss of outer hair cells leads to a reduction in peripheral compression. It has also recently been shown in animal studies that noise over-exposure, producing temporary threshold shifts, can … The results indicate that the slope of the ASSR level growth function can be used to estimate peripheral compression simultaneously at four frequencies below 60 dB SPL, while the slope above 60 dB SPL may provide information about the integrity of intensity coding of low-SR fibers.
Islam, Muhammad F.; Islam, Mohammed N.
2012-04-01
The objective of this paper is to develop a novel approach for encryption and compression of biometric information utilizing orthogonal coding and steganography techniques. Multiple biometric signatures are encrypted individually using orthogonal codes and then multiplexed together to form a single image, which is then embedded in a cover image using the proposed steganography technique. The proposed technique employs three least significant bits for this purpose and a secret key is developed to choose one from among these bits to be replaced by the corresponding bit of the biometric image. The proposed technique offers secure transmission of multiple biometric signatures in an identification document which will be protected from unauthorized steganalysis attempt.
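A sketch of the key-selected LSB embedding described above; the array sizes and the key generation are illustrative, and the orthogonal-coding multiplexing stage is assumed to have already produced the bit stream being hidden:

```python
import numpy as np

rng = np.random.default_rng(5)

def embed(cover: np.ndarray, bits: np.ndarray, key: np.ndarray) -> np.ndarray:
    """For each cover pixel, the secret key selects which of the three least
    significant bits is replaced by one bit of the (already multiplexed and
    encrypted) biometric payload."""
    stego = cover.copy()
    for i, (b, k) in enumerate(zip(bits, key)):        # k in {0, 1, 2}
        stego[i] = (stego[i] & ~np.uint8(1 << k)) | (np.uint8(b) << k)
    return stego

cover = rng.integers(0, 256, 64, dtype=np.uint8)       # cover-image pixels
bits = rng.integers(0, 2, 64)                          # payload bits
key = rng.integers(0, 3, 64)                           # secret bit-position key
stego = embed(cover, bits, key)
print(np.max(np.abs(stego.astype(int) - cover.astype(int))))  # distortion <= 4
```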
Edge-preserving Intra Depth Coding based on Context-coding and H.264/AVC
DEFF Research Database (Denmark)
Zamarin, Marco; Salmistraro, Matteo; Forchhammer, Søren
2013-01-01
Depth map coding plays a crucial role in 3D video communication systems based on the "Multi-view Video plus Depth" representation, as view synthesis performance is strongly affected by the accuracy of depth information, especially at edges in the depth map image. In this paper an efficient algorithm for edge-preserving intra depth compression based on H.264/AVC is presented. The proposed method introduces a new intra mode specifically targeted to depth macroblocks with arbitrarily shaped edges, which are typically not efficiently represented by the DCT. Edge macroblocks are partitioned into two regions, each approximated by a flat surface. Edge information is encoded by means of context-coding with an adaptive template. As a novel element, the proposed method allows exploiting the edge structure of previously encoded edge macroblocks during the context-coding step to further increase compression …
Rice, R. F.
1974-01-01
End-to-end system considerations involving channel coding and data compression are reported which could drastically improve the efficiency in communicating pictorial information from future planetary spacecraft. In addition to presenting new and potentially significant system considerations, this report attempts to fill a need for a comprehensive tutorial which makes much of this very subject accessible to readers whose disciplines lie outside of communication theory.
Directory of Open Access Journals (Sweden)
Jerry D. Gibson
2016-06-01
Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.
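The linear prediction model underlying these codecs can be sketched via the autocorrelation method; a minimal version follows (the order, signal, and frame length are illustrative, and a real codec quantizes the coefficients or an LSF equivalent):

```python
import numpy as np

def lpc(signal, order):
    """Linear prediction coefficients via the autocorrelation method:
    solve the Yule-Walker normal equations R a = r for the predictor
    x[n] ~ sum_k a[k] * x[n-1-k]."""
    x = signal - signal.mean()
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

fs = 8000                                   # narrowband speech rate
t = np.arange(0, 0.02, 1 / fs)              # one 20 ms frame
x = np.sin(2 * np.pi * 440 * t) \
    + 0.01 * np.random.default_rng(6).standard_normal(t.size)
a = lpc(x, order=10)                        # short-term predictor per frame
```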
Directory of Open Access Journals (Sweden)
Valenzise G
2009-01-01
Full Text Available In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially due to the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges between 20% and 70%.
DEFF Research Database (Denmark)
Bianchi, Federica; Santurette, Sébastien; Fereczkowski, Michal
2015-01-01
Recent physiological studies in animals showed that noise-induced sensorineural hearing loss (SNHL) increased the amplitude of envelope coding in single auditory-nerve fibers. The present study investigated whether SNHL in human listeners was associated with enhanced temporal envelope coding, whether this enhancement affected pitch discrimination performance, and whether loss of compression following SNHL was a potential factor in envelope coding enhancement. Envelope processing was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in a behavioral amplitude … resolvability. For the unresolved conditions, all five HI listeners performed as well as or better than NH listeners with matching musical experience. Two HI listeners showed lower amplitude-modulation detection thresholds than NH listeners for low modulation rates, and one of these listeners also showed a loss …
Adaptive variable-length coding for efficient compression of spacecraft television data.
Rice, R. F.; Plaunt, J. R.
1971-01-01
An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
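The per-block code selection can be illustrated with Golomb-Rice codes, which grew out of this line of work; the candidate parameter set and sample values are assumptions for illustration:

```python
def rice_encode(n: int, k: int) -> str:
    """Golomb-Rice code for a nonnegative integer (e.g., a prediction
    residual after zigzag mapping): unary quotient, then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def encode_block(samples, ks=(0, 1, 2, 3, 4)):
    """Per-block adaptation in the spirit of the approach: try a few
    candidate codes and keep the cheapest, signaling the choice as a tiny
    piece of side information."""
    best = min(ks, key=lambda k: sum(len(rice_encode(s, k)) for s in samples))
    return best, "".join(rice_encode(s, best) for s in samples)

# Small nonnegative residuals compress well with a small parameter k.
k, bits = encode_block([0, 1, 3, 2, 0, 1, 5, 2])
print(k, len(bits), "bits for 8 samples")
```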
Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
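The "lossy plus residual" principle with a guaranteed maximum absolute error can be sketched with an SVD lossy layer (a stand-in for the paper's matrix/tensor decompositions; the rank, error bound, and synthetic data are assumptions):

```python
import numpy as np

def near_lossless(X, rank, eps):
    """Lossy-plus-residual coding sketch: a low-rank lossy layer, then a
    uniformly quantized residual whose step 2*eps bounds the per-sample
    reconstruction error by eps (the integers q would be entropy coded)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    lossy = U[:, :rank] * s[:rank] @ Vt[:rank]          # decorrelating layer
    residual = X - lossy
    q = np.round(residual / (2 * eps)).astype(int)      # residual layer
    return lossy + q * 2 * eps                          # |error| <= eps everywhere

rng = np.random.default_rng(7)
eeg = rng.standard_normal((16, 512)).cumsum(axis=1)    # 16 correlated channels
rec = near_lossless(eeg, rank=4, eps=0.05)
print(np.max(np.abs(rec - eeg)) <= 0.05)               # True: guaranteed bound
```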
A Review on Block Matching Motion Estimation and Automata Theory based Approaches for Fractal Coding
Directory of Open Access Journals (Sweden)
Shailesh Kamble
2016-12-01
Full Text Available Fractal compression is a lossy compression technique in the field of gray/color image and video compression. It gives a high compression ratio and better image quality with fast decoding time, but improvement of the encoding time remains a challenge. This review paper presents an analysis of the most significant existing approaches in the field of fractal based gray/color image and video compression, of different block matching motion estimation approaches for finding the motion vectors in a frame based on inter-frame coding and intra-frame coding (i.e., individual frame coding), and of automata theory based coding approaches to represent an image or a sequence of images. Though different review papers exist related to fractal coding, this paper is different in many respects. One can develop a new shape pattern for motion estimation and modify the existing block matching motion estimation with automata coding to explore the fractal compression technique, with specific focus on reducing the encoding time and achieving better image/video reconstruction quality. This paper is useful for beginners in the domain of video compression.
Subsampling-based compression and flow visualization
Energy Technology Data Exchange (ETDEWEB)
Agranovsky, Alexy; Camp, David; Joy, I; Childs, Hank
2016-01-19
As computational capabilities increasingly outpace disk speeds on leading supercomputers, scientists will, in turn, be increasingly unable to save their simulation data at its native resolution. One solution to this problem is to compress these data sets as they are generated and visualize the compressed results afterwards. We explore this approach, specifically subsampling velocity data and the resulting errors for particle advection-based flow visualization. We compare three techniques: random selection of subsamples, selection at regular locations corresponding to multi-resolution reduction, and introduce a novel technique for informed selection of subsamples. Furthermore, we explore an adaptive system which exchanges the subsampling budget over parallel tasks, to ensure that subsampling occurs at the highest rate in the areas that need it most. We perform supercomputing runs to measure the effectiveness of the selection and adaptation techniques. Overall, we find that adaptation is very effective, and, among selection techniques, our informed selection provides the most accurate results, followed by the multi-resolution selection, and with the worst accuracy coming from random subsamples.
Application of content-based image compression to telepathology
Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace
2002-05-01
Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
Memory hierarchy using row-based compression
Loh, Gabriel H.; O'Connor, James M.
2016-10-25
A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
Research on compressive sensing reconstruction algorithm based on total variation model
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical basis for sampling and compressing image signals simultaneously. Imaging procedures based on compressed sensing theory not only reduce storage space but also greatly reduce the required detector resolution. By exploiting the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. Reconstruction based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and preserves edge information well. To verify the performance of the algorithm, we simulate and analyze the reconstruction results of the TV-based algorithm under different coding modes to verify its stability, and we compare typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is found by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, this reconstruction algorithm has great advantages and can recover the target image quickly and accurately at low measurement rates.
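The augmented-Lagrangian/alternating-direction step described above can be sketched for a 1D signal as follows (a minimal split-Bregman-style toy solver, not the authors' implementation; the parameters lam, mu and the test signal are illustrative):

```python
import numpy as np

def tv_admm(A, b, n, lam=1.0, mu=10.0, iters=200):
    """Minimal alternating-direction sketch for 1D TV-regularized CS:
    min_u ||D u||_1 + (mu/2)||A u - b||^2, D = forward-difference operator."""
    D = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]   # (n-1) x n differences
    u = np.zeros(n)
    d = np.zeros(n - 1)      # splitting variable, d ~ D u
    c = np.zeros(n - 1)      # scaled dual variable
    M = lam * D.T @ D + mu * A.T @ A                   # fixed across iterations
    Atb = mu * A.T @ b
    for _ in range(iters):
        u = np.linalg.solve(M, Atb + lam * D.T @ (d - c))   # u-subproblem
        v = D @ u + c
        d = np.sign(v) * np.maximum(np.abs(v) - 1.0 / lam, 0.0)  # soft threshold
        c = v - d                                            # dual update
    return u

# Piecewise-constant test signal recovered from 40 random measurements.
rng = np.random.default_rng(1)
n = 100
x = np.zeros(n); x[20:50] = 1.0; x[60:80] = -0.5
A = rng.standard_normal((40, n)) / np.sqrt(40)
u = tv_admm(A, A @ x, n)
print(np.linalg.norm(u - x) / np.linalg.norm(x))   # relative reconstruction error
```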
MULTISTAGE BITRATE REDUCTION IN ABSOLUTE MOMENT BLOCK TRUNCATION CODING FOR IMAGE COMPRESSION
Directory of Open Access Journals (Sweden)
S. Vimala
2012-05-01
Full Text Available Absolute Moment Block Truncation Coding (AMBTC) is one of the lossy image compression techniques. Its computational complexity is low and the quality of the reconstructed images is appreciable. The normal AMBTC method requires 2 bits per pixel (bpp). In this paper, two novel ideas are incorporated into the AMBTC method to improve the coding efficiency. Generally, quality degrades as the bit-rate is reduced, but in the proposed method the quality of the reconstructed image increases as the bit-rate decreases. The proposed method has been tested with standard images like Lena, Barbara, Bridge, Boats and Cameraman. The results obtained are better than those of the existing AMBTC method in terms of both bit-rate and the quality of the reconstructed images.
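For reference, the plain AMBTC step that the proposed method builds on works as sketched below (the standard algorithm; the paper's bitrate-reduction stages are not shown):

```python
import numpy as np

def ambtc_encode(block):
    """Absolute Moment Block Truncation Coding of one k x k block: keep a bit
    plane plus the means of the pixels above/below the block mean."""
    m = block.mean()
    bitmap = block >= m
    hi = block[bitmap].mean() if bitmap.any() else m
    lo = block[~bitmap].mean() if (~bitmap).any() else m
    return bitmap, np.uint8(round(lo)), np.uint8(round(hi))

def ambtc_decode(bitmap, lo, hi):
    return np.where(bitmap, hi, lo).astype(np.uint8)

block = np.array([[12, 200, 30, 190]] * 4, dtype=np.uint8)
bitmap, lo, hi = ambtc_encode(block)
# 16 bitmap bits + two 8-bit means over 16 pixels = 2 bpp for a 4x4 block.
print(ambtc_decode(bitmap, lo, hi))
```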
Accelerated Compressed Sensing Based CT Image Reconstruction.
Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C
2015-01-01
In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.
Compression-Based Tools for Navigation with an Image Database
Directory of Open Access Journals (Sweden)
Giovanni Motta
2012-01-01
Full Text Available We present tools that can be used within a larger system referred to as a passive assistant. The system receives information from a mobile device, as well as information from an image database such as Google Street View, and employs image processing to provide useful information about a local urban environment to a user who is visually impaired. The first stage acquires and computes accurate location information, the second stage performs texture and color analysis of a scene, and the third stage provides specific object recognition and navigation information. These second and third stages rely on compression-based tools (dimensionality reduction, vector quantization, and coding) that are enhanced by knowledge of the (approximate) location of objects.
Implementation of LT codes based on chaos
International Nuclear Information System (INIS)
Zhou Qian; Li Liang; Chen Zengqiang; Zhao Jiaxiang
2008-01-01
Fountain codes provide an efficient way to transfer information over erasure channels like the Internet. LT codes are the first codes fully realizing the digital fountain concept. They are asymptotically optimal rateless erasure codes with highly efficient encoding and decoding algorithms. In theory, for each encoding symbol of an LT code, its degree is randomly chosen according to a predetermined degree distribution, and the neighbours used to generate that encoding symbol are chosen uniformly at random. Practical implementations of LT codes usually realize the randomness through a pseudo-random number generator such as the linear congruential method. This paper applies the pseudo-randomness of chaotic sequences in the implementation of LT codes. Two Kent chaotic maps are used to determine the degree and neighbour(s) of each encoding symbol. It is shown that the implemented LT codes based on chaos perform better than LT codes implemented with a traditional pseudo-random number generator. (general)
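A toy sketch of the idea follows (the Kent map parameter, the seeds and the use of the ideal soliton distribution are assumptions for brevity; the paper may use a robust soliton distribution and other settings):

```python
import numpy as np

def kent(x, a=0.4):
    """Kent (tent-like) chaotic map on (0, 1)."""
    return x / a if x < a else (1.0 - x) / (1.0 - a)

def ideal_soliton_cdf(k):
    p = np.array([1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)])
    return np.cumsum(p / p.sum())

def lt_encode(blocks, n_symbols, x_deg=0.123, x_nbr=0.456):
    """LT encoding where two Kent-map orbits replace the usual PRNG:
    one orbit picks each symbol's degree, the other its neighbour indices."""
    k, cdf, out = len(blocks), ideal_soliton_cdf(len(blocks)), []
    for _ in range(n_symbols):
        x_deg = kent(x_deg)
        degree = int(np.searchsorted(cdf, x_deg)) + 1
        nbrs = set()
        while len(nbrs) < degree:
            x_nbr = kent(x_nbr)
            nbrs.add(int(x_nbr * k) % k)
        sym = 0
        for i in nbrs:                     # XOR the chosen source blocks
            sym ^= blocks[i]
        out.append((sorted(nbrs), sym))
    return out

encoded = lt_encode([0x12, 0x34, 0x56, 0x78], n_symbols=8)
```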
Compressed Sensing, Pseudodictionary-Based, Superresolution Reconstruction
Directory of Open Access Journals (Sweden)
Chun-mei Li
2016-01-01
Full Text Available The spatial resolution of digital images is a critical factor affecting photogrammetric precision. Single-frame super-resolution image reconstruction is a typical underdetermined inverse problem. To solve this type of problem, a compressive-sensing, pseudodictionary-based super-resolution reconstruction method is proposed in this study. The proposed method achieves pseudodictionary learning from an available low-resolution image using the K-SVD algorithm, exploiting the sparse characteristics of the digital image. Then, the sparse representation coefficients of the low-resolution image are obtained by solving an l0-norm minimization problem, and the sparse coefficients and the high-resolution pseudodictionary are used to reconstruct image tiles with high resolution. Finally, single-frame-image super-resolution reconstruction is achieved. The proposed method is applied to photogrammetric images, and the experimental results indicate that it effectively increases image resolution, increases image information content, and achieves super-resolution reconstruction. The reconstructed results are better than those obtained from traditional interpolation methods in terms of both visual effects and quantitative indicators.
Compressive sensing based algorithms for electronic defence
Mishra, Amit Kumar
2017-01-01
This book details some of the major developments in the implementation of compressive sensing in radio applications for electronic defense and warfare communication use. It provides a comprehensive background to the subject and at the same time describes some novel algorithms. It also investigates application value and performance-related parameters of compressive sensing in scenarios such as direction finding, spectrum monitoring, detection, and classification.
Group representations, error bases and quantum codes
Energy Technology Data Exchange (ETDEWEB)
Knill, E
1996-01-01
This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.
ECG biometric identification: A compression based approach.
Bras, Susana; Pinho, Armando J
2015-08-01
Using the electrocardiogram signal (ECG) to identify and/or authenticate persons is a problem still lacking satisfactory solutions. Yet, the ECG possesses characteristics that are unique or difficult to obtain from other signals used in biometrics: (1) it requires contact and liveliness for acquisition; (2) it changes under stress, rendering it potentially useless if acquired under threat. Our main objective is to present an innovative and robust solution to the above-mentioned problem. To achieve this goal, we rely on information-theoretic data models for data compression and on similarity metrics related to the approximation of the Kolmogorov complexity. The proposed measure allows the comparison of two (or more) ECG segments, without having to follow traditional approaches that require heartbeat segmentation (described as highly influenced by external or internal interferences). As a first approach, the method was able to cluster the data in three groups: identical record, same participant, different participant, by stratification of the proposed measure, with values near 0 for the same participant and closer to 1 for different participants. A leave-one-out strategy was implemented in order to identify a participant in the database based on his/her ECG. A 1NN classifier was implemented, using as distance measure the method proposed in this work. The classifier was able to correctly identify almost all participants, with an accuracy of 99% on the database used.
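The compression-based similarity at the heart of this approach can be approximated with any off-the-shelf compressor. A sketch using zlib as a stand-in for the paper's information-theoretic data models (ECG segments are assumed to be already quantized to byte strings):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable proxy for the
    (uncomputable) information distance based on Kolmogorov complexity."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def identify(query: bytes, enrolled: dict) -> str:
    """1NN identification: attribute the query segment to the participant whose
    enrolled segment compresses best together with it (values near 0 = same source)."""
    return min(enrolled, key=lambda pid: ncd(query, enrolled[pid]))
```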
EPC: A Provably Secure Permutation Based Compression Function
DEFF Research Database (Denmark)
Bagheri, Nasour; Gauravaram, Praveen; Naderi, Majid
2010-01-01
The security of permutation-based hash functions in the ideal permutation model has been studied when the input-length of compression function is larger than the input-length of the permutation function. In this paper, we consider permutation based compression functions that have input lengths sh...
Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing
Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong
2016-08-01
Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer encryption data expansion when adopting nonlinear transformation directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and, as a nonlinear encryption system, simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
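A rough sketch of the two-directional measurement and the row-wise cycle shift (the Gaussian measurement matrices and the fixed shift offsets are placeholders; the paper derives both from a hyper-chaotic system):

```python
import numpy as np

def measure_2d(X, m1, m2, rng=np.random.default_rng(42)):
    """2D compressive measurement: sample the image along rows and columns."""
    Phi1 = rng.standard_normal((m1, X.shape[0])) / np.sqrt(m1)
    Phi2 = rng.standard_normal((m2, X.shape[1])) / np.sqrt(m2)
    return Phi1 @ X @ Phi2.T

def cycle_shift_encrypt(Y, shifts):
    """Re-encrypt by cyclically shifting each row by a chaos-driven offset."""
    return np.stack([np.roll(row, s) for row, s in zip(Y, shifts)])

X = np.random.default_rng(0).random((64, 64))
Y = measure_2d(X, 32, 32)
enc = cycle_shift_encrypt(Y, shifts=np.arange(32) * 7 % 32)  # placeholder offsets
```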
Mojica, Edson; Pertuz, Said; Arguello, Henry
2017-12-01
One of the main challenges in Computed Tomography (CT) is obtaining accurate reconstructions of the imaged object while keeping a low radiation dose in the acquisition process. In order to solve this problem, several researchers have proposed the use of compressed sensing for reducing the amount of measurements required to perform CT. This paper tackles the problem of designing high-resolution coded apertures for compressed sensing computed tomography. In contrast to previous approaches, we aim at designing apertures to be used with low-resolution detectors in order to achieve super-resolution. The proposed method iteratively improves random coded apertures using a gradient descent algorithm subject to constraints in the coherence and homogeneity of the compressive sensing matrix induced by the coded aperture. Experiments with different test sets show consistent results for different transmittances, number of shots and super-resolution factors.
Blind compressed sensing image reconstruction based on alternating direction method
Liu, Qinan; Guo, Shuxu
2018-04-01
In order to solve the problem of how to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on the existing blind compressed sensing theory, the optimal solution is found by alternating minimization. The proposed method addresses the difficulty of choosing a sparse basis in compressed sensing, suppresses noise and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with strong adaptability. The experimental results show that the proposed blind compressed sensing reconstruction algorithm can recover high-quality image signals under under-sampling conditions.
A novel ECG data compression method based on adaptive Fourier decomposition
Tan, Chunyu; Zhang, Liming
2017-12-01
This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most high-performance algorithms, our method does not apply any preprocessing operation before compression. Huffman coding is employed for further compression. Validated on 48 ECG recordings of the MIT-BIH arrhythmia database, the proposed method achieves a compression ratio (CR) of 35.53 and a percentage root mean square difference (PRD) of 1.47% on average, with N = 8 decomposition steps and a robust PRD-CR relationship. The results demonstrate that the proposed method performs well compared with state-of-the-art ECG compressors.
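For reference, the two figures of merit quoted above are conventionally defined as follows (x is the original signal and x̃ its reconstruction; conventions for baseline handling in the PRD vary between papers):

```latex
\mathrm{CR} = \frac{b_{\text{original}}}{b_{\text{compressed}}}, \qquad
\mathrm{PRD} = \sqrt{\frac{\sum_{n=1}^{N}\bigl(x(n)-\tilde{x}(n)\bigr)^{2}}
                          {\sum_{n=1}^{N}x^{2}(n)}} \times 100\%
```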
Worst configurations (instantons) for compressed sensing over reals: a channel coding approach
International Nuclear Information System (INIS)
Chertkov, Michael; Chilappagari, Shashi K.; Vasic, Bane
2010-01-01
We consider the Linear Programming (LP) solution of a Compressed Sensing (CS) problem over reals, also known as the Basis Pursuit (BasP) algorithm. BasP allows interpretation as a channel-coding problem, and it guarantees error-free reconstruction over reals for a properly chosen measurement matrix and sufficiently sparse error vectors. In this manuscript, we examine how BasP performs on a given measurement matrix and develop a technique to discover sparse vectors for which BasP fails. The resulting algorithm is a generalization of our previous results on finding the most probable error patterns, so-called instantons, degrading the performance of a finite-size Low-Density Parity-Check (LDPC) code in the error-floor regime. BasP fails when its output differs from the actual error pattern. We design a CS-Instanton Search Algorithm (ISA) generating a sparse vector, called a CS-instanton, such that BasP fails on the instanton, while its action on any modification of the CS-instanton that decreases a properly defined norm is successful. We also prove that, given a sufficiently dense random input for the error vector, the CS-ISA converges to an instanton in a small finite number of steps. The performance of the CS-ISA is tested on the example of a randomly generated 512 × 120 matrix, which yields a shortest instanton (error vector) pattern of length 11.
Piecewise spectrally band-pass for compressive coded aperture spectral imaging
International Nuclear Information System (INIS)
Qian Lu-Lu; Lü Qun-Bo; Huang Min; Xiang Li-Bin
2015-01-01
Coded aperture snapshot spectral imaging (CASSI) has been discussed in recent years. It has the remarkable advantages of high optical throughput, snapshot imaging, etc. The entire spatial-spectral data-cube can be reconstructed from just a single two-dimensional (2D) compressive sensing measurement. On the other hand, for less spectrally sparse scenes, insufficient sparse sampling and aliasing in the spatial-spectral images reduce the accuracy of the reconstructed three-dimensional (3D) spectral cube. To solve this problem, this paper extends the improved CASSI: a band-pass filter array is mounted on the coded mask, dividing the first image plane into several continuous spectral sub-band areas, and the entire 3D spectral cube is captured through relative movement between the object and the instrument. The principle analysis and imaging simulation are presented. Comparing the peak signal-to-noise ratio (PSNR) and the information entropy of images reconstructed with different numbers of spectral sub-band areas, the reconstructed 3D spectral cube reveals an observable improvement in reconstruction fidelity as the number of sub-bands increases and the number of spectral channels per sub-band simultaneously decreases. (paper)
Multispectral image compression based on DSC combined with CCSDS-IDC.
Li, Jin; Xing, Fei; Sun, Ting; You, Zheng
2014-01-01
Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually operate on board a satellite, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like 3D DWT or 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with a Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with the CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches.
A Novel Error Resilient Scheme for Wavelet-based Image Coding Over Packet Networks
WenZhu Sun; HongYu Wang; DaXing Qian
2012-01-01
This paper presents a robust transmission strategy for wavelet-based scalable bit streams over packet erasure channels. By taking advantage of bit plane coding and multiple description coding, the proposed strategy adopts layered multiple description coding (LMDC) for embedded wavelet coders to improve the error resistance of the important bit planes in the sense of the D(R) function. Then, the post-compression rate-distortion (PCRD) optimization process is used to impro...
Medical image compression based on vector quantization with variable block sizes in wavelet domain.
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostically relevant information of the medical image at a high compression ratio. Wavelet transformation is first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method is exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size is implemented. In the novel vector quantization method, the local fractal dimension (LFD) is used to analyze the local complexity of each wavelet coefficient subband. An optimal quadtree method is then employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function is used in the codebook training phase. Finally, vector quantization coding is applied to the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000 and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method improves compression performance and achieves a balance between the compression ratio and the image visual quality.
A new hyperspectral image compression paradigm based on fusion
Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto
2016-10-01
The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite that carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the earth surface, where they are fused using a fusion algorithm for hyperspectral and multispectral images, in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image; the results obtained corroborate the benefits of the proposed methodology.
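The two on-board degradation steps can be sketched as follows (box-average spatial degradation and a flat spectral response are placeholder choices; the paper leaves the concrete degradation operators open):

```python
import numpy as np

def spatial_degrade(cube, f):
    """Low-resolution hyperspectral image: average f x f spatial neighbourhoods."""
    h, w, bands = cube.shape
    c = cube[: h - h % f, : w - w % f]
    return c.reshape(h // f, f, w // f, f, bands).mean(axis=(1, 3))

def spectral_degrade(cube, srf):
    """High-resolution multispectral image: mix bands with a spectral response
    matrix srf of shape (n_msi_bands, n_hsi_bands), rows summing to 1."""
    return cube @ srf.T

cube = np.random.default_rng(0).random((60, 60, 100))
srf = np.ones((4, 100)) / 100            # placeholder flat response, 100 -> 4 bands
lr_hsi, hr_msi = spatial_degrade(cube, 4), spectral_degrade(cube, srf)
```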
Light-weight reference-based compression of FASTQ data.
Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan
2015-06-09
The exponential growth of next generation sequencing (NGS) data has posed big challenges to data storage, management and archiving. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to those not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm, namely LW-FQZip, to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which the redundant information is identified and eliminated independently. In particular, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to quickly map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with general-purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201. This is comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression and contributes to the state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
A hybrid video compression based on zerotree wavelet structure
International Nuclear Information System (INIS)
Kilic, Ilker; Yilmaz, Reyat
2009-01-01
A video compression algorithm comparable to the standard techniques at low bit rates is presented in this paper. Overlapping block motion compensation (OBMC) is combined with a discrete wavelet transform, which is followed by Lloyd-Max quantization and a zerotree wavelet (ZTW) structure. The novel feature of this coding scheme is the combination of hierarchical finite state vector quantization (HFSVQ) with the ZTW to encode the quantized wavelet coefficients. It is seen that the proposed video encoder (ZTW-HFSVQ) performs better than MPEG-4 and Zerotree Entropy Coding (ZTE). (author)
Compressed Sensing-Based Direct Conversion Receiver
DEFF Research Database (Denmark)
Pierzchlewski, Jacek; Arildsen, Thomas; Larsen, Torben
2012-01-01
Due to the continuously increasing computational power of modern data receivers it is possible to move more and more processing from the analog to the digital domain. This paper presents a compressed sensing approach to relaxing the analog filtering requirements prior to the ADCs in a direct......-converted radio signals. As shown in an experiment presented in the article, when the proposed method is used, it is possible to relax the requirements for the quadrature down-converter filters. A random sampling device and an additional digital signal processing module is the price to pay for these relaxed...
Signal Recovery in Compressive Sensing via Multiple Sparsifying Bases
DEFF Research Database (Denmark)
Wijewardhana, U. L.; Belyaev, Evgeny; Codreanu, M.
2017-01-01
is sparse is the key assumption utilized by such algorithms. However, the basis in which the signal is the sparsest is unknown for many natural signals of interest. Instead there may exist multiple bases which lead to a compressible representation of the signal: e.g., an image is compressible in different...... wavelet transforms. We show that a significant performance improvement can be achieved by utilizing multiple estimates of the signal using sparsifying bases in the context of signal reconstruction from compressive samples. Further, we derive a customized interior-point method to jointly obtain multiple...... estimates of a 2-D signal (image) from compressive measurements utilizing multiple sparsifying bases as well as the fact that the images usually have a sparse gradient....
Wireless Sensor Networks Data Processing Summary Based on Compressive Sensing
Directory of Open Access Journals (Sweden)
Caiyun Huang
2014-07-01
Full Text Available As a newly proposed theory, compressive sensing (CS) is commonly used in signal processing. This paper investigates the applications of CS in wireless sensor networks (WSNs). First, the development and research status of CS technology and wireless sensor networks are described; then a detailed investigation of CS-based WSN research is conducted from the aspects of data fusion, signal acquisition, signal routing transmission, and signal reconstruction. At the end of the paper, we conclude our survey and point out possible future research directions.
Adaptive Binary Arithmetic Coder-Based Image Feature and Segmentation in the Compressed Domain
Directory of Open Access Journals (Sweden)
Hsi-Chin Hsin
2012-01-01
Full Text Available Image compression is necessary in various applications, especially for efficient transmission over a band-limited channel. It is thus desirable to be able to segment an image in the compressed domain directly, so that the burden of decompression can be avoided. Motivated by the adaptive binary arithmetic coder (MQ coder) of JPEG2000, we propose an efficient scheme to segment the feature vectors that are extracted from the code stream of an image. We modify the Compression-based Texture Merging (CTM) algorithm to alleviate the over-merging problem by making use of the rate-distortion information. Experimental results show that MQ coder-based image segmentation is preferable in terms of the boundary displacement error (BDE) measure. It has the advantage of saving computational cost, as the segmentation results even at low rates of bits per pixel (bpp) are satisfactory.
Directory of Open Access Journals (Sweden)
Chen Chun
2008-03-01
Full Text Available Abstract Background With the rapid emergence of RNA databases and newly identified non-coding RNAs, an efficient compression algorithm for RNA sequence and structural information is needed for the storage and analysis of such data. Although several algorithms for compressing DNA sequences have been proposed, none of them are suitable for the compression of RNA sequences together with their secondary structures. This kind of compression not only facilitates the maintenance of RNA data, but also supplies a novel way to measure the informational complexity of RNA structural data, raising the possibility of studying the relationship between the functional activities of RNA structures and their complexities, as well as various structural properties of RNA based on compression. Results RNACompress employs an efficient grammar-based model to compress RNA sequences and their secondary structures. The main goals of this algorithm are twofold: (1) present a robust and effective way to compress RNA structural data; (2) design a suitable model to represent RNA secondary structure as well as derive the informational complexity of the structural data based on compression. Our extensive tests have shown that RNACompress achieves a universally better compression ratio compared with other sequence-specific or common text-specific compression algorithms, such as GenCompress, WinRAR and gzip. Moreover, a test of the activities of distinct GTP-binding RNAs (aptamers) compared with their structural complexity shows that our defined informational complexity can be used to describe how complexity varies with activity. These results lead to an objective means of comparing the functional properties of heteropolymers from the information perspective. Conclusion A universal algorithm for the compression of RNA secondary structure as well as the evaluation of its informational complexity is discussed in this paper. We have developed RNACompress, as a useful tool
1991-12-01
... has the particular property of producing thumbtack-shaped ambiguity diagrams while having low sidelobes. This report discusses the different methods ... for Costas codes, the results obtained show that Golomb codes offer poorer possibilities than Welch codes.
DNABIT Compress – Genome compression algorithm
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-01
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences of large genomes. Significantly better compression results show that “DNABIT Compress” is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique BIT CODE) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
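For contrast, the fixed 2-bit-per-base packing that any DNA-specific scheme must beat is easy to sketch (DNABIT Compress itself assigns variable-length bit codes to repeat and non-repeat segments, which is not reproduced here):

```python
def dna_pack(seq: str) -> bytes:
    """Pack A/C/G/T into 2 bits per base (4 bases per byte)."""
    code = {"A": 0, "C": 1, "G": 2, "T": 3}
    out, acc, nbits = bytearray(), 0, 0
    for base in seq:
        acc = (acc << 2) | code[base]
        nbits += 2
        if nbits == 8:
            out.append(acc)
            acc, nbits = 0, 0
    if nbits:
        out.append(acc << (8 - nbits))    # pad the final byte
    return bytes(out)

print(len(dna_pack("ACGTACGTACGT")))      # 12 bases -> 3 bytes (2 bits/base)
```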
QC-LDPC code-based cryptography
Baldi, Marco
2014-01-01
This book describes the fundamentals of cryptographic primitives based on quasi-cyclic low-density parity-check (QC-LDPC) codes, with a special focus on the use of these codes in public-key cryptosystems derived from the McEliece and Niederreiter schemes. In the first part of the book, the main characteristics of QC-LDPC codes are reviewed, and several techniques for their design are presented, while tools for assessing the error correction performance of these codes are also described. Some families of QC-LDPC codes that are best suited for use in cryptography are also presented. The second part of the book focuses on the McEliece and Niederreiter cryptosystems, both in their original forms and in some subsequent variants. The applicability of QC-LDPC codes in these frameworks is investigated by means of theoretical analyses and numerical tools, in order to assess their benefits and drawbacks in terms of system efficiency and security. Several examples of QC-LDPC code-based public key cryptosystems are prese...
Okasha , Nader M
2017-01-01
International audience; Concrete is recognized as the second most consumed product in our modern life after water. The variability in concrete properties is inevitable. The concrete mix is designed for a compressive strength that is different from, typically higher than, the value specified by the structural designer. Ways to calculate the compressive strength to be used in the mix design are provided in building and structural codes. These ways are all based on criteria related purely and on...
USING H.264/AVC-INTRA FOR DCT BASED SEGMENTATION DRIVEN COMPOUND IMAGE COMPRESSION
Directory of Open Access Journals (Sweden)
S. Ebenezer Juliet
2011-08-01
Full Text Available This paper presents a one-pass block classification algorithm for efficient coding of compound images, which consist of multimedia elements like text, graphics and natural images. The objective is to minimize the loss of visual quality of text during compression by separating text information, which needs higher spatial resolution than pictures and background. The algorithm segments computer screen images into text/graphics and picture/background classes based on the DCT energy in each 4x4 block, and then compresses both text/graphics pixels and picture/background blocks by H.264/AVC with a variable quantization parameter. Experimental results show that the single H.264/AVC-INTRA coder with variable quantization outperforms single coders such as JPEG and JPEG-2000 for compound images. The proposed method also significantly improves the PSNR value over standard JPEG and JPEG-2000 while keeping competitive compression ratios.
Single exposure optically compressed imaging and visualization using random aperture coding
Energy Technology Data Exchange (ETDEWEB)
Stern, A [Electro Optical Unit, Ben Gurion University of the Negev, Beer-Sheva 84105 (Israel); Rivenson, Yair [Department of Electrical and Computer Engineering, Ben Gurion University of the Negev, Beer-Sheva 84105 (Israel); Javidi, Bahram [Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut 06269-1157 (United States)], E-mail: stern@bgu.ac.il
2008-11-01
The common approach in digital imaging follows the sample-then-compress framework. According to this approach, in the first step as many pixels as possible are captured, and in the second step the captured image is compressed by digital means. The recently introduced theory of compressed sensing provides the mathematical foundation necessary to combine these two steps in a single one, that is, to compress the information optically before it is recorded. In this paper we overview and extend an optical implementation of compressed sensing theory that we have recently proposed. With this new imaging approach the compression is accomplished inherently in the optical acquisition step. The primary feature of this imaging approach is a randomly encoded aperture realized by means of a random phase screen. The randomly encoded aperture implements a random projection of the object field in the image plane. Using a single exposure, a randomly encoded image is captured, which can be decoded by a proper decoding algorithm.
Coding Transparency in Object-Based Video
DEFF Research Database (Denmark)
Aghito, Shankar Manuel; Forchhammer, Søren
2006-01-01
A novel algorithm for coding gray level alpha planes in object-based video is presented. The scheme is based on segmentation in multiple layers. Different coders are specifically designed for each layer. In order to reduce the bit rate, cross-layer redundancies as well as temporal correlation are...
3D Scan-Based Wavelet Transform and Quality Control for Video Coding
Directory of Open Access Journals (Sweden)
Parisot Christophe
2003-01-01
Full Text Available Wavelet coding has been shown to achieve better compression than DCT coding and moreover allows scalability. The 2D DWT can easily be extended to 3D and thus applied to video coding. However, 3D subband coding of video suffers from two drawbacks. The first is the amount of memory required for coding large 3D blocks; the second is the lack of temporal quality due to temporal splitting of the sequence. In fact, 3D block-based video coders produce jerks, which appear at blocks' temporal borders during video playback. In this paper, we propose a new temporal scan-based wavelet transform method for video coding, combining the advantages of wavelet coding (performance, scalability) with acceptably reduced memory requirements, no additional CPU complexity, and no jerks. We also propose an efficient quality allocation procedure to ensure a constant quality over time.
Block-based wavelet transform coding of mammograms with region-adaptive quantization
Moon, Nam Su; Song, Jun S.; Kwon, Musik; Kim, JongHyo; Lee, ChoongWoong
1998-06-01
Combining segmentation with a lossy compression scheme is an efficient way to achieve both a high compression ratio and information preservation. Microcalcification in mammograms is one of the most significant signs of an early stage of breast cancer. Therefore, in coding, the detection and segmentation of microcalcifications enable us to preserve them well by allocating more bits to them than to other regions. Segmentation of microcalcifications is performed both in the spatial domain and in the wavelet transform domain. A peak-error-controllable quantization step, which is designed off-line, is suitable for medical image compression. For region-adaptive quantization, block-based wavelet transform coding is adopted, and different peak-error-constrained quantizers are applied to blocks according to the segmentation result. In view of the preservation of microcalcifications, the proposed coding scheme shows better performance than JPEG.
Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding
Directory of Open Access Journals (Sweden)
Lerouge Sam; Rik Van de Walle
2007-01-01
Full Text Available Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions-of-interest are received by an end-user device before the surrounding area and preferably in higher quality. In this paper, novel algorithms are presented making it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region-of-interest is translated, thus following the objects within. Furthermore, the proposed algorithms allow adequate resizing of the region-of-interest. By using the available information from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis is given for the algorithms, proving their low complexity and usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within the MPEG-4 fine-granularity scalability codec. Different tests on different video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision up to 96.4%.
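The core translation step, shifting the region-of-interest by the mean motion vector of the macroblocks it covers, can be sketched as follows (toy motion field; the paper's resizing and vector filtering are not shown):

```python
import numpy as np

def track_roi(roi, mv_field, block=16):
    """Shift a region-of-interest by the mean motion vector of the blocks it covers.
    roi = (x, y, w, h) in pixels; mv_field[j, i] = (dx, dy) per macroblock."""
    x, y, w, h = roi
    j0, j1 = y // block, (y + h - 1) // block + 1
    i0, i1 = x // block, (x + w - 1) // block + 1
    dx, dy = mv_field[j0:j1, i0:i1].reshape(-1, 2).mean(axis=0)
    return (int(round(x + dx)), int(round(y + dy)), w, h)

mv = np.zeros((9, 11, 2)); mv[...] = (3, -1)   # toy field: uniform motion
print(track_roi((32, 48, 40, 32), mv))          # -> (35, 47, 40, 32)
```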
CCAN and TCAN - 1 1/2-D compressible-flow and time-dependent codes for conductor analysis
International Nuclear Information System (INIS)
Gierszewski, P.J.; Wan, A.S.; Yang, T.F.
1983-01-01
This report documents the computer programs CCAN (steady-state Compressible flow Conductor ANalysis) and TCAN (Time-dependent incompressible-flow Conductor ANalysis). These codes calculate temperature, pressure, power and other engineering quantities along the length of an actively-cooled electrical conductor. Present versions contain detailed property information for copper and aluminum conductors; and gaseous helium, liquid nitrogen and water coolants. CCAN and TCAN are available on the NMFECC CDC 7600
Optimal interference code based on machine learning
Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua
2016-10-01
In this paper, we analyze the characteristics of pseudo-random codes, taking the m-sequence as a case study. Drawing on coding theory, we introduce jamming methods and simulate the interference effect and a probability model in MATLAB. From the length of time the adversary spends decoding, we derive an optimal formula and optimal coefficients based on machine learning, and thus obtain a new optimal interference code. First, in the recognition phase, this study judges the effect of interference by simulating the decoding time of the laser seeker. Then, laser active deception jamming is used to simulate the interference process in the tracking phase. To improve the performance of the interference, this paper simulates the model in MATLAB and finds the least number of pulse intervals that must be received, from which the precise interval number of the laser pointer for m-sequence encoding can be concluded. To find the shortest space, we choose the greatest-common-divisor method. Then, combining this with the coding regularity found before, we restore the pulse intervals of the received pseudo-random code. Finally, we can control the time period of laser interference, obtain the optimal interference code, and increase the probability of interference.
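For reference, the m-sequence used as the case study can be generated with a simple linear feedback shift register (a standard construction; the machine-learning optimization itself is not reproduced):

```python
def m_sequence(taps, state, nbits, length):
    """Fibonacci LFSR: output the LSB, feed the parity of the tapped bits into
    the MSB. taps=0b00101 with nbits=5 realizes x^5 + x^2 + 1 (primitive, so
    the output is a maximal-length m-sequence of period 2^5 - 1 = 31)."""
    out = []
    for _ in range(length):
        out.append(state & 1)
        fb = bin(state & taps).count("1") & 1   # parity of the tapped bits
        state = (state >> 1) | (fb << (nbits - 1))
    return out

seq = m_sequence(taps=0b00101, state=1, nbits=5, length=62)
print(sum(seq[:31]))   # an m-sequence of period 31 contains exactly 16 ones
```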
Harnessing Disorder in Compression Based Nanofabrication
Engel, Clifford John
The future of nanotechnologies depends on the successful development of versatile, low-cost techniques for patterning micro- and nanoarchitectures. While most approaches to nanofabrication have focused primarily on making periodic structures at ever smaller length scales, with an ultimate goal of massively scaling their production, I have focused on introducing control into relatively disordered nanofabrication systems. Well-ordered patterns are increasingly unnecessary for a growing range of applications, from anti-biofouling coatings to light trapping to omniphobic surfaces. The ability to manipulate disorder, at will and over multiple length scales, starting with the nanoscale, can open new prospects for textured substrates and unconventional applications. Taking advantage of features previously considered defects, I have been able to develop nanofabrication techniques with potential for massive scalability and incorporation into a wide range of applications. This thesis first describes the manipulation of the non-Newtonian properties of liquid Ga and Ga alloys to confine the metal and metal alloys in gratings with sub-wavelength periodicities. Through a solid-to-liquid phase change, I was able to access the superior plasmonic properties of liquid Ga for the generation of surface plasmon polaritons (SPPs). The switching contrast between solid and liquid Ga confined in the nanogratings allowed reversible manipulation of SPP properties through heating and cooling around the relatively low melting temperature of Ga (29.8 °C). The remaining chapters focus on the development and characterization of an all-polymer wrinkle material system. Wrinkles, spontaneous disordered features produced in response to compressive force, are ideal for a growing number of applications where fine feature control is no longer the main motivation. However, the mechanical limitations of many wrinkle systems have restricted the potential applications of wrinkled surfaces.
L. Anet Neto; P. Chanclou; Z. Tayq; B. C. Zabada; F. Saliou; G. Simon
2016-01-01
We experimentally assess compression with scalar and vector quantization for fixed-mobile convergent networks. We show that four-dimensional vector quantization allows 73% compression compliant with 3GPP EVM recommendations for transmissions over 25 km SSMF with 1:16 split ratio.
Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.
Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen
2014-02-01
The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., the relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP), which uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP), which predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency by using the higher-quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting the background pixels. Experimental results show that BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, with only slightly higher encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
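A minimal sketch of the two ingredients, a background model and the background-difference domain (a running average stands in for the paper's background modeling, which is more elaborate):

```python
import numpy as np

def update_background(bg, frame, alpha=0.02):
    """Running-average background model: static pixels converge to the background.
    (A simple stand-in for modeling the background from original input frames.)"""
    return (1.0 - alpha) * bg + alpha * frame

def background_difference(frame, bg):
    """Background difference prediction (BDP): code the residual against the
    model, which is small wherever a block is mostly background."""
    return frame.astype(np.int16) - bg.astype(np.int16)
```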
A new DWT/MC/DPCM video compression framework based on EBCOT
Mei, L. M.; Wu, H. R.; Tan, D. M.
2005-07-01
A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although DCT/MC/DPCM is the mainstream framework for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed for more than a decade in the literature and its investigation is still ongoing. The contribution of this work is twofold. Firstly, Embedded Block Coding with Optimal Truncation (EBCOT) is used here as the compression engine for both intra- and inter-frame coding, which provides a good compression ratio and an embedded rate-distortion (R-D) optimization mechanism. This is an extension of the EBCOT application from still images to videos. Secondly, this framework offers a good interface for a Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), where the Mean Squared Error (MSE) can easily be replaced with the PDM in the R-D optimization. Some preliminary results are reported here and compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that under the specified conditions the proposed coder outperforms the benchmarks in terms of rate vs. distortion.
Wavelet based multicarrier code division multiple access ...
African Journals Online (AJOL)
This paper presents the study on Wavelet transform based Multicarrier Code Division Multiple Access (MC-CDMA) system for a downlink wireless channel. The performance of the system is studied for Additive White Gaussian Noise Channel (AWGN) and slowly varying multipath channels. The bit error rate (BER) versus ...
Image content authentication based on channel coding
Zhang, Fan; Xu, Lei
2008-03-01
Content authentication determines whether an image has been tampered with and, if necessary, locates malicious alterations made to the image. Authentication of a still image or a video is motivated by the recipient's interest, and its principle is that a receiver must be able to identify the source of the document reliably. Several techniques and concepts based on data hiding or steganography have been designed as a means for image authentication. This paper presents a color image authentication algorithm based on convolutional coding. The high bits of the color digital image are coded by convolutional codes for tamper detection and localization, and the authentication messages are hidden in the low bits of the image to keep the authentication invisible. All communication channels are subject to errors introduced by additive Gaussian noise in their environment; such perturbations cannot be eliminated, but their effect can be minimized by using Forward Error Correction (FEC) in the transmitted data stream together with decoders in the receiving system that detect and correct bit errors. In the proposed algorithm, the message of each pixel is convolutionally encoded; after parity check and block interleaving, the redundant bits are embedded in the image offset. Tampering can thus be detected and restored without accessing the original image.
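The pairing of convolutional protection for high bits with LSB hiding can be sketched as below. The rate-1/2 (7,5) generator pair, the 4-MSB payload and the one-parity-bit-per-pixel layout are illustrative assumptions, not the paper's exact parameters.

```python
# Hedged sketch: protect the high bits of each pixel with a rate-1/2
# convolutional code (generators 7,5 octal) and hide a truncated parity
# stream in the pixel LSBs. Bit layout is illustrative.
import numpy as np

def conv_encode(bits, g1=0b111, g2=0b101):
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111      # 3-bit shift register
        out.append(bin(state & g1).count('1') & 1)
        out.append(bin(state & g2).count('1') & 1)
    return out

def embed(img):
    """Encode the 4 MSBs of every pixel and write the (truncated) parity
    stream into the LSBs, keeping the authentication data invisible."""
    flat = img.flatten()
    msbs = [(int(p) >> k) & 1 for p in flat for k in (7, 6, 5, 4)]
    parity = conv_encode(msbs)[:flat.size]      # one parity bit per pixel
    stego = (flat & 0xFE) | np.array(parity, dtype=img.dtype)
    return stego.reshape(img.shape)
```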
A design approach for systems based on magnetic pulse compression
International Nuclear Information System (INIS)
Praveen Kumar, D. Durga; Mitra, S.; Senthil, K.; Sharma, D. K.; Rajan, Rehim N.; Sharma, Archana; Nagesh, K. V.; Chakravarthy, D. P.
2008-01-01
A design approach giving the optimum number of stages in a magnetic pulse compression circuit and gain per stage is given. The limitation on the maximum gain per stage is discussed. The total system volume minimization is done by considering the energy storage capacitor volume and magnetic core volume at each stage. At the end of this paper, the design of a magnetic pulse compression based linear induction accelerator of 200 kV, 5 kA, and 100 ns with a repetition rate of 100 Hz is discussed with its experimental results
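The staging trade-off can be illustrated numerically: for a fixed total gain G, an n-stage chain needs a per-stage gain of G**(1/n), and the optimum n balances capacitor volume against core volume. The toy volume model below is an assumption for illustration only; the real design uses the component models from the paper.

```python
# Hedged sketch: scan the stage count n for a required total compression
# gain G and pick the n minimizing an assumed (toy) capacitor-plus-core
# volume model. All unit volumes are illustrative.
G = 1000.0                      # total pulse-compression gain required

def system_volume(n, cap_unit=1.0, core_unit=0.5):
    g = G ** (1.0 / n)          # equal gain per stage
    # toy model: fixed capacitor volume per stage, core volume grows with g
    return sum(cap_unit + core_unit * g for _ in range(n))

best = min(range(1, 11), key=system_volume)
print(best, G ** (1.0 / best))  # optimum stage count, per-stage gain
```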
Disk-based compression of data from genome sequencing.
Grabowski, Szymon; Deorowicz, Sebastian; Roguski, Łukasz
2015-05-01
High-coverage sequencing data have significant, yet hard to exploit, redundancy. Most FASTQ compressors cannot efficiently compress the DNA stream of large datasets, since the redundancy between overlapping reads cannot be easily captured in the (relatively small) main memory. More interesting solutions to this problem are disk-based; the better of these, from Cox et al. (2012), is based on the Burrows-Wheeler transform (BWT) and achieves 0.518 bits per base for a 134.0 Gbp human genome sequencing collection with almost 45-fold coverage. We propose overlapping reads compression with minimizers, a compression algorithm dedicated to sequencing reads (DNA only). Our method makes use of the conceptually simple and easily parallelizable idea of minimizers to obtain a compression ratio of 0.317 bits per base, allowing the 134.0 Gbp dataset to fit into only 5.31 GB of space. The software is available at http://sun.aei.polsl.pl/orcom under a free license (contact: sebastian.deorowicz@polsl.pl).
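The minimizer idea can be sketched in a few lines: reads sharing a (k, w) minimizer land in the same bucket, so overlapping reads become locally redundant and easy to compress together. The k-mer length, window and binning rule below are illustrative simplifications, not the published algorithm.

```python
# Hedged sketch of minimizer-based read binning; parameters illustrative.
from collections import defaultdict

def minimizer(read, k=8, w=16):
    """Lexicographically smallest k-mer among the first w k-mers (a common
    simplification; sliding-window variants exist)."""
    kmers = [read[i:i+k] for i in range(min(w, len(read) - k + 1))]
    return min(kmers)

def bin_reads(reads, k=8, w=16):
    bins = defaultdict(list)
    for r in reads:
        bins[minimizer(r, k, w)].append(r)
    return bins   # each bin is then compressed as a unit

print(list(bin_reads(["ACGTACGTACGT", "CGTACGTACGTA"]).keys()))
```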
Hyperspectral image compressing using wavelet-based method
Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng
2017-10-01
Hyperspectral imaging sensors can acquire images in hundreds of contiguous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which strains transmission, processing, and storage resources for both airborne and spaceborne imaging. Because of the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years, and compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually yields a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands that contain most of the information of the acquired hyperspectral data cube. The proposed method mainly consists of three steps. First, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the hyper correlation matrix of the hyperspectral images between different bands. Then a wavelet-based algorithm is applied to each subspace. Finally, PCA is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
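The correlation-driven grouping step can be sketched as follows; the correlation threshold and the greedy adjacent-band split rule are assumptions, not the paper's exact decomposition.

```python
# Hedged sketch: compute the inter-band correlation matrix of a
# hyperspectral cube of shape (bands, H, W) and split the spectrum where
# adjacent-band correlation drops below a threshold.
import numpy as np

def band_subspaces(cube, threshold=0.95):
    b = cube.shape[0]
    flat = cube.reshape(b, -1).astype(float)
    corr = np.corrcoef(flat)                 # (b, b) correlation matrix
    groups, current = [], [0]
    for i in range(1, b):
        if corr[i, i - 1] >= threshold:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups   # each subspace is then wavelet-coded and PCA-reduced
```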
Measuring Modularity in Open Source Code Bases
Directory of Open Access Journals (Sweden)
Roberto Milev
2009-03-01
Modularity of an open source software code base has been associated with growth of the software development community, the incentives for voluntary code contribution, and a reduction in the number of users who take code without contributing back to the community. As a theoretical construct, modularity links OSS to other domains of research, including organization theory, the economics of industry structure, and new product development. However, measuring the modularity of an OSS design has proven difficult, especially for large and complex systems. In this article, we describe some preliminary results of recent research at Carleton University that examines the evolving modularity of large-scale software systems. We describe a measurement method and a new modularity metric for comparing code bases of different size, introduce an open source toolkit that implements this method and metric, and provide an analysis of the evolution of the Apache Tomcat application server as an illustrative example of the insights gained from this approach. Although these results are preliminary, they open the door to further cross-discipline research that quantitatively links the concerns of business managers, entrepreneurs, policy-makers, and open source software developers.
A compressed sensing based method with support refinement for impulse noise cancelation in DSL
Quadeer, Ahmed Abdul
2013-06-01
This paper presents a compressed sensing based method to suppress impulse noise in digital subscriber line (DSL) systems. The proposed algorithm exploits the sparse nature of the impulse noise and utilizes the carriers already available in all practical DSL systems for its estimation and cancelation. Specifically, compressed sensing is used for a coarse estimate of the impulse position, an a priori information based maximum a posteriori probability (MAP) metric for its refinement, followed by least squares (LS) or minimum mean square error (MMSE) estimation of the impulse amplitudes. Simulation results show that the proposed scheme achieves a higher rate than other known sparse estimation algorithms in the literature. The paper also demonstrates the superior performance of the proposed scheme compared to the ITU-T G992.3 standard, which utilizes RS coding against impulse noise in DSL signals.
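The two-step estimate described above (coarse sparse support, then least squares on that support) can be sketched as below. A plain OMP stands in for the paper's CS stage and the MAP refinement is omitted; here A is assumed to be the measurement matrix seen on the unused carriers.

```python
# Hedged sketch: coarse support via orthogonal matching pursuit, then LS
# amplitude estimation on the detected support.
import numpy as np

def omp_support(A, y, sparsity):
    residual, support = y.copy(), []
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    return support

def estimate_impulses(A, y, sparsity=5):
    support = omp_support(A, y, sparsity)
    amp, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # LS amplitudes
    x = np.zeros(A.shape[1], dtype=complex)
    x[np.array(support)] = amp
    return x
```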
An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks
Wang, Donghao; Wan, Jiangwen; Chen, Junying; Zhang, Qiang
2016-09-22
To adapt to sensed signals of enormous diversity and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning based compressive data gathering (ODL-CDG) algorithm is proposed. The dictionary is learned through a two-stage iterative procedure, alternating between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure, and the dictionary is also constrained to have a sparse structure. It is theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound on the necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary based data gathering methods.
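A very reduced sketch of the two-stage loop follows: a sparse coding step (a tiny OMP stands in for the paper's coder) alternating with a gradient-style dictionary update that penalizes self-coherence. Step size, penalty weight and sparsity level are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of alternating sparse coding / dictionary update with a
# self-coherence penalty on ||D^T D - I||.
import numpy as np

def omp_code(D, y, sparsity):
    r, idx = y.copy(), []
    for _ in range(sparsity):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        x, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ x
    out = np.zeros(D.shape[1])
    out[idx] = x
    return out

def learn_dictionary(Y, n_atoms, iters=20, lam=0.1, lr=0.01, sparsity=3):
    D = np.random.randn(Y.shape[0], n_atoms)
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        X = np.stack([omp_code(D, y, sparsity) for y in Y.T], axis=1)
        G = D.T @ D - np.eye(n_atoms)            # self-coherence term
        D -= lr * ((D @ X - Y) @ X.T + lam * D @ G)
        D /= np.linalg.norm(D, axis=0)           # keep atoms unit-norm
    return D
```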
International Nuclear Information System (INIS)
Uwaba, Tomoyuki; Ito, Masahiro; Ukai, Shigeharu
2004-01-01
To analyze wire-wrapped fast breeder reactor fuel pin bundle deformation under bundle/duct interaction conditions, the Japan Nuclear Cycle Development Institute has developed the BAMBOO computer code. This code uses three-dimensional beam elements to calculate fuel pin bowing and cladding oval distortion as the primary deformation mechanisms in a fuel pin bundle. Pin dispersion, i.e., the disarrangement of pins in a bundle that can occur during irradiation, was modeled in this code to evaluate its effect on bundle deformation. By applying the contact analysis method commonly used in the finite element method, this model considers the contact conditions at various axial positions as well as at the nodal points, and can analyze the irregular arrangement of fuel pins with the deviation of the wire configuration. The dispersion model was introduced in the BAMBOO code and verified using the results of an out-of-pile compression test of the bundle, in which the dispersion was caused by the deviation of the wire position, and the effect of the dispersion on bundle deformation was evaluated based on the analysis results of the code
Warped Discrete Cosine Transform-Based Low Bit-Rate Block Coding Using Image Downsampling
Directory of Open Access Journals (Sweden)
Ertürk Sarp
2007-01-01
This paper presents warped discrete cosine transform (WDCT) based low bit-rate block coding using image downsampling. While the WDCT aims to improve the performance of the conventional DCT by frequency warping, it has only been applicable to high bit-rate coding applications because of the overhead required to define the parameters of the warping filter. Recently, low bit-rate block coding based on image downsampling prior to block coding, followed by upsampling after the decoding process, was proposed to improve the compression performance of low bit-rate block coders. This paper demonstrates that a superior performance can be achieved if the WDCT is used in conjunction with image-downsampling-based block coding for low bit-rate applications.
Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction.
Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin
2016-01-01
High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems.
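The ROI/non-ROI quantization idea can be sketched as a per-CTU QP map; the 64x64 CTU grid, base QP and offsets below are illustrative assumptions, not the HM15.0 configuration used in the paper.

```python
# Hedged sketch: give diagnostic ROI blocks a lower QP (finer quantization)
# than non-ROI blocks; parameter values are illustrative.
import numpy as np

def ctu_qp_map(roi_mask, base_qp=32, roi_offset=-6, bg_offset=+4, ctu=64):
    h, w = roi_mask.shape
    rows, cols = (h + ctu - 1) // ctu, (w + ctu - 1) // ctu
    qp = np.full((rows, cols), base_qp, dtype=int)
    for r in range(rows):
        for c in range(cols):
            block = roi_mask[r*ctu:(r+1)*ctu, c*ctu:(c+1)*ctu]
            qp[r, c] = base_qp + (roi_offset if block.any() else bg_offset)
    return np.clip(qp, 0, 51)    # HEVC QP range
```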
A Complete Video Coding Chain Based on Multi-Dimensional Discrete Cosine Transform
Directory of Open Access Journals (Sweden)
T. Fryza
2010-09-01
The paper deals with a video compression method based on the multi-dimensional discrete cosine transform. In the text, the encoder and decoder architectures, including the definitions of all mathematical operations such as the forward and inverse 3-D DCT, quantization and thresholding, are presented. According to the particular number of currently processed pictures, new quantization tables and entropy code dictionaries are proposed in the paper. The practical properties of the 3-D DCT coding chain compared with modern video compression methods (such as H.264 and WebM) and the computing complexity are presented as well. It is shown that the best compression properties are achieved by the complex H.264 codec. On the other hand, the computing complexity, especially on the encoding side, is lower for the 3-D DCT method.
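The forward/inverse 3-D DCT stage can be sketched with SciPy's separable multi-dimensional DCT; the uniform quantizer below is a placeholder assumption for the paper's dedicated tables.

```python
# Hedged sketch: transform a group of pictures as one cube, quantize, invert.
import numpy as np
from scipy.fft import dctn, idctn

def code_cube(cube, q=16.0):
    coeffs = dctn(cube.astype(float), norm='ortho')   # forward 3-D DCT
    return np.round(coeffs / q)                       # toy uniform quantizer

def decode_cube(quantized, q=16.0):
    return idctn(quantized * q, norm='ortho')         # inverse 3-D DCT

gop = np.random.rand(8, 16, 16)          # 8 pictures of 16x16 (example)
rec = decode_cube(code_cube(gop))
print(float(np.abs(gop - rec).max()))    # reconstruction error bound by q/2
```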
Harmonic analysis in integrated energy system based on compressed sensing
International Nuclear Information System (INIS)
Yang, Ting; Pen, Haibo; Wang, Dan; Wang, Zhaoxia
2016-01-01
Highlights: • We propose a harmonic/inter-harmonic analysis scheme with compressed sensing theory. • The sparseness of harmonic signals in the electrical power system is proved. • The ratio formula of fundamental and harmonic components sparsity is presented. • The Spectral Projected Gradient-Fundamental Filter reconstruction algorithm is proposed. • SPG-FF enhances the precision of harmonic detection and signal reconstruction. - Abstract: The advent of Integrated Energy Systems enables various distributed energy resources to access the system through different power electronic devices, which has made the harmonic environment more complex. Low-complexity, high-precision harmonic detection and analysis methods are needed to improve power quality. To overcome the large data storage requirements and the high compression complexity of sampling under the Nyquist framework, this paper presents a harmonic analysis scheme based on compressed sensing theory. The proposed scheme performs compressive sampling, signal reconstruction and harmonic detection simultaneously. In the proposed scheme, the sparsity of the harmonic signals in the Discrete Fourier Transform (DFT) basis is first calculated numerically. This is followed by a proof that the necessary conditions for compressed sensing are satisfied. Binary sparse measurement is then leveraged to reduce the storage space in the sampling unit. In the recovery process, a novel reconstruction algorithm called the Spectral Projected Gradient with Fundamental Filter (SPG-FF) algorithm is proposed to enhance the reconstruction precision. An actual microgrid system is used as a simulation example. The experimental results show that the proposed scheme effectively enhances the precision of harmonic and inter-harmonic detection with low computing complexity, and has good
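As a quick numerical check of the sparsity premise the scheme relies on, a fundamental plus a few harmonics concentrates in a handful of DFT bins. The frequencies, amplitudes and threshold below are illustrative.

```python
# Hedged sketch: count significant DFT bins of a synthetic harmonic signal.
import numpy as np

fs, n = 6400, 1280                        # sampling rate (Hz), samples
t = np.arange(n) / fs
x = (np.sin(2*np.pi*50*t)                 # 50 Hz fundamental
     + 0.2*np.sin(2*np.pi*250*t)          # 5th harmonic
     + 0.1*np.sin(2*np.pi*350*t))         # 7th harmonic
spectrum = np.abs(np.fft.rfft(x)) / (n / 2)
significant = np.sum(spectrum > 0.01)     # bins above 1% of full scale
print(significant, "significant bins out of", spectrum.size)
```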
A joint multi-view plus depth image coding scheme based on 3D-warping
DEFF Research Database (Denmark)
Zamarin, Marco; Zanuttigh, Pietro; Milani, Simone
2011-01-01
Free viewpoint video applications and autostereoscopic displays require the transmission of multiple views of a scene together with depth maps. Current compression and transmission solutions just handle these two data streams as separate entities. However, depth maps contain key information on the scene structure that can be effectively exploited to improve the performance of multi-view coding schemes. In this paper we introduce a novel coding architecture that replaces the inter-view motion prediction operation with a 3D warping approach based on depth information to improve the coding ...
Mesh-based parallel code coupling interface
Energy Technology Data Exchange (ETDEWEB)
Wolf, K.; Steckel, B. (eds.) [GMD - Forschungszentrum Informationstechnik GmbH, St. Augustin (DE). Inst. fuer Algorithmen und Wissenschaftliches Rechnen (SCAI)
2001-04-01
MpCCI (mesh-based parallel code coupling interface) is an interface for multidisciplinary simulations. It provides industrial end-users as well as commercial code owners with the facility to combine different simulation tools in one environment, thereby creating new solutions for multidisciplinary problems and opening new application dimensions for existing simulation tools. This Book of Abstracts gives a short overview of ongoing activities in industry and research, all presented at the 2nd MpCCI User Forum in February 2001 at GMD Sankt Augustin. (orig.)
The development of fluid codes for the laser compression of plasma
International Nuclear Information System (INIS)
Nicholas, D.J.
1982-08-01
Notes are given on the construction and use of simulation codes in plasma physics, requiring only a limited background knowledge of numerical analysis and finite-difference techniques. The development of a 1-D Eulerian code to source form is followed as an example. (U.K.)
REMOTELY SENSED IMAGE COMPRESSION BASED ON WAVELET TRANSFORM
Directory of Open Access Journals (Sweden)
Heung K. Lee
1996-06-01
In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm on a KITSAT-1 image as well as LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by the peak signal-to-noise ratio (PSNR) and classification capability.
Edge-Based Image Compression with Homogeneous Diffusion
Mainberger, Markus; Weickert, Joachim
It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
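The decoding-side inpainting can be sketched as the steady state of homogeneous diffusion, i.e., iterating the discrete Laplace equation with the stored edge pixels held fixed. The Jacobi solver and periodic boundary handling below are simplifying assumptions (a real decoder would use a faster solver and proper boundaries).

```python
# Hedged sketch: recover unknown pixels by homogeneous diffusion inpainting.
import numpy as np

def laplace_inpaint(img, known_mask, iters=2000):
    u = img.astype(float).copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(known_mask, u, avg)   # relax only the unknown pixels
    return u
```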
A novel chaotic encryption scheme based on arithmetic coding
International Nuclear Information System (INIS)
Mi Bo; Liao Xiaofeng; Chen Yong
2008-01-01
In this paper, under the combination of arithmetic coding and logistic map, a novel chaotic encryption scheme is presented. The plaintexts are encrypted and compressed by using an arithmetic coder whose mapping intervals are changed irregularly according to a keystream derived from chaotic map and plaintext. Performance and security of the scheme are also studied experimentally and theoretically in detail
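The coupling described above can be illustrated conceptually: a logistic-map keystream perturbs the cumulative interval bounds an arithmetic coder would use. Real arithmetic-coding renormalization is omitted, and the seed, control parameter and perturbation size are illustrative assumptions.

```python
# Hedged sketch: logistic-map keystream irregularly shifting symbol intervals.
def logistic_stream(x0=0.3141, mu=3.99, n=10):
    x, out = x0, []
    for _ in range(n):
        x = mu * x * (1.0 - x)
        out.append(x)
    return out

def perturb_interval(low, high, key, eps=1e-4):
    """Shift the symbol interval by a key-dependent amount within (0, 1)."""
    shift = (key - 0.5) * eps
    return max(0.0, low + shift), min(1.0, high + shift)

low, high = 0.25, 0.50
for k in logistic_stream():
    low, high = perturb_interval(low, high, k)
print(low, high)
```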
Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret
2003-12-01
A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
Review of design codes of concrete encased steel short columns under axial compression
Directory of Open Access Journals (Sweden)
K.Z. Soliman
2013-08-01
In recent years, the use of concrete encased steel columns has increased significantly in medium-rise and high-rise buildings. The aim of the present investigation is to assess experimentally the current methods and codes for evaluating the ultimate load behavior of concrete encased steel short columns. The current state of design provisions for composite columns from the Egyptian codes ECP203-2007 and ECP-SC-LRFD-2012, as well as the American Institute of Steel Construction AISC-LRFD-2010, the American Concrete Institute ACI-318-2008, and the British Standard BS-5400-5, was reviewed. The axial capacity portion of both the encased steel section and the concrete section was also studied according to the previously mentioned codes. Ten concrete encased steel columns were investigated experimentally to study the effect of concrete confinement and different types of encased steel sections. The measured axial capacity of the ten tested composite columns was compared with the values calculated by the above-mentioned codes. It is concluded that non-negligible discrepancies exist between the codes and the experimental results, as the confinement effect was not considered in predicting both the strength and the ductility of the concrete. The confining effect was clearly influenced by the shape of the encased steel section: the tube-shaped steel section leads to better confinement than the SIB section. Among the codes considered, the ECP-SC-LRFD-2012 led to the most conservative results.
Mechanical properties of tannin-based rigid foams undergoing compression
Energy Technology Data Exchange (ETDEWEB)
Celzard, A., E-mail: Alain.Celzard@enstib.uhp-nancy.fr [Institut Jean Lamour - UMR CNRS 7198, CNRS - Nancy-Universite - UPV-Metz, Departement Chimie et Physique des Solides et des Surfaces, ENSTIB, 27 rue du Merle Blanc, BP 1041, 88051 Epinal cedex 9 (France); Zhao, W. [Institut Jean Lamour - UMR CNRS 7198, CNRS - Nancy-Universite - UPV-Metz, Departement Chimie et Physique des Solides et des Surfaces, ENSTIB, 27 rue du Merle Blanc, BP 1041, 88051 Epinal cedex 9 (France); Pizzi, A. [ENSTIB-LERMAB, Nancy-University, 27 rue du Merle Blanc, BP 1041, 88051 Epinal cedex 9 (France); Fierro, V. [Institut Jean Lamour - UMR CNRS 7198, CNRS - Nancy-Universite - UPV-Metz, Departement Chimie et Physique des Solides et des Surfaces, ENSTIB, 27 rue du Merle Blanc, BP 1041, 88051 Epinal cedex 9 (France)
2010-06-25
The mechanical properties of a new class of extremely lightweight tannin-based materials, namely organic foams and their carbonaceous counterparts are detailed. Scaling laws are shown to describe correctly the observed behaviour. Information about the mechanical characteristics of the elementary forces acting within these solids is derived. It is suggested that organic materials present a rather bending-dominated behaviour and are partly plastic. On the contrary, carbon foams obtained by pyrolysis of the former present a fracture-dominated behaviour and are purely brittle. These conclusions are supported by the differences in the exponent describing the change of Young's modulus as a function of relative density, while that describing compressive strength is unchanged. Features of the densification strain also support such conclusions. Carbon foams of very low density may absorb high energy when compressed, making them valuable materials for crash protection.
Binaural model-based dynamic-range compression.
Ernst, Stephan M A; Kortlang, Steffen; Grimm, Giso; Bisitz, Thomas; Kollmeier, Birger; Ewert, Stephan D
2018-01-26
Binaural cues such as interaural level differences (ILDs) are used to organise auditory perception and to segregate sound sources in complex acoustical environments. In bilaterally fitted hearing aids, dynamic-range compression operating independently at each ear potentially alters these ILDs, thus distorting binaural perception and sound source segregation. A binaurally-linked model-based fast-acting dynamic compression algorithm designed to approximate the normal-hearing basilar membrane (BM) input-output function in hearing-impaired listeners is suggested. A multi-center evaluation in comparison with an alternative binaural and two bilateral fittings was performed to assess the effect of binaural synchronisation on (a) speech intelligibility and (b) perceived quality in realistic conditions. 30 and 12 hearing impaired (HI) listeners were aided individually with the algorithms for both experimental parts, respectively. A small preference towards the proposed model-based algorithm in the direct quality comparison was found. However, no benefit of binaural-synchronisation regarding speech intelligibility was found, suggesting a dominant role of the better ear in all experimental conditions. The suggested binaural synchronisation of compression algorithms showed a limited effect on the tested outcome measures, however, linking could be situationally beneficial to preserve a natural binaural perception of the acoustical environment.
Highly parallel line-based image coding for many cores.
Peng, Xiulian; Xu, Jizheng; Zhou, You; Wu, Feng
2012-01-01
Computers are developing along with a new trend from the dual-core and quad-core processors to ones with tens or even hundreds of cores. Multimedia, as one of the most important applications in computers, has an urgent need to design parallel coding algorithms for compression. Taking intraframe/image coding as a start point, this paper proposes a pure line-by-line coding scheme (LBLC) to meet the need. In LBLC, an input image is processed line by line sequentially, and each line is divided into small fixed-length segments. The compression of all segments from prediction to entropy coding is completely independent and concurrent at many cores. Results on a general-purpose computer show that our scheme can get a 13.9 times speedup with 15 cores at the encoder and a 10.3 times speedup at the decoder. Ideally, such near-linear speeding relation with the number of cores can be kept for more than 100 cores. In addition to the high parallelism, the proposed scheme can perform comparatively or even better than the H.264 high profile above middle bit rates. At near-lossless coding, it outperforms H.264 more than 10 dB. At lossless coding, up to 14% bit-rate reduction is observed compared with H.264 lossless coding at the high 4:4:4 profile.
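The independence property that drives the speedup can be sketched directly: each fixed-length line segment is coded with no context from its neighbours, so segments map one-to-one onto cores. Here zlib stands in for the scheme's prediction and entropy coder, and the segment length is illustrative.

```python
# Hedged sketch: per-segment independent coding across a process pool.
import zlib
from multiprocessing import Pool

SEG = 64   # fixed segment length (illustrative)

def encode_segment(seg: bytes) -> bytes:
    return zlib.compress(seg)            # placeholder per-segment coder

def encode_line(line: bytes, workers: int = 4):
    segments = [line[i:i+SEG] for i in range(0, len(line), SEG)]
    with Pool(workers) as pool:
        return pool.map(encode_segment, segments)

if __name__ == "__main__":
    print(len(encode_line(bytes(range(256)) * 4)))
```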
Energy information data base: report number codes
Energy Technology Data Exchange (ETDEWEB)
None
1979-09-01
Each report processed by the US DOE Technical Information Center is identified by a unique report number consisting of a code plus a sequential number. In most cases, the code identifies the originating installation. In some cases, it identifies a specific program or a type of publication. Listed in this publication are all codes that have been used by DOE in cataloging reports. This compilation consists of two parts. Part I is an alphabetical listing of report codes identified with the issuing installations that have used the codes. Part II is an alphabetical listing of installations identified with codes each has used. (RWR)
Identification of Coupled Map Lattice Based on Compressed Sensing
Directory of Open Access Journals (Sweden)
Dong Xie
2016-01-01
A novel approach for the parameter identification of coupled map lattices (CML) based on compressed sensing is presented in this paper. We establish a meaningful connection between these two seemingly unrelated study topics and identify the weighted parameters using the relevant recovery algorithms in compressed sensing. Specifically, we first transform the parameter identification problem of a CML into the sparse recovery problem of an underdetermined linear system. In fact, compressed sensing provides a feasible method to solve an underdetermined linear system if the sensing matrix satisfies suitable conditions, such as the restricted isometry property (RIP) and mutual coherence. We then give a lower bound on the mutual coherence of the coefficient matrix generated by the observed values of the CML and also prove that it satisfies the RIP from a theoretical point of view. If the weighted vector of each element is sparse in the CML system, our proposed approach can recover all the weighted parameters using only about M samplings, which is far fewer than the number of lattice elements N. Another important and significant advantage is that our approach remains effective even if the observed data are contaminated with some types of noise. In the simulations, we mainly show the effects of the coupling parameter and noise on the recovery rate.
1980-08-01
Compression factor and error sensitivity, together with statistical data, have also been tabulated. This TIB is a companion document to NCS TIB’s 79-7...
Incorporating Code-Based Software in an Introductory Statistics Course
Doehler, Kirsten; Taylor, Laura
2015-01-01
This article is based on the experiences of two statistics professors who have taught students to write and effectively utilize code-based software in a college-level introductory statistics course. Advantages of using software and code-based software in this context are discussed. Suggestions are made on how to ease students into using code with…
A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node.
Luo, Kan; Cai, Zhipeng; Du, Keqin; Zou, Fumin; Zhang, Xiangyu; Li, Jianqing
2018-01-01
Energy efficiency is still the main obstacle to long-term real-time wireless ECG monitoring. In this paper, a digital compressed sensing (CS) based single-spot Bluetooth ECG node is proposed to deal with this challenge in wireless ECG applications. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node, which consists of an ultra-low-power analog front-end, a microcontroller, a Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node's characteristics are evidenced by experiments using ECG signals sampled by the proposed node during the daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially undistorted, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the radio energy consumption decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption.
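The on-node sensing step can be sketched as y = Phi @ x with a sparse binary matrix; recovery (BSBL in the paper) runs off-node and is not reproduced here. The matrix sizes and the two-ones-per-column density are illustrative assumptions.

```python
# Hedged sketch: sparse-binary-matrix compression of one ECG frame.
import numpy as np

def sparse_binary_matrix(m, n, ones_per_col=2, seed=0):
    rng = np.random.default_rng(seed)
    phi = np.zeros((m, n))
    for j in range(n):
        phi[rng.choice(m, ones_per_col, replace=False), j] = 1.0
    return phi

n, m = 512, 192                          # frame length, measurements
phi = sparse_binary_matrix(m, n)
x = np.sin(np.linspace(0, 8*np.pi, n))   # stand-in for one ECG frame
y = phi @ x                              # what the node would transmit
print(y.shape)
```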
Triboelectric-Based Transparent Secret Code.
Yuan, Zuqing; Du, Xinyu; Li, Nianwu; Yin, Yingying; Cao, Ran; Zhang, Xiuling; Zhao, Shuyu; Niu, Huidan; Jiang, Tao; Xu, Weihua; Wang, Zhong Lin; Li, Congju
2018-04-01
Private and security information for personal identification requires an encrypted tool to extend communication channels between human and machine through a convenient and secure method. Here, a triboelectric-based transparent secret code (TSC) that enables self-powered sensing and information identification simultaneously in a rapid process is reported. The transparent and hydrophobic TSC can conform to any cambered surface due to its high flexibility, which greatly extends the application scenarios. Independent of a power source, the TSC can induce obvious electric signals by surface contact alone. The TSC is velocity-dependent and capable of achieving a peak voltage of ≈4 V at a resistance load of 10 MΩ and a sliding speed of 0.1 m s-1 for a 2 mm × 20 mm rectangular stripe. The fabricated TSC can maintain its performance over about 5000 reciprocating rolling cycles. The applications of the TSC as a self-powered code device are demonstrated: the ordered signals can be recognized through the height of the electric peaks and further translated into specific information by the processing program. The designed TSC has great potential in personal identification, commodity circulation, valuables management, and security defense applications.
Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis
Rost, Martin Christopher
1988-01-01
Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable block-size transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed. The bit allocation algorithm is developed further than the algorithms currently used in the literature and can be used to achieve more accurate bit assignments. Some upper and lower bounds for the bit-allocation distortion-rate function are developed, and an obtainable distortion-rate function is derived for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
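For orientation, the classical high-resolution allocation rule that this line of work refines assigns b_i = b_avg + 0.5*log2(sigma_i^2 / GM), with GM the geometric mean of the coefficient variances. The sketch below shows the textbook rule, not the dissertation's algorithm, which additionally handles non-negativity and integer constraints.

```python
# Hedged sketch: classical optimal bit allocation across transform bands.
import numpy as np

def bit_allocation(variances, avg_bits):
    v = np.asarray(variances, dtype=float)
    gm = np.exp(np.mean(np.log(v)))            # geometric mean of variances
    return avg_bits + 0.5 * np.log2(v / gm)

# Example: allocations sum to 4 * avg_bits = 8 bits in total.
print(bit_allocation([16.0, 4.0, 1.0, 0.25], avg_bits=2.0))
```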
Defining the best parallelization strategy for a diphasic compressible fluid mechanics code
International Nuclear Information System (INIS)
Berthou, Jean-Yves; Fayolle, Eric; Faucher, Eric; Scliffet, Laurent
2000-01-01
Nuclear plants use steam generator safety valves in order to regulate possible large pressure variations of fluids. In case of an incident these valves may be fed with pressurized liquid water (for instance at a pressure of 9 MPa and a temperature of 300degC). When a pressurized liquid is subjected to a strong pressure drop, it starts evaporating; this phenomenon is called flashing. Z. Bilicki and co-authors proposed the homogeneous relaxation model (HRM) to compute critical flashing water flows. Its computation in the case of non-stationary one-dimensional flashing flows has been carried out with the development of a dedicated time-dependent Finite Volume scheme based on a simplified version of the Godunov approach. The Electricite De France Research and Development division has developed a one-dimensional implementation of the HRM model: ECOSS, an 11000-line FORTRAN 90 code. Applied to a shock tube test case with a 20000-element one-dimensional mesh, the simulation of the physical phenomenon during 2.5 seconds requires at least 100 days of computation on a SUN Sparc-Ultra60. This execution time justifies the parallelization of ECOSS. Furthermore, we plan a modeling on 2D meshes for the next few years. Knowing that multiplying the mesh dimension by a factor 10 multiplies the execution time by a factor 100, ECOSS would take years of computation with even small 2D meshes (1000 x 1000) on a conventional workstation. This paper describes the parallelization analysis we have conducted and presents the experimental results we have obtained applying different programming models (MPI, OpenMP, HPF) on various platforms (a Compaq Proliant 6000 with 4 processors, a Cray T3E-750 with 300 processors, an HP class V with 16 processors, an SGI Origin2000 with 32 processors, a cluster of PCs and a COMPAQ SC with 232 processors). These experimental results will be discussed according to the following criteria: efficiency, scalability, maintainability, development costs and portability. As a conclusion, we will present the ...
Compressive strength and hydrolytic stability of fly ash based geopolymers
Directory of Open Access Journals (Sweden)
Nikolić Irena
2013-01-01
The process of geopolymerization involves the reaction of solid aluminosilicate materials with a highly alkaline silicate solution, yielding an inorganic aluminosilicate polymer called a geopolymer, which may be successfully applied in civil engineering as a replacement for cement. In this paper we have investigated the influence of the synthesis parameters (solid-to-liquid ratio, NaOH concentration, and the Na2SiO3/NaOH ratio) on the mechanical properties and hydrolytic stability of fly ash based geopolymers in distilled water, sea water and simulated acid rain. The highest value of compressive strength was obtained using 10 mol dm-3 NaOH and a Na2SiO3/NaOH ratio of 1.5. Moreover, the results have shown that the mechanical properties of fly ash based geopolymers correlate with their hydrolytic stability: factors that increase the compressive strength also increase the hydrolytic stability. The best hydrolytic stability of fly ash based geopolymers was observed in sea water, while the lowest stability was recorded in simulated acid rain.
Astronomical Image Compression Techniques Based on ACC and KLT Coder
Directory of Open Access Journals (Sweden)
J. Schindler
2011-01-01
This paper deals with the compression of image data in astronomy applications. Astronomical images have typical specific properties: high grayscale bit depth, large size, noise occurrence and special processing algorithms. They belong to the class of scientific images, and their processing and compression are quite different from the classical approach of multimedia image processing. The database of images from BOOTES (Burst Observer and Optical Transient Exploring System) was chosen as the source of the test signal. BOOTES is a Czech-Spanish robotic telescope for observing AGN (active galactic nuclei) and searching for the optical transients of GRBs (gamma-ray bursts). This paper discusses an approach based on an analysis of the statistical properties of the image data. A comparison of two irrelevancy reduction methods is presented from a scientific (astrometric and photometric) point of view. The first method is based on a statistical approach, using the Karhunen-Loeve transform (KLT) with uniform quantization in the spectral domain. The second technique is derived from wavelet decomposition with adaptive selection of the prediction coefficients used. Finally, a comparison of three redundancy reduction methods is discussed: the multimedia format JPEG2000 and HCOMPRESS, designed especially for astronomical images, are compared with the new Astronomical Context Coder (ACC), which is based on adaptive median regression.
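The KLT stage can be sketched by estimating the covariance of vectorized image blocks and projecting onto its eigenvectors before uniform quantization. The 8x8 block size and quantization step are illustrative assumptions, not the coder's settings.

```python
# Hedged sketch: KLT basis from block statistics, then uniform quantization.
import numpy as np

def klt_basis(blocks):
    """blocks: (N, 64) array of vectorized 8x8 blocks."""
    centered = blocks - blocks.mean(axis=0)
    cov = centered.T @ centered / len(blocks)
    _, vecs = np.linalg.eigh(cov)        # eigenvectors, ascending order
    return vecs[:, ::-1]                 # strongest components first

def klt_forward(blocks, basis, q=8.0):
    return np.round((blocks @ basis) / q)    # uniform quantizer

rng = np.random.default_rng(1)
blocks = rng.normal(size=(1000, 64))     # stand-in for image blocks
coeffs = klt_forward(blocks, klt_basis(blocks))
print(coeffs.shape)
```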
Study and analysis of wavelet based image compression techniques
African Journals Online (AJOL)
Discrete Wavelet Transform (DWT) is a recently developed compression ... serve emerging areas of mobile multimedia and internet communication ... In global thresholding, the best trade-off between PSNR and compression is provided by ...
Protograph-Based Raptor-Like Codes
Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.
2014-01-01
Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible turbo (RCPT) codes did not outperform convolutional codes in the short-blocklength regime, because convolutional codes with a low number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, their strength does not scale with the blocklength for a fixed number of trellis states.
Optimal Reliability-Based Code Calibration
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Kroon, I. B.; Faber, Michael Havbro
1994-01-01
Calibration of partial safety factors is considered in general, including classes of structures where no code exists beforehand. The partial safety factors are determined such that the difference between the reliability for the different structures in the class considered and a target reliability level is minimized. Code calibration on a decision theoretical basis is also considered and it is shown how target reliability indices can be calibrated. Results from code calibration for rubble mound breakwater designs are shown.
ROI-based DICOM image compression for telemedicine
Indian Academy of Sciences (India)
... ground and reconstruct the image portions losslessly. The compressed image can ... If the image is compressed by 8:1 compression without any perceptual distortion, the ... [Figure 2: cross-sectional view of a medical image (statistical representation).] ... The Integer Wavelet Transform (IWT) is used to have lossless processing.
Quantum BCH Codes Based on Spectral Techniques
International Nuclear Information System (INIS)
Guo Ying; Zeng Guihua
2006-01-01
When the time variable in quantum signal processing is discrete, the Fourier transform exists on the vector space of n-tuples over the Galois field F2, which plays an important role in the investigation of quantum signals. By using Fourier transforms, the idea of quantum coding theory can be described in a setting that is much different from that seen thus far. Quantum BCH codes can be defined as codes whose quantum states have certain specified consecutive spectral components equal to zero, and the error-correcting ability is also described by the number of consecutive zeros. Moreover, the decoding of quantum codes can be described spectrally with more efficiency.
Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach
Danyali, Habibiollah; Mertins, Alfred
2011-01-01
In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS), and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image archiving and transmission applications.
Depth-based Multi-View 3D Video Coding
DEFF Research Database (Denmark)
Zamarin, Marco
...-view video. Depth maps are typically used to synthesize the desired output views, and the performance of view synthesis algorithms strongly depends on the accuracy of depth information. In this thesis, novel algorithms for efficient depth map compression in MVD scenarios are proposed, with particular focus on edge-preserving solutions. In a proposed scheme, texture-depth correlation is exploited to predict surface shapes in the depth signal. In this way depth coding performance can be improved in terms of both compression gain and edge preservation. Another solution proposes a new intra coding mode targeted ... techniques are used to extract dense motion information and generate improved candidate side information. Multiple candidates are merged employing multi-hypothesis strategies. Promising rate-distortion performance improvements compared with state-of-the-art Wyner-Ziv decoders are reported, both when texture ...
Geothermally Coupled Well-Based Compressed Air Energy Storage
Energy Technology Data Exchange (ETDEWEB)
Davidson, Casie L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bearden, Mark D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Horner, Jacob A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cabe, James E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Appriou, Delphine [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McGrail, B. Peter [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-12-20
This project assessed the technical and economic feasibility of implementing geothermally coupled well-based CAES for grid-scale energy storage. Based on an evaluation of design specifications for a range of casing grades common in U.S. oil and gas fields, a 5-MW CAES project could be supported by twenty to twenty-five 5,000-foot, 7-inch wells using lower-grade casing, and as few as eight such wells for higher-end casing grades. Using this information, along with data on geothermal resources, well density, and potential future markets for energy storage systems, The Geysers geothermal field was selected to parameterize a case study to evaluate the potential match between the proven geothermal resource present at The Geysers and the field’s existing well infrastructure. Based on calculated wellbore compressed air mass, the study shows that a single average geothermal production well could provide enough geothermal energy to support a 15.4-MW (gross) power generation facility using 34 to 35 geothermal wells repurposed for compressed air storage, resulting in a simplified levelized cost of electricity (sLCOE) estimated at 11.2 ¢/kWh (Table S.1). Accounting for the power loss to the geothermal power project associated with diverting geothermal resources for air heating results in a net 2-MW decrease in generation capacity, increasing the CAES project’s sLCOE by 1.8 ¢/kWh.
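The headline numbers admit a rough back-of-envelope check. The sketch below is ours, not the study's cost model: it assumes the CAES plant and the displaced geothermal generation run at the same capacity factor, and values the lost generation at the project's own sLCOE.

```python
# Back-of-envelope check on the sLCOE penalty from diverting geothermal heat.
# Assumptions (ours, not the report's): equal capacity factors, and lost
# geothermal output valued at the CAES project's own sLCOE.

gross_mw = 15.4          # CAES gross capacity (from the study)
lost_mw = 2.0            # geothermal capacity displaced by air heating
base_slcoe = 11.2        # cents/kWh, simplified LCOE of the CAES project

# With equal capacity factors the operating hours cancel, so the adder is
# just the value of the lost energy spread over the CAES plant's net output:
net_mw = gross_mw - lost_mw
adder = base_slcoe * lost_mw / net_mw   # cents/kWh

print(f"implied sLCOE adder: {adder:.1f} cents/kWh (report: 1.8)")
```

The simple estimate lands near the reported 1.8 ¢/kWh; the residual gap is plausibly the detail the report's full cost model carries that this one-liner does not.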
Compression-based aggregation model for medical web services.
Al-Shammary, Dhiah; Khalil, Ibrahim
2010-01-01
Many organizations such as hospitals have adopted Cloud Web services for their network services to avoid investing heavily in computing infrastructure. SOAP (Simple Object Access Protocol), an XML-based protocol, is the basic communication protocol of Cloud Web services. Web services often suffer congestion and bottlenecks as a result of the high network traffic caused by the large XML overhead, and the massive load on Cloud Web services in terms of the large volume of client requests compounds the same problem. In this paper, two XML-aware aggregation techniques based on compression concepts are proposed to aggregate medical Web messages and achieve higher message size reduction.
Non-binary unitary error bases and quantum codes
Energy Technology Data Exchange (ETDEWEB)
Knill, E.
1996-06-01
Error operator bases for systems of any dimension are defined, and natural generalizations of the bit-flip/sign-change error basis for qubits are given. These bases allow generalizing the construction of quantum codes based on eigenspaces of Abelian groups. As a consequence, quantum codes can be constructed from linear codes over Z_n for any n. The generalization of the punctured code construction leads to many codes which permit transversal (i.e., fault-tolerant) implementations of certain operations compatible with the error basis.
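For concreteness, the generalization of the bit-flip/sign-change basis that the abstract refers to can be written with the standard shift and clock operators; the notation below is ours, a textbook presentation rather than a quotation from the paper.

```latex
% Generalized bit-flip (shift) and sign-change (clock) operators on C^n;
% notation ours, with omega a primitive n-th root of unity.
\[
  X\lvert j\rangle = \lvert j+1 \bmod n\rangle, \qquad
  Z\lvert j\rangle = \omega^{j}\lvert j\rangle, \qquad
  \omega = e^{2\pi i/n},
\]
\[
  \mathcal{E} = \{\, X^{a}Z^{b} : a,b \in \mathbb{Z}_n \,\}.
\]
% The n^2 operators X^a Z^b form a unitary error basis; for n = 2 they
% reduce to the familiar Pauli basis {I, X, Z, XZ}.
```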
D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.
Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B
2018-01-01
Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of classical DSC by employing the decoding delay concept, which enables use of the maximum correlated portion of sensor samples during event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications with massive numbers of sensors, towards the realization of the Internet of Sensing Things (IoST).
Compressive sensing based wireless sensor for structural health monitoring
Bao, Yuequan; Zou, Zilong; Li, Hui
2014-03-01
Data loss is a common problem for monitoring systems based on wireless sensors. Reliable communication protocols, which enhance communication reliability by repetitively transmitting unreceived packets, are one approach to tackling the problem of data loss. An alternative approach allows data loss to some extent and seeks to recover the lost data from an algorithmic point of view. Compressive sensing (CS) provides such a data loss recovery technique. This technique can be embedded into smart wireless sensors and effectively increases wireless communication reliability without retransmitting the data. The basic idea of the CS-based approach is that, instead of transmitting the raw signal acquired by the sensor, a transformed signal, generated by projecting the raw signal onto a random matrix, is transmitted. Some data loss may occur during the transmission of this transformed signal. However, according to the theory of CS, the raw signal can be effectively reconstructed from the received incomplete transformed signal, provided the raw signal is compressible in some basis and the data loss ratio is low. This CS-based technique is implemented on the Imote2 smart sensor platform using the foundation of the Illinois Structural Health Monitoring Project (ISHMP) Service Tool-suite. To overcome the constraints of the limited onboard resources of wireless sensor nodes, a method called the random demodulator (RD) is employed to provide memory- and power-efficient construction of the random sampling matrix. The RD sampling matrix is adapted to accommodate data loss in wireless transmission and to meet the objectives of data recovery. The embedded program is tested in a series of sensing and communication experiments. Examples and a parametric study are presented to demonstrate the applicability of the embedded program as well as the efficacy of CS-based data loss recovery for real wireless SHM systems.
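The core recovery idea can be sketched in a few lines. The toy below is our illustration, not the paper's Imote2/RD implementation: data loss is modeled as keeping a random subset of samples of a signal that is sparse in the DCT domain, and the spectrum is recovered from the survivors with orthogonal matching pursuit.

```python
import numpy as np
from scipy.fftpack import idct

# Minimal sketch of CS-style data-loss recovery (toy example): a
# vibration-like signal sparse in the DCT domain is randomly subsampled,
# then its spectrum is recovered with Orthogonal Matching Pursuit.

def omp(A, y, k):
    """Recover a k-sparse coefficient vector s from y = A @ s."""
    residual, support, coef = y.copy(), [], None
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # re-fit on the chosen support and update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    s = np.zeros(A.shape[1])
    s[support] = coef
    return s

rng = np.random.default_rng(0)
n, k, m = 256, 3, 64                          # length, sparsity, samples kept
spec = np.zeros(n)
spec[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
Psi = idct(np.eye(n), axis=0, norm='ortho')   # DCT synthesis basis
x = Psi @ spec                                # time-domain signal

keep = np.sort(rng.choice(n, size=m, replace=False))  # surviving samples
s_hat = omp(Psi[keep, :], x[keep], k)                 # recover the spectrum
print("max reconstruction error:", np.abs(Psi @ s_hat - x).max())
```

With only a quarter of the samples surviving, the three active frequency bins are recovered essentially exactly, which is the effect the abstract describes.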
Verification-Based Interval-Passing Algorithm for Compressed Sensing
Wu, Xiaofu; Yang, Zhen
2013-01-01
We propose a verification-based Interval-Passing (IP) algorithm for the iterative reconstruction of nonnegative sparse signals using parity-check matrices of low-density parity-check (LDPC) codes as measurement matrices. The proposed algorithm can be considered an improved IP algorithm that further incorporates the mechanism of the verification algorithm. It is proved that the proposed algorithm always performs better than either the IP algorithm or the verification algorithm alone. Simulation resul…
Taiwo, Ambali; Alnassar, Ghusoon; Bakar, M. H. Abu; Khir, M. F. Abdul; Mahdi, Mohd Adzir; Mokhtar, M.
2018-05-01
A one-weight authentication code for multi-user quantum key distribution (QKD) is proposed. The code is developed for an Optical Code Division Multiplexing (OCDMA) based QKD network. A unique address assigned to each user, coupled with the diminishing probability of predicting the source of a qubit transmitted in the channel, offers an excellent security mechanism against any form of channel attack on an OCDMA-based QKD network. Flexibility in design, as well as the ease of modifying the number of users, are equally exceptional qualities presented by the code, in contrast to the Optical Orthogonal Codes (OOC) earlier implemented for the same purpose. The code was successfully applied to eight simultaneous users at an effective key rate of 32 bps over a 27 km transmission distance.
A Framework for Reverse Engineering Large C++ Code Bases
Telea, Alexandru; Byelas, Heorhiy; Voinea, Lucian
2009-01-01
When assessing the quality and maintainability of large C++ code bases, tools are needed for extracting several facts from the source code, such as architecture, structure, code smells, and quality metrics. Moreover, these facts should be presented in such a way that one can correlate them and …
State of art in FE-based fuel performance codes
International Nuclear Information System (INIS)
Kim, Hyo Chan; Yang, Yong Sik; Kim, Dae Ho; Bang, Je Geon; Kim, Sun Ki; Koo, Yang Hyun
2013-01-01
Fuel performance codes approximate the complex behavior of nuclear fuel using an axisymmetric, axially-stacked, one-dimensional radial representation to save computation cost. However, the need for improved modeling of PCMI and, particularly, the importance of multidimensional capability for accurate fuel performance simulation have been identified as safety margins decrease. The finite element (FE) method, a reliable and proven solution in the mechanical field, has been introduced into fuel performance codes for multidimensional analysis. The present state of the art in FE-based fuel performance simulation predominantly involves 2-D axisymmetric models and 3-D volumetric models. FRAPCON and FRAPTRAN contain 1.5-D and 2-D FE models to simulate PCMI and cladding ballooning. In 2-D simulation, the FALCON code, developed by EPRI, is a 2-D (R-Z and R-θ) fully thermo-mechanically coupled steady-state and transient FE-based fuel behavior code. The French codes TOUTATIS and ALCYONE are 3-D and are typically used to investigate localized behavior. Since 2008, the Idaho National Laboratory (INL) has been developing a multidimensional (2-D and 3-D) nuclear fuel performance code called BISON. In this paper, the current state of FE-based fuel performance codes and their models is presented. Based on an investigation into the codes, requirements and directions of development for a new FE-based fuel performance code are discussed, as is the state of the art as revealed by a comparison of the models in these codes. A new FE-based fuel performance code should include the typical pellet and cladding models that all the codes contain. In particular, specific pellet and cladding models such as gaseous swelling and high burnup structure (HBS) models should be developed to improve the accuracy of the code as well as to consider AC conditions. To reduce computation cost, approximated gap and optimized contact models should also be developed
Adaptive learning compressive tracking based on Markov location prediction
Zhou, Xingyu; Fu, Dongmei; Yang, Tao; Shi, Yanan
2017-03-01
Object tracking is an interdisciplinary research topic in image processing, pattern recognition, and computer vision, with theoretical and practical application value in video surveillance, virtual reality, and automatic navigation. Compressive tracking (CT) has many advantages, such as efficiency and accuracy. However, in the presence of object occlusion, abrupt motion and blur, similar objects, or scale changes, CT suffers from tracking drift. We propose Markov object location prediction to obtain the initial position of the object. CT is then used to locate the object accurately, and an adaptive updating strategy for the classifier parameters is given based on the confidence map. At the same time, scale features are extracted according to the object location, which makes it possible to deal with object scale variations effectively. Experimental results show that the proposed algorithm has better tracking accuracy and robustness than current advanced algorithms and achieves real-time performance.
Control volume based modelling of compressible flow in reciprocating machines
DEFF Research Database (Denmark)
Andersen, Stig Kildegård; Thomsen, Per Grove; Carlsen, Henrik
2004-01-01
An approach to modelling unsteady compressible flow that is primarily one dimensional is presented. The approach was developed for creating distributed models of machines with reciprocating pistons but it is not limited to this application. The approach is based on the integral form of the unsteady conservation laws for mass, energy, and momentum applied to a staggered mesh consisting of two overlapping strings of control volumes. Loss mechanisms can be included directly in the governing equations of models by including them as terms in the conservation laws. Heat transfer, flow friction, and multidimensional effects must be calculated using empirical correlations; correlations for steady state flow can be used as an approximation. A transformation that assumes ideal gas is presented for transforming equations for masses and energies in control volumes into the corresponding pressures and temperatures …
Jaggi, S.
1993-01-01
A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location in each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios, and the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.41 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image processing boards, to an 80386 PC-compatible computer. Modules were developed for the tasks of image compression and image analysis, along with supporting software to perform image processing for visual display and interpretation of the compressed/classified images.
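As a minimal sketch of the vector quantization step described above (toy random data standing in for the CAMS imagery; the 256-entry codebook is our arbitrary choice), each pixel site becomes a 7-dimensional vector and is replaced by the index of its nearest codeword:

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

# Sketch of multispectral vector quantization: each "vector" stacks the
# co-located pixels from all channels; a codebook is trained with k-means
# and every vector is replaced by the index of its nearest codeword.

rng = np.random.default_rng(1)
channels, h, w = 7, 64, 64
cube = rng.random((channels, h, w))

vectors = cube.reshape(channels, -1).T          # one 7-D vector per pixel site
codebook, _ = kmeans(vectors, k_or_guess=256)   # train a 256-entry codebook
codes, _ = vq(vectors, codebook)                # index of nearest codeword

reconstructed = codebook[codes].T.reshape(channels, h, w)
rms = np.sqrt(np.mean((reconstructed - cube) ** 2))
print(f"codebook size 256, RMS error {rms:.4f}")
# Larger codebooks lower the RMS error but raise the rate: log2(K) bits per
# pixel site instead of channels * bits-per-pixel.
```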
Sandia National Laboratories analysis code data base
Energy Technology Data Exchange (ETDEWEB)
Peterson, C.W.
1994-11-01
Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The Laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code "ownership" and release status, and references describing the physical models and numerical implementation.
Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J
2017-10-05
A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders (microcrystalline cellulose, mannitol, and anhydrous dicalcium phosphate) have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design. Copyright © 2017 Elsevier B.V. All rights reserved.
Edge-preserving Intra mode for efficient depth map coding based on H.264/AVC
DEFF Research Database (Denmark)
Zamarin, Marco; Forchhammer, Søren
2014-01-01
Depth-image-based-rendering (DIBR) algorithms for 3D video communication systems based on the “multi-view video plus depth” format are very sensitive to the accuracy of depth information. Specifically, edge regions in the depth data should be preserved in the coding/decoding process to ensure good view synthesis performance, which directly affects the overall system performance. This paper proposes a novel scheme for edge-aware Intra depth compression based on the H.264/AVC framework enabled on both Intra (I) and Inter (P) slices. The proposed scheme includes a new Intra mode specifically …
Directory of Open Access Journals (Sweden)
Joshua S. Ullom
2012-01-01
A method for improving the contrast-to-noise ratio (CNR) while maintaining the −6 dB axial resolution of ultrasonic B-mode images is proposed. The technique proposed, known as eREC-FC, enhances the recently developed REC-FC technique. REC-FC is a combination of the coded excitation technique known as resolution enhancement compression (REC) and the speckle-reduction technique frequency compounding (FC). In REC-FC, image CNR is improved but at the expense of a reduction in axial resolution. However, by compounding various REC-FC images made from various subband widths, the tradeoff between axial resolution and CNR enhancement can be extended. Further improvements in CNR can be obtained by applying postprocessing despeckling filters to the eREC-FC B-mode images. The despeckling filters evaluated were the following: median, Lee, homogeneous mask area, geometric, and speckle-reducing anisotropic diffusion (SRAD). Simulations and experimental measurements were conducted with a single-element transducer (f/2.66) having a center frequency of 2.25 MHz and a −3 dB bandwidth of 50%. In simulations and experiments, the eREC-FC technique resulted in the same axial resolution that would typically be observed with conventional excitation with a pulse. Moreover, increases in CNR of 348% were obtained in experiments when comparing eREC-FC with a Lee filter to conventional pulsing methods.
Learning-based compressed sensing for infrared image super resolution
Zhao, Yao; Sui, Xiubao; Chen, Qian; Wu, Shaochi
2016-05-01
This paper presents an infrared image super-resolution method based on compressed sensing (CS). First, the reconstruction model under the CS framework is established and a Toeplitz matrix is selected as the sensing matrix. Compared with traditional learning-based methods, the proposed method uses a set of sub-dictionaries instead of two coupled dictionaries to recover high-resolution (HR) images, and the Toeplitz sensing matrix makes the proposed method time-efficient. Second, all training samples are divided into several feature spaces by the proposed adaptive k-means classification method, which is more accurate than the standard k-means method. On the basis of this approach, a complex nonlinear mapping from the HR space to the low-resolution (LR) space can be converted into several compact linear mappings. Finally, the relationships between HR and LR image patches are obtained via the multiple sub-dictionaries, and HR infrared images are reconstructed from the input LR images and the multiple sub-dictionaries. The experimental results show that the proposed method is quantitatively and qualitatively more effective than other state-of-the-art methods.
The possibilities of compressed-sensing-based Kirchhoff prestack migration
Aldawood, Ali; Hoteit, Ibrahim; Alkhalifah, Tariq Ali
2014-01-01
An approximate subsurface reflectivity distribution of the earth is usually obtained through the migration process. However, conventional migration algorithms, including those based on the least-squares approach, yield structure descriptions that are slightly smeared and of low resolution caused by the common migration artifacts due to limited aperture, coarse sampling, band-limited source, and low subsurface illumination. To alleviate this problem, we use the fact that minimizing the L1-norm of a signal promotes its sparsity. Thus, we formulated the Kirchhoff migration problem as a compressed-sensing (CS) basis pursuit denoise problem to solve for highly focused migrated images compared with those obtained by standard and least-squares migration algorithms. The results of various subsurface reflectivity models revealed that solutions computed using the CS based migration provide a more accurate subsurface reflectivity location and amplitude. We applied the CS algorithm to image synthetic data from a fault model using dense and sparse acquisition geometries. Our results suggest that the proposed approach may still provide highly resolved images with a relatively small number of measurements. We also evaluated the robustness of the basis pursuit denoise algorithm in the presence of Gaussian random observational noise and in the case of imaging the recorded data with inaccurate migration velocities.
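In our notation (not reproduced from the paper), the basis pursuit denoise formulation referred to above reads, for migrated image m, recorded data d, modeling operator L, and noise level sigma:

```latex
% Basis pursuit denoise (BPDN) form of the migration problem; notation ours.
\[
  \hat{m} \;=\; \arg\min_{m} \; \lVert m \rVert_{1}
  \quad \text{subject to} \quad
  \lVert d - L m \rVert_{2} \le \sigma,
\]
% where sigma bounds the observational noise level; minimizing the L1 norm
% promotes sparse, highly focused reflectivity estimates.
```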
Context based Coding of Quantized Alpha Planes for Video Objects
DEFF Research Database (Denmark)
Aghito, Shankar Manuel; Forchhammer, Søren
2002-01-01
In object-based video, each frame is a composition of objects that are coded separately. The composition is performed through the alpha plane that represents the transparency of the object. We present an alternative to MPEG-4 for coding of alpha planes that considers their specific properties. … Comparisons in terms of rate and distortion are provided, showing that the proposed coding scheme for still alpha planes is better than the algorithms for I-frames used in MPEG-4.
Vibration-based monitoring and diagnostics using compressive sensing
Ganesan, Vaahini; Das, Tuhin; Rahnavard, Nazanin; Kauffman, Jeffrey L.
2017-04-01
Vibration data from mechanical systems carry important information that is useful for characterization and diagnosis. Standard approaches rely on continually streaming data at a fixed sampling frequency. For applications involving continuous monitoring, such as Structural Health Monitoring (SHM), such approaches result in high volume data and rely on sensors being powered for prolonged durations. Furthermore, for spatial resolution, structures are instrumented with a large array of sensors. This paper shows that both volume of data and number of sensors can be reduced significantly by applying Compressive Sensing (CS) in vibration monitoring applications. The reduction is achieved by using random sampling and capitalizing on the sparsity of vibration signals in the frequency domain. Preliminary experimental results validating CS-based frequency recovery are also provided. By exploiting the sparsity of mode shapes, CS can also enable efficient spatial reconstruction using fewer spatially distributed sensors. CS can thereby reduce the cost and power requirement of sensing as well as streamline data storage and processing in monitoring applications. In well-instrumented structures, CS can enable continued monitoring in case of sensor or computational failures.
Statistics-Based Compression of Global Wind Fields
Jeong, Jaehong; Castruccio, Stefano; Crippa, Paola; Genton, Marc G.
2017-01-01
Wind has the potential to make a significant contribution to future energy resources. Locating the sources of this renewable energy on a global scale is however extremely challenging, given the difficulty to store very large data sets generated by modern computer models. We propose a statistical model that aims at reproducing the data-generating mechanism of an ensemble of runs via a Stochastic Generator (SG) of global annual wind data. We introduce an evolutionary spectrum approach with spatially varying parameters based on large-scale geographical descriptors such as altitude to better account for different regimes across the Earth's orography. We consider a multi-step conditional likelihood approach to estimate the parameters that explicitly accounts for nonstationary features while also balancing memory storage and distributed computation. We apply the proposed model to more than 18 million points of yearly global wind speed. The proposed SG requires orders of magnitude less storage for generating surrogate ensemble members from wind than does creating additional wind fields from the climate model, even if an effective lossy data compression algorithm is applied to the simulation output.
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with vari...
Energy Technology Data Exchange (ETDEWEB)
Shih, Tzu-Ching [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, 40402, Taiwan (China); Chen, Jeon-Hor; Nie Ke; Lin Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying [Tu and Yuen Center for Functional Onco-Imaging and Radiological Sciences, University of California, Irvine, CA 92697 (United States); Liu Dongxu; Sun Lizhi, E-mail: shih@mail.cmu.edu.t [Department of Civil and Environmental Engineering, University of California, Irvine, CA 92697 (United States)
2010-07-21
This study presents a finite element-based computational model to simulate the three-dimensional deformation of a breast and fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and craniocaudal and mediolateral oblique compression, as used in mammography, was applied. The geometry of the whole breast and the segmented fibroglandular tissues within the breast were reconstructed using triangular meshes with the Avizo® 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the nonlinear elastic tissue deformation under compression, using the MSC.Marc® software package. The model was tested in four cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these four cases at a compression ratio of 60% was in the range of 5-7 cm, which is a typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at a compression ratio of 60% was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on magnetic resonance imaging (MRI), which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities (such as MRI, mammography, whole breast ultrasound and molecular imaging) that are performed using different body positions and under …
DEFF Research Database (Denmark)
Misaridis, Thanasis; Jensen, Jørgen Arendt
1999-01-01
This paper presents a coded excitation imaging system based on a predistorted FM excitation and a digital compression filter designed for medical ultrasonic applications, in order to preserve both axial resolution and contrast. In radars, optimal Chebyshev windows efficiently weight a nearly … be removed by weighting. We show that by using a predistorted chirp with amplitude or phase shaping for amplitude ripple reduction and a correlation filter that accounts for the transducer's natural frequency weighting, output sidelobe levels of -35 to -40 dB are directly obtained. When an optimized filter … as with pulse excitation (about 1.5 lambda), depending on the filter design criteria. The axial sidelobes are below -40 dB, which is the noise level of the measuring imaging system. The proposed excitation/compression scheme shows good overall performance and stability to the frequency shift due to attenuation …
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2018-06-01
To improve compression rates for the lossless compression of medical images, an efficient algorithm based on irregular segmentation and region-based prediction is proposed in this paper. Since the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation of medical images. Then, least-squares (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
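The LS-prediction idea can be illustrated compactly. The sketch below is our simplification, not the paper's per-region adaptive design: a single global predictor estimates each pixel from three causal neighbours, with weights fitted by least squares; the residuals are what an entropy coder would then see.

```python
import numpy as np

# Least-squares linear prediction of each pixel from its causal neighbours
# (west, north, north-west); one global predictor for the whole image.

def ls_predict(img):
    img = img.astype(np.float64)
    west  = img[1:, :-1].ravel()    # (i, j-1)
    north = img[:-1, 1:].ravel()    # (i-1, j)
    nw    = img[:-1, :-1].ravel()   # (i-1, j-1)
    target = img[1:, 1:].ravel()    # pixel being predicted
    A = np.column_stack([west, north, nw])
    w, *_ = np.linalg.lstsq(A, target, rcond=None)  # LS predictor weights
    residual = target - A @ w
    return w, residual

rng = np.random.default_rng(2)
# smooth synthetic "image": a ramp plus mild noise
img = np.add.outer(np.arange(128.0), np.arange(128.0)) + rng.normal(0, 1, (128, 128))
w, res = ls_predict(img)
print("weights:", np.round(w, 3))
print("residual std:", res.std().round(3), " raw std:", img.std().round(3))
```

The residual spread is far smaller than the raw pixel spread, which is exactly where the entropy-coding gain comes from; designing such predictors per region, as the paper does, tightens the fit further.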
Highly compressible and all-solid-state supercapacitors based on nanostructured composite sponge.
Niu, Zhiqiang; Zhou, Weiya; Chen, Xiaodong; Chen, Jun; Xie, Sishen
2015-10-21
Based on polyaniline/single-walled carbon nanotube sponge electrodes, highly compressible all-solid-state supercapacitors are prepared in an integrated configuration using a poly(vinyl alcohol) (PVA)/H2SO4 gel as the electrolyte. The unique configuration enables the resultant supercapacitors to be compressed arbitrarily as an integrated unit at up to 60% compressive strain. Furthermore, the performance of the resultant supercapacitors is nearly unchanged even under 60% compressive strain. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Central Decoding for Multiple Description Codes based on Domain Partitioning
Directory of Open Access Journals (Sweden)
M. Spiertz
2006-01-01
Multiple Description Codes (MDC) can be used to trade redundancy against packet-loss resistance for transmitting data over lossy diversity networks. In this work we focus on MD transform coding based on domain partitioning. Compared to Vaishampayan's quantizer-based MDC, domain-based MD coding is a simple approach for generating different descriptions, by using different quantizers for each description. Commonly, only the highest-rate quantizer is used for reconstruction. In this paper we investigate the benefit of using the lower-rate quantizers to enhance the reconstruction quality at the decoder side. The comparison is done on artificial source data and on image data.
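A toy illustration of why the coarser description helps at the central decoder (our sketch, not the paper's scheme): each description is a uniform quantizer index, and the central decoder intersects the two quantization cells instead of trusting the finer index alone.

```python
import numpy as np

# Two descriptions = two uniform quantizers with different step sizes.
# A side decoder reconstructs from one index; the central decoder
# intersects the two quantization cells and takes the overlap midpoint.

def encode(x, step):
    return np.floor(x / step).astype(int)

def side_decode(idx, step):
    return (idx + 0.5) * step

def central_decode(i1, s1, i2, s2):
    lo = np.maximum(i1 * s1, i2 * s2)
    hi = np.minimum((i1 + 1) * s1, (i2 + 1) * s2)
    return (lo + hi) / 2.0

rng = np.random.default_rng(3)
x = rng.random(100_000)
s1, s2 = 0.10, 0.07                 # deliberately non-nested step sizes
i1, i2 = encode(x, s1), encode(x, s2)

for name, xhat in [("side 1 ", side_decode(i1, s1)),
                   ("side 2 ", side_decode(i2, s2)),
                   ("central", central_decode(i1, s1, i2, s2))]:
    print(f"{name}: MSE = {np.mean((x - xhat) ** 2):.2e}")
```

The central MSE comes out well below either side decoder's, showing the gain available from the lower-rate description that the paper quantifies.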
SeqCompress: an algorithm for biological sequence compression.
Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan
2014-10-01
The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. The cost of storing biological sequence data has become a noticeable proportion of the total cost of its generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and may go beyond the limits of storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
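A didactic sketch of the arithmetic-coding stage follows (the fixed symbol probabilities are our stand-in; SeqCompress's actual statistical model is not given in the abstract). Exact rational arithmetic keeps the toy correct for short inputs; production coders use renormalized integer arithmetic instead.

```python
from fractions import Fraction

# Toy order-0 arithmetic coder for DNA symbols.

PROBS = {'A': Fraction(3, 8), 'C': Fraction(1, 8),
         'G': Fraction(1, 8), 'T': Fraction(3, 8)}

def cum_intervals(probs):
    lo, out = Fraction(0), {}
    for sym, p in probs.items():
        out[sym] = (lo, lo + p)
        lo += p
    return out

def encode(seq, probs):
    ivals = cum_intervals(probs)
    lo, hi = Fraction(0), Fraction(1)
    for sym in seq:                       # narrow [lo, hi) per symbol
        s_lo, s_hi = ivals[sym]
        lo, hi = lo + (hi - lo) * s_lo, lo + (hi - lo) * s_hi
    return (lo + hi) / 2                  # any number in the final interval

def decode(x, n, probs):
    ivals = cum_intervals(probs)
    out = []
    for _ in range(n):
        for sym, (s_lo, s_hi) in ivals.items():
            if s_lo <= x < s_hi:
                out.append(sym)
                x = (x - s_lo) / (s_hi - s_lo)  # rescale and continue
                break
    return ''.join(out)

seq = "ACGTTGCATTAA"
code = encode(seq, PROBS)
assert decode(code, len(seq), PROBS) == seq
print(code)  # one rational in [0, 1) representing the whole sequence
```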
A Review On Segmentation Based Image Compression Techniques
Directory of Open Access Journals (Sweden)
S.Thayammal
2013-11-01
The storage and transmission of imagery have become more challenging tasks in the current scenario of multimedia applications. Hence, an efficient compression scheme is highly essential for imagery, reducing the requirements on storage media and transmission bandwidth. Beyond improved performance, compression techniques must also converge quickly if they are to be applied to real-time applications. Various algorithms have been proposed for image compression, but each has its own pros and cons. Here, an extensive analysis of existing methods is performed, and their uses are highlighted with a view to developing novel techniques that address the challenges of image storage and transmission in multimedia applications.
Compression-based geometric pattern discovery in music
DEFF Research Database (Denmark)
Meredith, David
2014-01-01
The purpose of musical analysis is to find the best possible explanations for musical objects, where such objects may range from single chords or phrases to entire musical corpora. Kolmogorov complexity theory suggests that the best possible explanation for an object is represented by the shortest possible description of it. Two compression algorithms, COSIATEC and SIATECCompress, are described that take point-set representations of musical objects as input and generate compressed encodings of these point sets as output. The algorithms were evaluated on a task in which 360 folk songs were classified …
Context based Coding of Binary Shapes by Object Boundary Straightness Analysis
DEFF Research Database (Denmark)
Aghito, Shankar Manuel; Forchhammer, Søren
2004-01-01
A new lossless compression scheme for bilevel images targeted at binary shapes of image and video objects is presented. The scheme is based on a local analysis of the digital straightness of the causal part of the object boundary, which is used in the context definition for arithmetic encoding. Tested on individual images of binary shapes and binary layers of digital maps, the algorithm outperforms PWC, JBIG and MPEG-4 CAE. On the binary shapes the code lengths are reduced by 21%, 25%, and 42%, respectively. On the maps the reductions are 34%, 32%, and 59%, respectively. The algorithm is also …
Kohei Arai; Yuji Yamada
2011-01-01
An attempt is made to improve the invisibility of secret images in circulation images using dyadic-wavelet-based data hiding with run-length-coded secret images whose code locations are determined by random numbers. Through experiments, it is confirmed that the secret images are almost invisible in the circulation images. The robustness of the proposed data hiding method against data compression of the circulation images is also discussed. Data hiding performance in terms of invisibility of secret images …
Time-lens based optical packet pulse compression and retiming
DEFF Research Database (Denmark)
Laguardia Areal, Janaina; Hu, Hao; Palushani, Evarist
2010-01-01
… recovery, resulting in a potentially very efficient solution. The scheme uses a time-lens, implemented through a sinusoidally driven optical phase modulation, combined with a linear dispersion element. As time-lenses are also used for pulse compression, we design the circuit also to perform pulse …
Polarimetric and Indoor Imaging Fusion Based on Compressive Sensing
2013-04-01
Directory of Open Access Journals (Sweden)
Shailesh Kamble
2017-08-01
The major challenge with fractal image/video coding is that it requires long encoding times; how to reduce the encoding time therefore remains the central research question in fractal coding. Block-matching motion estimation algorithms are used to reduce the computations performed in the process of encoding. The objective of the proposed work is to develop an approach for video coding using a modified three-step search (MTSS) block-matching algorithm and weighted finite automata (WFA) coding, with a specific focus on reducing the encoding time. The MTSS block-matching algorithm is used for computing motion vectors between two frames, i.e., the displacement of pixels, and WFA is used for the coding as it behaves like fractal coding (FC). WFA represents an image (frame) or motion-compensated prediction error based on the fractal idea that the image has self-similarity in itself. The self-similarity is sought from the symmetry of an image, so the encoding algorithm divides an image into multiple levels of quad-tree segmentation and creates an automaton from the sub-images. The proposed MTSS block-matching algorithm is based on a combination of rectangular and hexagonal search patterns and is compared with the existing new three-step search (NTSS), three-step search (TSS), and efficient three-step search (ETSS) block-matching estimation algorithms. The performance of the proposed MTSS algorithm is evaluated on the basis of the mean absolute difference (MAD) and the average number of search points required per frame, with the MAD distortion function used as the block distortion measure (BDM). Finally, the developed approaches, namely MTSS with WFA, MTSS with FC, and plain FC (applied to every frame), are compared with each other. The experiments were carried out on standard uncompressed video databases, namely akiyo, bus, mobile, suzie, traffic, football, soccer, ice, etc. Developed …
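For reference, a minimal sketch of the classical three-step search that MTSS modifies is given below (our toy implementation on a synthetic smooth scene; the rectangular/hexagonal MTSS pattern itself is not reproduced). The MAD block distortion measure matches the one named above.

```python
import numpy as np

# Classical three-step search (TSS) block matching with MAD as the BDM.

def mad(a, b):
    return np.mean(np.abs(a - b))

def tss(ref, cur, bx, by, bsize=16, step=4):
    """Motion vector for the block of `cur` at (by, bx), searched in `ref`."""
    block = cur[by:by + bsize, bx:bx + bsize]
    cy, cx = by, bx                          # search centre
    while step >= 1:
        best = (mad(block, ref[cy:cy + bsize, cx:cx + bsize]), cy, cx)
        for dy in (-step, 0, step):          # 8 neighbours plus centre
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                if 0 <= y <= ref.shape[0] - bsize and 0 <= x <= ref.shape[1] - bsize:
                    best = min(best, (mad(block, ref[y:y + bsize, x:x + bsize]), y, x))
        _, cy, cx = best
        step //= 2                           # halve the step each iteration
    return cy - by, cx - bx                  # motion vector (dy, dx)

# smooth synthetic scene with a known global shift of (3, -2)
yy, xx = np.mgrid[0:64, 0:64]
ref = 128 + 60 * np.sin(xx / 6.0) + 60 * np.cos(yy / 7.0)
cur = np.roll(ref, shift=(3, -2), axis=(0, 1))
print(tss(ref, cur, bx=24, by=24))           # expect (-3, 2) for this scene
```

With steps of 4, 2 and 1 the search visits at most 25 candidates instead of the 225 a full search over ±7 pixels would test, which is the computational saving the abstract is after.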
Unfolding code for neutron spectrometry based on neural nets technology
International Nuclear Information System (INIS)
Ortiz R, J. M.; Vega C, H. R.
2012-10-01
The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches, and novel methods based on Artificial Neural Networks have been widely investigated. In this work, a neutron spectrum unfolding code based on neural-net technology is presented. This unfolding code, called Neutron Spectrometry and Dosimetry by means of Artificial Neural Networks, was designed as a graphical interface under the LabVIEW programming environment. The core of the code is an embedded neural network architecture, previously optimized by the "Robust Design of Artificial Neural Networks" methodology. The code is easy to use, friendly, and intuitive for the user. It was designed for a Bonner sphere system based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. The main feature of the code is that, as input data, only seven count-rate measurements with a Bonner sphere spectrometer are required to simultaneously unfold the 60 energy bins of the neutron spectrum and to calculate 15 dosimetric quantities for radiation protection purposes. The code generates a full report in html format with all relevant information. (Author)
Cyclops: single-pixel imaging lidar system based on compressive sensing
Magalhães, F.; Correia, M. V.; Farahi, F.; Pereira do Carmo, J.; Araújo, F. M.
2017-11-01
Mars and the Moon are envisaged as major destinations of future space exploration missions in the upcoming decades. Imaging LIDARs are seen as a key enabling technology in support of autonomous guidance, navigation and control operations, as they can provide the very accurate, wide-range, high-resolution distance measurements required by the exploration missions. Imaging LIDARs can be used at critical stages of these missions, such as descent and selection of safe landing sites, rendezvous and docking manoeuvres, or robotic surface navigation and exploration. Although such devices have long been commercially available and used in diverse metrology and ranging applications, their size, mass and power consumption are still far from suitable or attractive for space exploratory missions. Here, we describe a compact single-pixel imaging LIDAR system that is based on a compressive sensing technique. The application of compressive codes to a DMD array enables compression of the spatial information, while the collection of timing histograms correlated to the pulsed laser source ensures image reconstruction at the ranged distances. Single-pixel cameras have been compared with raster-scanning and array-based counterparts in terms of noise performance, and proved to be superior. Since a single photodetector is used, a better SNR and higher reliability are expected in contrast with systems using large-format photodetector arrays. Furthermore, the failure of one or more micromirror elements in the DMD does not prevent full reconstruction of the images. This brings additional robustness to the proposed 3D imaging LIDAR. The prototype that was implemented has three modes of operation. Range finder: outputs the average distance between the system and the area of the target under illumination. Attitude meter: provides the slope of the target surface based on distance measurements in three areas of the target. 3D imager: produces 3D ranged …
Wang, Jianji; Zheng, Nanning
2013-09-01
Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from high computational complexity in encoding. Although many schemes have been published to speed up encoding, they struggle to satisfy the encoding-time and reconstructed-image-quality requirements simultaneously. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to the APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched in the selected domain set in which these APCCs are closer to the APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
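The APCC equivalence stated above can be exercised directly. The toy below is ours (random blocks, no classification or sorted search): domain blocks are ranked by APCC against a range block, and the winner is then fitted with the least-squares affine map s*D + o that fractal coding would store.

```python
import numpy as np

# APCC-driven block matching followed by the least-squares affine fit
# R ~ s*D + o used in fractal coding.

def apcc(a, b):
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

rng = np.random.default_rng(5)
range_block = rng.random((8, 8))
domain_pool = rng.random((500, 8, 8))

scores = np.array([apcc(range_block, d) for d in domain_pool])
best = int(np.argmax(scores))                  # most affinely similar domain

D = domain_pool[best].ravel()
A = np.column_stack([D, np.ones_like(D)])      # columns: D and the offset
(s, o), *_ = np.linalg.lstsq(A, range_block.ravel(), rcond=None)
err = np.linalg.norm(range_block.ravel() - A @ np.array([s, o]))
print(f"APCC = {scores[best]:.3f}, collage error = {err:.3f}")
```

Because the residual of the best affine fit shrinks as the APCC grows, ranking candidates by APCC (as the scheme's classification and sorting do) finds good matches without evaluating the affine fit for every domain block.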
Machine function based control code algebras
Bergstra, J.A.
Machine functions have been introduced by Earley and Sturgis in [6] in order to provide a mathematical foundation for the use of the T-diagrams proposed by Bratman in [5]. Machine functions describe the operation of a machine at a very abstract level. A theory of hardware and software based on …
Still Image Compression Algorithm Based on Directional Filter Banks
Chunling Yang; Duanwu Cao; Li Ma
2010-01-01
Hybrid wavelet and directional filter banks (HWD) is an effective multi-scale geometrical analysis method. Compared to the wavelet transform, it can better capture the directional information of images. However, the ringing artifact, caused by coefficient quantization in the transform domain, is the biggest drawback of image compression algorithms in the HWD domain. In this paper, by researching the relationship between directional decomposition and the ringing artifact, an improved decomposition …
Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong
2017-07-01
The latest High Efficiency Video Coding (HEVC) standard significantly increases encoding complexity in exchange for improved coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme, chosen through offline statistics, is applied at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, average gains of 0.63 and 0.17 dB in BD-PSNR are observed for 18 sequences when the target complexity is around 40%.
Hamming Code Based Watermarking Scheme for 3D Model Verification
Directory of Open Access Journals (Sweden)
Jen-Tse Wang
2014-01-01
Due to the explosive growth of the Internet and the maturing of 3D hardware techniques, protecting 3D objects becomes a more and more important issue. In this paper, a public Hamming-code-based fragile watermarking technique is proposed for 3D object verification. An adaptive watermark is generated from each cover model by using the Hamming code technique. A simple least significant bit (LSB) substitution technique is employed for watermark embedding. In the extraction stage, the Hamming-code-based watermark can be verified by Hamming code checking, without embedding any verification information. Experimental results show that 100% of the vertices of the cover model can be watermarked, extracted, and verified. They also show that the proposed method can improve security and achieve low distortion of the stego object.
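A sketch of Hamming-based verification in the spirit of the scheme above follows (our toy 1-D "vertex data"; the paper's adaptive watermark generation from the cover model is not reproduced). Four data bits gain three parity bits; on extraction, a nonzero syndrome flags a tampered group.

```python
import numpy as np

# Hamming(7,4) codeword embedded via LSB substitution, verified by syndrome.

G = np.array([[1,0,0,0,0,1,1],     # generator matrix (systematic form)
              [0,1,0,0,1,0,1],
              [0,0,1,0,1,1,0],
              [0,0,0,1,1,1,1]])
H = np.array([[0,1,1,1,1,0,0],     # parity-check matrix: H @ c = 0 (mod 2)
              [1,0,1,1,0,1,0],
              [1,1,0,1,0,0,1]])

def embed(values, data_bits):
    """LSB-substitute a Hamming(7,4) codeword into 7 integer values."""
    code = data_bits @ G % 2
    return (values & ~1) | code

def verify(values):
    syndrome = H @ (values & 1) % 2
    return not syndrome.any()          # zero syndrome = group intact

v = np.array([13, 200, 55, 91, 7, 42, 128])   # 7 quantized coordinates
stego = embed(v, np.array([1, 0, 1, 1]))
print(verify(stego))                           # True: intact
stego[2] ^= 1                                  # tamper with one LSB
print(verify(stego))                           # False: detected
```

As in the paper's scheme, verification needs no extra embedded information beyond the codeword itself: the parity structure is the check.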
COSINE software development based on code generation technology
International Nuclear Information System (INIS)
Ren Hao; Mo Wentao; Liu Shuo; Zhao Guang
2013-01-01
Code generation technology can significantly improve the quality and productivity of software development and reduce software development risk. At present, code generators are usually based on UML model-driven technology, which cannot satisfy the development demands of nuclear power calculation software. The features of scientific computing programs were analyzed, and a FORTRAN code generator (FCG) based on C# was developed in this paper. FCG can automatically generate FORTRAN module variable definition code according to input metadata; it can also generate memory allocation interfaces for dynamic variables as well as data access interfaces. FCG was applied to the development of the core and system integrated engine for design and analysis (COSINE) software. The results show that FCG can greatly improve the development efficiency of nuclear power calculation software and reduce the defect rate of software development. (authors)
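The metadata-to-FORTRAN idea can be sketched briefly. The example below is ours in Python rather than C#, and the metadata format is hypothetical, invented for illustration; it only shows the general shape of generating module variable definitions from declarative input.

```python
# Hypothetical variable metadata: (name, FORTRAN type, rank);
# rank > 0 means a dynamic (allocatable) array.
METADATA = [
    ("power_density", "real(8)", 2),
    ("node_count",    "integer", 0),
    ("temperature",   "real(8)", 1),
]

def gen_module(name, metadata):
    """Emit a FORTRAN module with variable definitions from metadata."""
    lines = [f"module {name}", "  implicit none"]
    for var, ftype, rank in metadata:
        attrs = ""
        if rank > 0:
            colons = ",".join([":"] * rank)
            attrs = f", dimension({colons}), allocatable"
        lines.append(f"  {ftype}{attrs} :: {var}")
    lines.append(f"end module {name}")
    return "\n".join(lines)

print(gen_module("core_data", METADATA))
```

Running the sketch prints a compilable FORTRAN module; a full generator along FCG's lines would additionally emit allocation and data access routines from the same metadata.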
A joint image encryption and watermarking algorithm based on compressive sensing and chaotic map
International Nuclear Information System (INIS)
Xiao Di; Cai Hong-Kun; Zheng Hong-Ying
2015-01-01
In this paper, a compressive sensing (CS) and chaotic map-based joint image encryption and watermarking algorithm is proposed. The transform domain coefficients of the original image are first scrambled by the Arnold map. Then the watermark is embedded into the scrambled data. By compressive sensing, a set of watermarked measurements is obtained as the watermarked cipher image. In this algorithm, watermark embedding and data compression can be performed without knowing the original image; similarly, watermark extraction will not interfere with decryption. Due to the characteristics of CS, this algorithm features compressible cipher image size, flexible watermark capacity, and lossless watermark extraction from the compressed cipher image as well as robustness against packet loss. Simulation results and analyses show that the algorithm achieves good performance in the sense of security, watermark capacity, extraction accuracy, reconstruction, robustness, etc. (paper)
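The Arnold scrambling step can be sketched as follows; this is a generic implementation of the standard cat map (x, y) → (x + y, x + 2y) mod N on a square array, not the authors' code:

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Scramble a square array with the Arnold cat map
    (x, y) -> (x + y, x + 2y) mod N, applied `iterations` times."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square array"
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]  # bijective remap
        out = scrambled
    return out

coeffs = np.arange(16).reshape(4, 4)   # stand-in for transform coefficients
print(arnold_scramble(coeffs, iterations=2))
```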
Research of Block-Based Motion Estimation Methods for Video Compression
Directory of Open Access Journals (Sweden)
Tropchenko Andrey
2016-08-01
Full Text Available This work is a review of the block-based algorithms used for motion estimation in video compression. It examines different types of block-based algorithms, ranging from the simplest, Full Search, to fast adaptive algorithms such as Hierarchical Search. The algorithms evaluated in this paper are widely accepted by the video compression community and have been used in implementing various standards, such as MPEG-4 Visual and H.264. The work also presents a very brief introduction to the entire flow of video compression.
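For reference, the simplest algorithm in the review, Full Search, fits in a few lines of Python; block size and search range below are illustrative, not taken from the paper:

```python
import numpy as np

def full_search(ref, cur, bx, by, bsize=8, srange=4):
    """Exhaustive block matching: find the motion vector minimizing the sum
    of absolute differences (SAD) between the current block and candidate
    blocks in the reference frame."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(np.int32)
    best, best_sad = (0, 0), np.inf
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + bsize > ref.shape[1] or y + bsize > ref.shape[0]:
                continue  # candidate falls outside the reference frame
            cand = ref[y:y + bsize, x:x + bsize].astype(np.int32)
            sad = np.abs(block - cand).sum()
            if sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best, best_sad

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64))
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))   # simulate a global shift
print(full_search(ref, cur, bx=24, by=24))       # recovers (dx, dy) = (3, -2), SAD 0
```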
Novel power saving architecture for FBG based OCDMA code generation
Osadola, Tolulope B.; Idris, Siti K.; Glesk, Ivan
2013-10-01
A novel architecture for generating incoherent, 2-dimensional wavelength hopping-time spreading optical CDMA codes is presented. The architecture is designed to facilitate the reuse of the optical source signal that is left unused after an OCDMA code has been generated using fiber Bragg grating based encoders. Effective utilization of available optical power is therefore achieved by cascading several OCDMA encoders, thereby enabling 3 dB savings in optical power.
A Psychoacoustic-Based Multiple Audio Object Coding Approach via Intra-Object Sparsity
Directory of Open Access Journals (Sweden)
Maoshen Jia
2017-12-01
Full Text Available Rendering spatial sound scenes via audio objects has become popular in recent years, since it can provide more flexibility for different auditory scenarios, such as 3D movies, spatial audio communication and virtual classrooms. To facilitate high-quality bitrate-efficient distribution for spatial audio objects, an encoding scheme based on intra-object sparsity (approximate k-sparsity of the audio object itself) is proposed in this paper. The statistical analysis is presented to validate the notion that the audio object has a stronger sparseness in the Modified Discrete Cosine Transform (MDCT) domain than in the Short Time Fourier Transform (STFT) domain. By exploiting intra-object sparsity in the MDCT domain, multiple simultaneously occurring audio objects are compressed into a mono downmix signal with side information. To ensure a balanced perception quality of audio objects, a psychoacoustic-based time-frequency instants sorting algorithm and an energy equalized Number of Preserved Time-Frequency Bins (NPTF) allocation strategy are proposed, which are employed in the underlying compression framework. The downmix signal can be further encoded via the Scalar Quantized Vector Huffman Coding (SQVH) technique at a desirable bitrate, and the side information is transmitted in a lossless manner. Both objective and subjective evaluations show that the proposed encoding scheme outperforms the Sparsity Analysis (SPA) approach and Spatial Audio Object Coding (SAOC) in cases where eight objects were jointly encoded.
Loss less real-time data compression based on LZO for steady-state Tokamak DAS
International Nuclear Information System (INIS)
Pujara, H.D.; Sharma, Manika
2008-01-01
The evolution of the data acquisition system (DAS) for steady-state operation of Tokamaks has been technology driven. A steady-state Tokamak demands a data acquisition system which is capable enough to acquire data losslessly from diagnostics. The need for lossless continuous acquisition has a significant effect on data storage and takes up a greater portion of any data acquisition system. The steady-state nature of operation also demands online viewing of data, which loads the LAN significantly. So there is a strong demand for something that would control the expansion of both these portions by employing a compression technique in real time. This paper presents a data acquisition system employing a real-time data compression technique based on LZO. It is a data compression library which is suitable for data compression and decompression in real time. The algorithm used favours speed over compression ratio. The system has been rigged up based on the PXI bus, and a dual buffer mode architecture is implemented for lossless acquisition. The acquired buffer is compressed in real time and streamed to the network and hard disk for storage. The observed performance measured on various data types (binary, integer, float), different types of waveforms, as well as compression timing overheads is presented in the paper. Various software modules for real-time acquisition and online viewing of data on network nodes have been developed in LabWindows/CVI based on a client-server architecture.
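The acquire-compress-stream cycle can be sketched as below. The paper uses the LZO library; to keep this sketch dependency-free, zlib at its fastest level stands in for LZO (both favour speed over ratio), and the synthetic waveform stands in for a diagnostic buffer — substitute an LZO binding such as python-lzo in a real DAS:

```python
import time
import zlib
import numpy as np

def acquire_buffer(n=1 << 16):
    """Stand-in for one dual-buffer acquisition cycle: a clean sine waveform."""
    t = np.arange(n)
    signal = (127 * (1 + np.sin(t / 50))).astype(np.uint8)
    return signal.tobytes()

# Speed-over-ratio compression loop; zlib level 1 is used only so the sketch
# runs with the standard library.
for cycle in range(3):
    raw = acquire_buffer()
    t0 = time.perf_counter()
    packed = zlib.compress(raw, level=1)
    dt = time.perf_counter() - t0
    print(f"cycle {cycle}: {len(raw)} -> {len(packed)} bytes "
          f"(ratio {len(raw) / len(packed):.1f}) in {dt * 1e3:.2f} ms")
    # `packed` would now be streamed to the network and archived to disk.
```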
Hejranfar, Kazem; Parseh, Kaveh
2017-09-01
The preconditioned characteristic boundary conditions based on the artificial compressibility (AC) method are implemented at artificial boundaries for the solution of two- and three-dimensional incompressible viscous flows in the generalized curvilinear coordinates. The compatibility equations and the corresponding characteristic variables (or the Riemann invariants) are mathematically derived and then applied as suitable boundary conditions in a high-order accurate incompressible flow solver. The spatial discretization of the resulting system of equations is carried out by the fourth-order compact finite-difference (FD) scheme. In the preconditioning applied here, the value of the AC parameter in the flow field and also at the far-field boundary is automatically calculated based on the local flow conditions to enhance the robustness and performance of the solution algorithm. The code is fully parallelized using the Concurrency Runtime standard and Parallel Patterns Library (PPL) and its performance on a multi-core CPU is analyzed. The incompressible viscous flows around a 2-D circular cylinder, a 2-D NACA0012 airfoil and also a 3-D wavy cylinder are simulated, and the accuracy and performance of the preconditioned characteristic boundary conditions applied at the far-field boundaries are evaluated in comparison to the simplified boundary conditions and the non-preconditioned characteristic boundary conditions. The results indicate that the preconditioned characteristic boundary conditions considerably improve the convergence rate of the solution of incompressible flows compared to the other boundary conditions and that the computational costs are significantly decreased.
Effect of compression on reactivity of plutonium based materials
International Nuclear Information System (INIS)
Marshall, A.C.; Marotta, C.R.
1977-01-01
An analysis was made to determine if criticality could occur due to compression of bare spheres of Pu and PuO2 (solid or powdered) during a high-speed impact accident of an air-transportable plutonium package. It was calculated that an initial k_eff of less than 0.70 would not result in a critical condition (k_eff less than 0.97); thus, a conservative maximum permissible design value of k_eff for a Pu air-transportable package is 0.70.
PERMUTATION-BASED POLYMORPHIC STEGO-WATERMARKS FOR PROGRAM CODES
Directory of Open Access Journals (Sweden)
Denys Samoilenko
2016-06-01
Full Text Available Purpose: One of the most relevant trends in program code protection is code marking. The problem consists in creating digital “watermarks” which allow distinguishing different copies of the same program code. Such marks could be useful for authorship protection, for numbering code copies, for monitoring program propagation, and for information security purposes in client-server communication processes. Methods: We used methods of digital steganography adapted for program codes as text objects. The same-shape symbols method was transformed into a same-semantic element method due to features of codes which make them different from ordinary texts. We use a dynamic principle of mark formation, making the codes polymorphic. Results: We examined the combinatorial capacity of permutations possible in program codes. As a result it was shown that a set of 5-7 polymorphic variables is suitable for most modern network applications. Mark creation and restoration algorithms were proposed and discussed. The main algorithm is based on full and partial permutations of variable names and their declaration order. The algorithm for partial permutation enumeration was optimized for computational complexity. PHP code fragments which realize the algorithms are listed. Discussion: The method proposed in this work allows distinguishing each client-server connection. If a clone of some network resource is found, the method can give information about the included marks and thereby data on the IP, date and time, and authentication information of the client that copied the resource. Usage of polymorphic stego-watermarks should improve information security indexes in network communications.
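The combinatorial core of the method — encoding a copy identifier as a permutation of the variable declaration order — can be sketched as follows. The Python below uses a Lehmer-style full-permutation mapping with hypothetical variable names and omits the partial-permutation optimization described above:

```python
from math import factorial

def index_to_permutation(index, items):
    """Map an integer in [0, n!) to a distinct ordering of items
    (Lehmer-style decoding), so each distributed copy gets its own
    declaration order."""
    items = list(items)
    result = []
    for i in range(len(items), 0, -1):
        pos, index = divmod(index, factorial(i - 1))
        result.append(items.pop(pos))
    return result

def permutation_to_index(perm, items):
    """Inverse mapping: recover the copy identifier from the observed order."""
    items = list(items)
    index = 0
    for p in perm:
        pos = items.index(p)
        items.pop(pos)
        index += pos * factorial(len(items))
    return index

variables = ["user", "token", "session", "buffer", "result"]  # 5! = 120 marks
copy_id = 42
order = index_to_permutation(copy_id, variables)
print(order)                                    # the marked declaration order
print(permutation_to_index(order, variables))   # -> 42, recovered from a clone
```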
International Nuclear Information System (INIS)
Kim, K. D.; Jung, J. J.; Lee, S. W.; Cho, B. O.; Ji, S. K.; Kim, Y. H.; Seong, C. K.
2002-01-01
A coupled code system of RETRAN/MASTER has been developed for best-estimate simulations of interactions between reactor core neutron kinetics and plant thermal-hydraulics by incorporating the 3-D reactor core kinetics analysis code MASTER into the system transient code RETRAN. The soundness of the consolidated code system is confirmed by simulating the MSLB benchmark problem developed by OECD/NEA to verify the performance of coupled kinetics and system transient codes.
Uwaba, Tomoyuki; Ito, Masahiro; Nemoto, Junichi; Ichikawa, Shoichi; Katsuyama, Kozo
2014-09-01
The BAMBOO computer code was verified by results for the out-of-pile bundle compression test with large diameter pin bundle deformation under the bundle-duct interaction (BDI) condition. The pin diameters of the examined test bundles were 8.5 mm and 10.4 mm, which are targeted as preliminary fuel pin diameters for the upgraded core of the prototype fast breeder reactor (FBR) and for demonstration and commercial FBRs studied in the FaCT project. In the bundle compression test, bundle cross-sectional views were obtained from X-ray computer tomography (CT) images and local parameters of bundle deformation such as pin-to-duct and pin-to-pin clearances were measured by CT image analyses. In the verification, calculation results of bundle deformation obtained by the BAMBOO code analyses were compared with the experimental results from the CT image analyses. The comparison showed that the BAMBOO code reasonably predicts deformation of large diameter pin bundles under the BDI condition by assuming that pin bowing and cladding oval distortion are the major deformation mechanisms, the same as in the case of small diameter pin bundles. In addition, the BAMBOO analysis results confirmed that cladding oval distortion effectively suppresses BDI in large diameter pin bundles as well as in small diameter pin bundles.
Design of Packet-Based Block Codes with Shift Operators
Directory of Open Access Journals (Sweden)
Jacek Ilow
2010-01-01
Full Text Available This paper introduces packet-oriented block codes for the recovery of lost packets and the correction of an erroneous single packet. Specifically, a family of systematic codes is proposed, based on a Vandermonde matrix applied to a group of k information packets to construct r redundant packets, where the elements of the Vandermonde matrix are bit-level right arithmetic shift operators. The code design is applicable to packets of any size, provided that the packets within a block of k information packets are of uniform length. In order to decrease the overhead associated with packet padding using shift operators, non-Vandermonde matrices are also proposed for designing packet-oriented block codes. An efficient matrix inversion procedure for the off-line design of the decoding algorithm is presented to recover lost packets. The error correction capability of the design is investigated as well. The decoding algorithm, based on syndrome decoding, to correct a single erroneous packet in a group of n=k+r received packets is presented. The paper is equipped with examples of codes using different parameters. The code designs and their performance are tested using Monte Carlo simulations; the results obtained exhibit good agreement with the corresponding theoretical results.
Cloud solution for histopathological image analysis using region of interest based compression.
Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana
2017-07-01
Recent technological gains have led to the adoption of innovative cloud based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of Cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole slide image contains many multi-resolution images stored in a pyramidal structure, with the highest resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression on this region and lossy compression on the empty regions is proposed in this paper. The resulting compression ratio, with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the Cloud.
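The split into a losslessly coded tissue region and a lossily coded remainder can be sketched as below. This is a simplified stand-in for the paper's method, assuming the Pillow imaging library; the threshold, bounding-box ROI, and codec choices (PNG for lossless, JPEG for lossy) are illustrative:

```python
import io
import numpy as np
from PIL import Image

def roi_hybrid_compress(img, threshold=240):
    """Sketch: lossless PNG for the tissue bounding box, lossy JPEG for the
    full (mostly empty) slide; a real codec would merge the two streams."""
    tissue = np.argwhere(img < threshold)            # non-background pixels
    (y0, x0), (y1, x1) = tissue.min(0), tissue.max(0) + 1
    roi_png = io.BytesIO()
    Image.fromarray(img[y0:y1, x0:x1]).save(roi_png, format="PNG")   # lossless
    bg_jpg = io.BytesIO()
    Image.fromarray(img).save(bg_jpg, format="JPEG", quality=30)     # lossy
    return (y0, x0, y1, x1), roi_png.getvalue(), bg_jpg.getvalue()

# Synthetic "slide": bright background with a small dark tissue region.
slide = np.full((512, 512), 255, dtype=np.uint8)
slide[200:260, 300:380] = np.random.default_rng(1).integers(
    0, 200, (60, 80), dtype=np.uint8)

box, roi, bg = roi_hybrid_compress(slide)
print("ROI box:", box, "| lossless ROI bytes:", len(roi),
      "| lossy slide bytes:", len(bg))
```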
Multiple component codes based generalized LDPC codes for high-speed optical transport.
Djordjevic, Ivan B; Wang, Ting
2014-07-14
A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.
Biometric and Emotion Identification: An ECG Compression Based Method
Directory of Open Access Journals (Sweden)
Susana Brás
2018-04-01
Full Text Available We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles, indirectly representing the flow of blood inside the heart; it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, allowed to effectively compare ECG records and infer the person identity, as well as emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time-series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. Therefore, the method adequately identifies the person and his/her emotion. Also, the proposed method is flexible and may be adapted to different problems, by the alteration of the templates for training the model.
Bit Plane Coding based Steganography Technique for JPEG2000 Images and Videos
Directory of Open Access Journals (Sweden)
Geeta Kasana
2016-02-01
Full Text Available In this paper, a Bit Plane Coding (BPC) based steganography technique for JPEG2000 images and Motion JPEG2000 video is proposed. Embedding in this technique is performed in the lowest significant bit planes of the wavelet coefficients of a cover image. In the JPEG2000 standard, the number of bit planes of wavelet coefficients to be used in encoding depends on the compression rate and is used in the Tier-2 process of JPEG2000. In the proposed technique, the Tier-1 and Tier-2 processes of JPEG2000 and Motion JPEG2000 are executed twice on the encoder side to collect information about the lowest bit planes of all code blocks of a cover image, which is utilized in embedding and transmitted to the decoder. After embedding the secret data, an Optimal Pixel Adjustment Process (OPAP) is applied to the stego images to enhance their visual quality. Experimental results show that the proposed technique provides large embedding capacity and better visual quality of stego images than existing steganography techniques for JPEG2000 compressed images and videos. The extracted secret image is similar to the original secret image.
Context-Aware Image Compression.
Directory of Open Access Journals (Sweden)
Jacky C K Chan
Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.
Curvelet-based compressive sensing for InSAR raw data
Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David
2015-10-01
The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications collected by airborne platforms from BRADAR (the Brazilian SAR system operating in X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. For this framework, a real-time capability is desirable, where the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, a sparse unknown signal can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms. Therefore, the data volume can be significantly reduced. To achieve a sparse representation of the SAR signal, a curvelet transform was applied. The curvelets constitute a directional frame, which allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 x 2048 samples in range and azimuth in X-band with 2 m of resolution was made available. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband. Then, an iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was adjusted to recover the curvelet coefficients and then the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed, and, because interferometry applications require higher reconstruction accuracy, phase parameters like the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects. All results were computed in terms of sparsity analysis to provide efficient compression and quality recovery appropriate for InSAR applications.
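The IST recovery principle can be illustrated with a generic 1-D iterative soft-thresholding sketch; random Gaussian measurements and a canonical sparse signal stand in for the curvelet-subband matrices of the paper, and all sizes are illustrative:

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, iters=200):
    """Iterative soft-thresholding for min ||Ax - y||^2 + lam * ||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L from spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - y))           # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                  # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
x_hat = ista(A, A @ x_true, lam=0.01, iters=500)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```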
A neutron spectrum unfolding code based on iterative procedures
International Nuclear Information System (INIS)
Ortiz R, J. M.; Vega C, H. R.
2012-10-01
In this work, version 3.0 of the neutron spectrum unfolding code called Neutron Spectrometry and Dosimetry from Universidad Autonoma de Zacatecas (NSDUAZ) is presented. This code was designed with a graphical interface under the LabVIEW programming environment and is based on the iterative SPUNIT algorithm, using as input data only the count rates obtained with 7 Bonner spheres based on a 6LiI(Eu) neutron detector. The main features of the code are: it is intuitive and friendly to the user; it has a programming routine which automatically selects the initial guess spectrum by using a set of neutron spectra compiled by the International Atomic Energy Agency. Besides the neutron spectrum, this code calculates the total flux, the mean energy, H(10), h(10), 15 dosimetric quantities for radiation protection purposes and 7 survey meter responses, in four energy grids, based on the International Atomic Energy Agency compilation. This code generates a full report in HTML format with all relevant information. In this work, the neutron spectrum of a 241AmBe neutron source in air, located 150 cm from the detector, is unfolded. (Author)
Kumar, Ashish; Kumar, Manjeet; Komaragiri, Rama
2018-04-19
Bradycardia can be modulated using the cardiac pacemaker, an implantable medical device which sets and balances the patient's cardiac health. The device has been widely used to detect and monitor the patient's heart rate. The data collected hence has the highest authenticity assurance and is convenient for further electric stimulation. In the pacemaker, the ECG detector is one of the most important elements. The device is available in its new digital form, which is more efficient and accurate in performance, with the added advantage of economical power consumption. In this work, a joint algorithm based on biorthogonal wavelet transform and run-length encoding (RLE) is proposed for QRS complex detection of the ECG signal and compression of the detected ECG data. The biorthogonal wavelet transform of the input ECG signal is first calculated using a modified demand-based filter bank architecture which consists of a series combination of three lowpass filters with a highpass filter. The lowpass and highpass filters are realized using a linear phase structure, which reduces the hardware cost of the proposed design by approximately 50%. Then, the location of the R-peak is found by comparing the denoised ECG signal with the threshold value. The proposed R-peak detector achieves the highest sensitivity and positive predictivity of 99.75 and 99.98, respectively, with the MIT-BIH arrhythmia database. Also, the proposed R-peak detector achieves a comparatively low data error rate (DER) of 0.002. The use of RLE for the compression of the detected ECG data achieves a higher compression ratio (CR) of 17.1. To justify the effectiveness of the proposed algorithm, the results have been compared with existing methods, like Huffman coding/simple predictor, Huffman coding/adaptive, and slope predictor/fixed length packaging.
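The RLE stage itself is straightforward; a minimal sketch over hypothetical quantized samples (not the paper's data path):

```python
def rle_encode(samples):
    """Run-length encode a sequence into (value, run) pairs."""
    if not samples:
        return []
    runs = []
    value, count = samples[0], 1
    for s in samples[1:]:
        if s == value:
            count += 1
        else:
            runs.append((value, count))
            value, count = s, 1
    runs.append((value, count))
    return runs

def rle_decode(runs):
    """Exact inverse of rle_encode."""
    return [v for v, n in runs for _ in range(n)]

quantized = [0, 0, 0, 1, 1, 5, 5, 5, 5, 0, 0]   # hypothetical quantized samples
encoded = rle_encode(quantized)
print(encoded)                                   # [(0, 3), (1, 2), (5, 4), (0, 2)]
assert rle_decode(encoded) == quantized
```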
Underwater Acoustic Matched Field Imaging Based on Compressed Sensing
Directory of Open Access Journals (Sweden)
Huichen Yan
2015-10-01
Full Text Available Matched field processing (MFP) is an effective method for underwater target imaging and localization, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined nature of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory by using randomly deployed sensors. In addition, the model’s recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence-optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high-coherence dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method.
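As background for the CCOOMP variant, plain orthogonal matching pursuit can be sketched as follows; the random dictionary and sizes are illustrative, not the CS-MFP propagation dictionary:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick the dictionary atom most
    correlated with the residual, then re-fit the support by least squares."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x_true = np.zeros(256)
x_true[[10, 70, 200]] = [1.5, -2.0, 0.7]
x_hat = omp(D, D @ x_true, k=3)
print(np.nonzero(x_hat)[0])             # expected: the true support {10, 70, 200}
```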
Content Progressive Coding of Limited Bits/pixel Images
DEFF Research Database (Denmark)
Jensen, Ole Riis; Forchhammer, Søren
1999-01-01
A new lossless context-based method for content progressive coding of limited bits/pixel images is proposed. Progressive coding is achieved by separating the image into content layers. Digital maps are compressed up to 3 times better than GIF.
StirMark Benchmark: audio watermarking attacks based on lossy compression
Steinebach, Martin; Lang, Andreas; Dittmann, Jana
2002-04-01
StirMark Benchmark is a well-known evaluation tool for watermarking robustness. Additional attacks are added to it continuously. To enable application-based evaluation, in our paper we address attacks against audio watermarks based on lossy audio compression algorithms to be included in the test environment. We discuss the effect of different lossy compression algorithms like MPEG-2 Audio Layer 3, Ogg or VQF on a selection of audio test data. Our focus is on changes regarding the basic characteristics of the audio data, like spectrum or average power, and on removal of embedded watermarks. Furthermore, we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms; (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psychoacoustic model has to be applied. Our model can be used for audio watermarking evaluation in numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.
International Nuclear Information System (INIS)
Enderle, G.
1979-01-01
The computer code FLUST-2D is able to calculate the two-dimensional flow of a compressible fluid in arbitrarily coupled rectangular areas. In a finite-difference scheme the program computes pressure, density, internal energy and velocity. Starting with a basic set of equations, the difference equations in a rectangular grid are developed. The computational cycle for coupled fluid areas is described. Results of test calculations are compared to analytical solutions, and the influence of time step and mesh size is investigated. The program was used to precalculate the blowdown experiments of the HDR experimental program. The downcomer, plena, internal vessel region, blowdown pipe and a containment area have been modelled two-dimensionally. The major results of the precalculations are presented. This report also contains a description of the code structure and user information. (orig.) [de]
Trellis-coded CPM for satellite-based mobile communications
Abrishamkar, Farrokh; Biglieri, Ezio
1988-01-01
Digital transmission for satellite-based land mobile communications is discussed. To satisfy the power and bandwidth limitations imposed on such systems, a combination of trellis coding and continuous-phase modulated signals are considered. Some schemes based on this idea are presented, and their performance is analyzed by computer simulation. The results obtained show that a scheme based on directional detection and Viterbi decoding appears promising for practical applications.
Block-Based Compressed Sensing for Neutron Radiation Image Using WDFB
Directory of Open Access Journals (Sweden)
Wei Jin
2015-01-01
Full Text Available An ideal compression method for neutron radiation images should have a high compression ratio while keeping more details of the original image. Compressed sensing (CS), which can break through the restrictions of the sampling theorem, is likely to offer an efficient compression scheme for neutron radiation images. Combining the wavelet transform with directional filter banks, a novel nonredundant multiscale geometry analysis transform named Wavelet Directional Filter Banks (WDFB) is constructed and applied to represent neutron radiation images sparsely. Then, the block-based CS technique is introduced and a high-performance CS scheme for neutron radiation images is proposed. By performing a two-step iterative shrinkage algorithm, the L1-norm minimization problem is solved to reconstruct the neutron radiation image from random measurements. The experimental results demonstrate that the scheme not only noticeably improves the quality of the reconstructed image but also retains more details of the original image.
Image coding based on maximum entropy partitioning for identifying ...
Indian Academy of Sciences (India)
A new coding scheme based on maximum entropy partitioning is proposed in our work, particularly to identify the improbable intensities related to different emotions. The improbable intensities, when used as a mask, decode the facial expression correctly, providing an effective platform for future emotion categorization ...
FARO base case post-test analysis by COMETA code
Energy Technology Data Exchange (ETDEWEB)
Annunziato, A.; Addabbo, C. [Joint Research Centre, Ispra (Italy)
1995-09-01
The paper analyzes the COMETA (Core Melt Thermal-Hydraulic Analysis) post-test calculations of FARO Test L-11, the so-called Base Case Test. The FARO Facility, located at JRC Ispra, is used to simulate the consequences of severe accidents in nuclear power plants under a variety of conditions. The COMETA code has a six-equation two-phase flow field and a three-phase corium field: the jet, the droplets and the fused-debris bed. The analysis showed that the code is able to pick up all the major phenomena occurring during the fuel-coolant interaction premixing phase.
Lewis, Michael
1994-01-01
Statistical encoding techniques enable the reduction of the number of bits required to encode a set of symbols, and are derived from their probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange Multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
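The interplay of prediction and Huffman coding can be sketched as follows; a simple first-difference predictor stands in for the 8-point Lagrange-optimized predictor described above, and the elevation values are made up:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code from symbol frequencies. Heap entries carry a
    unique id so ties on weight never compare the code dictionaries."""
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (w1 + w2, uid, merged))
        uid += 1
    return heap[0][2]

# Prediction turns smooth elevations into small corrections whose skewed
# distribution codes more compactly than the raw values.
elevations = [100, 102, 105, 107, 108, 110, 113, 115, 116, 118]
corrections = [elevations[0]] + [b - a for a, b in zip(elevations, elevations[1:])]
code = huffman_code(corrections)
bits = sum(len(code[c]) for c in corrections)
print(code, "->", bits, "bits for", len(corrections), "corrections")
```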
Guo, Wei; Tse, Peter W.
2013-01-01
Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. The bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before transmitting to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing the vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, and in particular to select the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performances under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component obtains a much smaller proportion of data samples to be retained for transmission and further reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect
Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise
2018-05-01
Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.
JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding
Directory of Open Access Journals (Sweden)
Thomas André
2007-03-01
Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. Full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.
Hybrid Video Coding Based on Bidimensional Matching Pursuit
Directory of Open Access Journals (Sweden)
Lorenzo Granai
2004-12-01
Full Text Available Hybrid video coding combines two stages: first, motion estimation and compensation predict each frame from the neighboring frames; then the prediction error is coded, reducing the correlation in the spatial domain. In this work, we focus on the latter stage, presenting a scheme that profits from some of the features introduced by the standard H.264/AVC for motion estimation and replaces the transform in the spatial domain. The prediction error is thus coded using the matching pursuit algorithm, which decomposes the signal over a purpose-designed bidimensional, anisotropic, redundant dictionary. Comparisons are made among the proposed technique, H.264, and a DCT-based coding scheme. Moreover, we introduce fast techniques for atom selection, which exploit the spatial localization of the atoms. An adaptive coding scheme aimed at optimizing the resource allocation is also presented, together with a rate-distortion study for the matching pursuit algorithm. Results show that the proposed scheme outperforms the standard DCT, especially at very low bit rates.
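The matching pursuit core can be sketched as below; a hypothetical dictionary of unit-norm random atoms stands in for the anisotropic 2-D dictionary of the paper (blocks are flattened to 1-D for simplicity):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy MP: repeatedly subtract the best-matching atom; returns the
    (index, coefficient) decomposition plus the remaining residual."""
    residual = signal.astype(float).copy()
    atoms = []
    for _ in range(n_atoms):
        correlations = dictionary @ residual     # inner product with each atom
        j = int(np.argmax(np.abs(correlations)))
        c = correlations[j]
        residual -= c * dictionary[j]
        atoms.append((j, c))
    return atoms, residual

rng = np.random.default_rng(7)
D = rng.standard_normal((512, 64))               # 512 atoms of length 64
D /= np.linalg.norm(D, axis=1, keepdims=True)    # unit-norm atoms
block = 2.0 * D[5] - 0.5 * D[300]                # synthetic prediction-error block
atoms, res = matching_pursuit(block, D, n_atoms=2)
print(atoms, "residual norm:", np.linalg.norm(res))
```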
International Nuclear Information System (INIS)
Ranjbar, Navid; Mehrali, Mehdi; Behnia, Arash; Alengaram, U. Johnson; Jumaat, Mohd Zamin
2014-01-01
Highlights: • Results show POFA is adaptable as a replacement in FA based geopolymer mortar. • Increasing the POFA/FA ratio delays the compressive strength development of the geopolymer. • The density of POFA based geopolymer is lower than that of FA based geopolymer mortar. - Abstract: This paper presents the effects and adaptability of palm oil fuel ash (POFA) as a replacement material in fly ash (FA) based geopolymer mortar from the aspects of microstructure and compressive strength. The geopolymers developed were synthesized with a combination of sodium hydroxide and sodium silicate as activator and POFA and FA as high silica-alumina resources. The development of compressive strength of POFA/FA based geopolymers was investigated using X-ray fluorescence (XRF), X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, and field emission scanning electron microscopy (FESEM). It was observed that the particle shapes and surface area of POFA and FA as well as the chemical composition affect the density and compressive strength of the mortars. The increase in the percentage of POFA increased the silica/alumina (SiO2/Al2O3) ratio, which resulted in a reduction of the early compressive strength of the geopolymer and delayed the geopolymerization process.
Energy Technology Data Exchange (ETDEWEB)
Ravishankar, C., Hughes Network Systems, Germantown, MD
1998-05-08
Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. Hence, from a transmission point of view, digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and that is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the
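As a concrete taste of waveform coding, the classic mu-law companding used in G.711-style digital telephony can be sketched in a few lines; the sine wave below stands in for a speech frame, and the formulas are the standard mu-law pair, not drawn from this report:

```python
import numpy as np

def mulaw_encode(x, mu=255):
    """mu-law compression of samples in [-1, 1] to 8-bit codes: loud samples
    are coded coarsely, quiet ones finely, matching speech statistics."""
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return np.round((y + 1) / 2 * 255).astype(np.uint8)

def mulaw_decode(codes, mu=255):
    """Inverse companding back to the [-1, 1] waveform domain."""
    y = codes.astype(float) / 255 * 2 - 1
    return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu

t = np.linspace(0, 1, 8000)
speech = 0.5 * np.sin(2 * np.pi * 440 * t)        # stand-in for a speech frame
codes = mulaw_encode(speech)                      # 8 bits/sample on the wire
reconstructed = mulaw_decode(codes)
print("max abs error:", np.max(np.abs(speech - reconstructed)))
```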
An Image Compression Scheme in Wireless Multimedia Sensor Networks Based on NMF
Directory of Open Access Journals (Sweden)
Shikang Kong
2017-02-01
Full Text Available With the goal of addressing the issue of image compression in wireless multimedia sensor networks with high recovered quality and low energy consumption, an image compression and transmission scheme based on non-negative matrix factorization (NMF) is proposed in this paper. First, the NMF algorithm theory is studied. Then, a collaborative mechanism of image capture, blocking, compression and transmission is designed. Camera nodes capture images and send them to ordinary nodes, which use an NMF algorithm for image compression. The compressed images are received from the ordinary nodes and transmitted to the station by the cluster head node. The station performs the image restoration. Simulation results show that, compared with the JPEG2000 and singular value decomposition (SVD) compression schemes, the proposed scheme achieves a higher quality of recovered images and lower total node energy consumption. It is beneficial to reduce the burden of energy consumption and prolong the life of the whole network system, which has great significance for practical applications of WMSNs.
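The compression idea — store the two low-rank factors instead of the image — can be sketched with scikit-learn's NMF; the rank, image size, and synthetic data below are illustrative, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import NMF

# Rank-r NMF stores W (h x r) and H (r x w) instead of the h x w image,
# giving a nominal compression ratio of h*w / (r*(h+w)).
rng = np.random.default_rng(0)
image = np.abs(rng.standard_normal((64, 64))).cumsum(axis=1)  # non-negative
rank = 8
model = NMF(n_components=rank, init="nndsvda", max_iter=500)
W = model.fit_transform(image)    # factor kept by the sensor-side encoder
H = model.components_             # factor kept alongside W
recovered = W @ H                 # restoration done at the station
ratio = image.size / (W.size + H.size)
err = np.linalg.norm(image - recovered) / np.linalg.norm(image)
print(f"compression ratio {ratio:.1f}, relative error {err:.3f}")
```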
The Formation and Evolution of Shear Bands in Plane Strain Compressed Nickel-Base Superalloy
Directory of Open Access Journals (Sweden)
Bin Tang
2018-02-01
Full Text Available The formation and evolution of shear bands in Inconel 718 nickel-base superalloy under plane strain compression was investigated in the present work. It is found that the propagation of shear bands under plane strain compression is more intense in comparison with conventional uniaxial compression. The morphology of the shear bands was identified to generally fall into two categories: an “S” shape at severe conditions (low temperatures and high strain rates) and an “X” shape at mild conditions (high temperatures and low strain rates). However, uniform deformation at the mesoscale without shear bands was also obtained by compressing at 1050 °C/0.001 s−1. By using the finite element method (FEM), the formation mechanism of the shear bands in the present study was explored for the special deformation mode of plane strain compression. Furthermore, the effect of processing parameters, i.e., strain rate and temperature, on the morphology and evolution of shear bands was discussed following a phenomenological approach. The plane strain compression attempt in the present work yields important information for processing parameter optimization and failure prediction under plane strain loading conditions of the Inconel 718 superalloy.
GRADSPMHD: A parallel MHD code based on the SPH formalism
Vanaverbeke, S.; Keppens, R.; Poedts, S.
2014-03-01
We present GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism. The implementation of the equations of SPMHD in the “GRAD-h” formalism assembles known results, including the derivation of the discretized MHD equations from a variational principle, the inclusion of time-dependent artificial viscosity, resistivity and conductivity terms, as well as the inclusion of a mixed hyperbolic/parabolic correction scheme for satisfying the ∇·B = 0 constraint on the magnetic field. The code uses a tree-based formalism for neighbor finding and can optionally use the tree code for computing the self-gravity of the plasma. The structure of the code closely follows the framework of our parallel GRADSPH FORTRAN 90 code which we added previously to the CPC program library. We demonstrate the capabilities of GRADSPMHD by running 1, 2, and 3 dimensional standard benchmark tests and we find good agreement with previous work done by other researchers. The code is also applied to the problem of simulating the magnetorotational instability in 2.5D shearing box tests as well as in global simulations of magnetized accretion disks. We find good agreement with available results on this subject in the literature. Finally, we discuss the performance of the code on a parallel supercomputer with distributed memory architecture. Catalogue identifier: AERP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERP_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 620503 No. of bytes in distributed program, including test data, etc.: 19837671 Distribution format: tar.gz Programming language: FORTRAN 90/MPI. Computer: HPC cluster. Operating system: Unix. Has the code been vectorized or parallelized?: Yes, parallelized using MPI. RAM: ˜30 MB for a
Seismic Analysis Code (SAC): Development, porting, and maintenance within a legacy code base
Savage, B.; Snoke, J. A.
2017-12-01
The Seismic Analysis Code (SAC) is the result of the toil of many developers over almost a 40-year history. Initially a Fortran-based code, it has undergone major transitions in underlying bit size from 16 to 32 in the 1980s, and 32 to 64 in 2009, as well as a change in language from Fortran to C in the late 1990s. Maintenance of SAC, the program and its associated libraries, has tracked changes in hardware and operating systems including the advent of Linux in the early 1990s, the emergence and demise of Sun/Solaris, variants of OSX processors (PowerPC and x86), and Windows (Cygwin). Traces of these systems are still visible in the source code and associated comments. A major concern while improving and maintaining a routinely used, legacy code is a fear of introducing bugs or inadvertently removing favorite features of long-time users. Prior to 2004, SAC was maintained and distributed by LLNL (Lawrence Livermore National Lab). In that year, the license was transferred from LLNL to IRIS (Incorporated Research Institutions for Seismology), but the license is not open source. However, there have been thousands of downloads a year of the package, either source code or binaries for specific systems. Starting in 2004, the co-authors have maintained the SAC package for IRIS. In our updates, we fixed bugs, incorporated newly introduced seismic analysis procedures (such as EVALRESP), added new, accessible features (plotting and parsing), and improved the documentation (now in HTML and PDF formats). Moreover, we have added modern software engineering practices to the development of SAC including use of recent source control systems, high-level tests, and scripted, virtualized environments for rapid testing and building. Finally, a "sac-help" listserv (administered by IRIS) was set up for SAC-related issues and is the primary avenue for users seeking advice and reporting bugs. Attempts are always made to respond to issues and bugs in a timely fashion. For the past thirty-plus years
Multiple Description Coding Based on Optimized Redundancy Removal for 3D Depth Map
Directory of Open Access Journals (Sweden)
Sen Han
2016-06-01
Full Text Available Multiple description (MD) coding is a promising alternative for the robust transmission of information over error-prone channels. In 3D image technology, the depth map represents the distance between the camera and objects in the scene. Using the depth map combined with the existing multiview image, it is efficient to synthesize images of any virtual viewpoint position, which can display more realistic 3D scenes. Unlike the conventional 2D texture image, the depth map contains a lot of spatially redundant information, which is not necessary for view synthesis but may result in wasted compressed bits, especially when using MD coding for robust transmission. In this paper, we focus on the redundancy removal of MD coding based on the DCT (discrete cosine transform) domain. In view of the characteristics of DCT coefficients, at the encoder, a Lagrange optimization approach is designed to determine the amount of high-frequency coefficients in the DCT domain to be removed. Considering the low computing complexity, entropy is adopted to estimate the bit rate in the optimization. Furthermore, at the decoder, adaptive zero-padding is applied to reconstruct the depth map when some information is lost. The experimental results show that, compared to the corresponding scheme, the proposed method demonstrates better central and side rate-distortion performance.
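The kind of high-frequency removal decided per block by the Lagrangian optimization above can be sketched with SciPy's DCT; here the amount kept is fixed for illustration rather than optimized, and the smooth depth patch is synthetic:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """Separable 2-D DCT-II with orthonormal scaling."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeff):
    """Inverse of dct2."""
    return idct(idct(coeff, axis=0, norm="ortho"), axis=1, norm="ortho")

def drop_high_freq(block, keep=4):
    """Zero all but the top-left keep x keep DCT coefficients, then invert."""
    c = dct2(block)
    mask = np.zeros_like(c)
    mask[:keep, :keep] = 1
    return idct2(c * mask)

depth_block = np.tile(np.linspace(50, 60, 8), (8, 1))   # smooth 8x8 depth patch
approx = drop_high_freq(depth_block, keep=2)
print("max abs error:", np.max(np.abs(depth_block - approx)))
```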
On Applicability of Network Coding Technique for 6LoWPAN-based Sensor Networks.
Amanowicz, Marek; Krygier, Jaroslaw
2018-05-26
In this paper, the applicability of the network coding technique in 6LoWPAN-based multihop sensor networks is examined. 6LoWPAN is one of the standards proposed for the Internet of Things architecture; we can therefore expect significant growth of traffic in such networks, which can lead to overload and a decrease in sensor network lifetime. The authors propose an inter-session network coding mechanism that can be implemented in resource-limited sensor motes. The solution reduces the overall traffic in the network and, in consequence, decreases energy consumption. The procedures take into account the deep header compression of native 6LoWPAN packets and the hop-by-hop changes of the header structure. The applied simplifications reduce the signaling traffic that typically occurs in network coding deployments, keeping the solution useful for wireless sensor networks with limited resources. The authors validate the proposed procedures in terms of end-to-end packet delay, packet loss ratio, traffic in the air, total energy consumption, and network lifetime. The solution has been tested in a real wireless sensor network. The results confirm the efficiency of the proposed technique, mostly in delay-tolerant sensor networks.
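The essence of inter-session network coding is that a relay combines packets from different flows (typically by XOR) so one transmission serves several next hops, each of which decodes using a packet it already holds. A minimal sketch of that idea, independent of the 6LoWPAN header handling the paper describes:

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        """XOR two equal-length packets byte by byte."""
        return bytes(x ^ y for x, y in zip(a, b))

    # Relay holds packet A destined for node B and packet B destined for node A.
    pkt_a = b"reading-17.3C..."
    pkt_b = b"reading-18.1C..."

    coded = xor_bytes(pkt_a, pkt_b)   # one broadcast instead of two unicasts

    # Node A already knows pkt_a, so it recovers pkt_b (and vice versa).
    assert xor_bytes(coded, pkt_a) == pkt_b
    assert xor_bytes(coded, pkt_b) == pkt_a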
Compression of surface myoelectric signals using MP3 encoding.
Chan, Adrian D C
2011-01-01
The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoding scheme traditionally used to compress audio signals. The ubiquity of MP3 compression (e.g., in portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).
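The percent residual difference (PRD) used to compare codecs here is a standard distortion measure for biomedical signal compression. A minimal sketch of PRD and compression ratio, assuming the usual definitions (the paper may use a zero-mean variant):

    import numpy as np

    def prd(original: np.ndarray, reconstructed: np.ndarray) -> float:
        """Percent residual difference between a signal and its reconstruction."""
        return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                               / np.sum(original ** 2))

    def compression_ratio(original_bits: int, compressed_bits: int) -> float:
        return original_bits / compressed_bits

    x = np.sin(np.linspace(0, 20 * np.pi, 2000))        # toy "myoelectric" signal
    x_hat = x + np.random.normal(0, 0.01, x.shape)      # stand-in for codec output
    print(f"PRD = {prd(x, x_hat):.2f}%")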
The Physics of Compressive Sensing and the Gradient-Based Recovery Algorithms
Dai, Qi; Sha, Wei
2009-01-01
The physics of compressive sensing (CS) and the gradient-based recovery algorithms are presented. First, the different forms for CS are summarized. Second, the physical meanings of coherence and measurement are given. Third, the gradient-based recovery algorithms and their geometric explanations are provided. Finally, we conclude the report and give some suggestions for future work.
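A representative gradient-based recovery algorithm of the kind surveyed here is iterative shrinkage-thresholding (ISTA), which alternates a gradient step on the data-fidelity term with soft-thresholding to promote sparsity. A minimal sketch; the step size and threshold are illustrative assumptions, not values from the report:

    import numpy as np

    def ista(A, y, lam=0.05, n_iter=200):
        """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by iterative shrinkage-thresholding."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)           # gradient of the quadratic term
            z = x - grad / L                   # gradient descent step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(60, 200)) / np.sqrt(60)   # random measurement matrix
    x_true = np.zeros(200); x_true[[5, 50, 120]] = [1.0, -0.8, 0.5]
    x_rec = ista(A, A @ x_true)
    print(np.round(x_rec[[5, 50, 120]], 2))        # recovered sparse entries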
Chloride transport under compressive load in bacteria-based self-healing concrete
Binti Md Yunus, B.; Schlangen, E.; Jonkers, H.M.
2015-01-01
An experiment was carried out in this study to investigate the effect of compressive load on chloride penetration in self-healing concrete containing a bacteria-based healing agent. A bacteria-based healing agent with particle sizes in the fraction of 2 mm – 4 mm was used in this contribution. ESEM
The PHREEQE Geochemical equilibrium code data base and calculations
International Nuclear Information System (INIS)
Andersoon, K.
1987-01-01
Compilation of a thermodynamic data base of actinides and fission products for use with PHREEQE has begun, and a preliminary set of actinide data has been tested with a version of the PHREEQE code run on an IBM XT computer. The work so far has shown that the PHREEQE code mostly gives satisfactory results for the speciation of actinides in natural water environments. For U and Np under oxidizing conditions, however, the code has difficulty converging with pH and Eh conserved when a solubility limit is applied. For further calculations of actinide and fission product speciation and solubility in a waste repository and in the surrounding geosphere, more data are needed. It is necessary to evaluate the influence of the large uncertainties in some data. Quality assurance and a check on the consistency of the data base are also needed. Further work with data bases should include: an extension to fission products; an extension to engineering materials; an extension to ligands other than hydroxide and carbonate; inclusion of more mineral phases; inclusion of enthalpy data; a check of primary references in order to decide whether values from different compilations are taken from the same primary reference; and contacts and discussions with other groups working with actinide data bases, e.g. at the OECD/NEA and the IAEA. (author)
OTDM-WDM Conversion Based on Time-Domain Optical Fourier Transformation with Spectral Compression
DEFF Research Database (Denmark)
Mulvad, Hans Christian Hansen; Palushani, Evarist; Galili, Michael
2011-01-01
We propose a scheme enabling direct serial-to-parallel conversion of OTDM data tributaries onto a WDM grid, based on optical Fourier transformation with spectral compression. Demonstrations on 320 Gbit/s and 640 Gbit/s OTDM data are shown.
Beam steering performance of compressed Luneburg lens based on transformation optics
Gao, Ju; Wang, Cong; Zhang, Kuang; Hao, Yang; Wu, Qun
2018-06-01
In this paper, two types of compressed Luneburg lenses based on transformation optics are investigated and simulated using two different sources, namely, waveguides and dipoles, which represent plane and spherical wave sources, respectively. We determined that the largest beam steering angle and the related feed point are intrinsic characteristics of a certain type of compressed Luneburg lens, and that the optimized distance between the feed and lens, gain enhancement, and side-lobe suppression are related to the type of source. Based on our results, we anticipate that these lenses will prove useful in various future antenna applications.
Design of an Ultra-wideband Pseudo Random Coded MIMO Radar Based on Radio Frequency Switches
Directory of Open Access Journals (Sweden)
Su Hai
2017-02-01
Full Text Available A Multiple-Input Multiple-Output (MIMO) ultra-wideband radar can detect the range and azimuth information of targets in real time. It is widely used for geological surveys, life rescue, through-wall tracking, and other military or civil applications. This paper presents the design of an ultra-wideband pseudo-random coded MIMO radar based on Radio Frequency (RF) switches and implements a MIMO radar system. RF switches are employed to reduce the cost and complexity of the system. As the power handling of the switches is limited, the peak power of the transmitted signal is 18 dBm. The ultra-wideband radar echo is obtained by hybrid sampling, and pulse compression is computed by Digital Signal Processors (DSPs) embedded in a Field-Programmable Gate Array (FPGA) to simplify signal processing. The experiment illustrates that the radar system can detect the range and azimuth information of targets in real time.
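Pulse compression of a pseudo-random coded waveform amounts to correlating the received echo with the known transmit code. A minimal baseband sketch; the code length, target delay, and noise level are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(1)
    code = rng.choice([-1.0, 1.0], size=127)      # pseudo-random binary transmit code

    delay = 40                                     # unknown target delay in samples
    echo = np.zeros(300)
    echo[delay:delay + len(code)] = 0.5 * code     # attenuated target return
    echo += rng.normal(0, 0.2, echo.shape)         # receiver noise

    # Matched filter: cross-correlate the echo with the transmit code.
    compressed = np.correlate(echo, code, mode="valid")
    print(int(np.argmax(compressed)))              # peak at the target delay (~40)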
Scintillator Based Coded-Aperture Imaging for Neutron Detection
International Nuclear Information System (INIS)
Hayes, Sean-C.; Gamage, Kelum-A-A.
2013-06-01
In this paper we assess the variation of neutron images using a series of Monte Carlo simulations. We study neutron images of the same neutron source at different source locations, using a scintillator-based coded-aperture system. The Monte Carlo simulations make use of the EJ-426 neutron scintillator detector. This type of detector has a low sensitivity to gamma rays and is therefore of particular use in a system with a source that emits a mixed radiation field. From the use of different source locations, several neutron images have been produced and compared both qualitatively and quantitatively for each case. This allows conclusions to be drawn on how well suited the scintillator-based coded-aperture neutron imaging system is to detecting various neutron source locations. This type of neutron imaging system can be used to identify and locate nuclear materials precisely. (authors)
Finger Vein Recognition Based on Local Directional Code
Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang
2012-01-01
Finger vein patterns are considered one of the most promising biometric authentication methods for their security and convenience. Most of the currently available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade recognition accuracy, binary pattern based methods have been proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction-based local descriptor called Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary (base-8) decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP. PMID:23202194
Automatic Structure-Based Code Generation from Coloured Petri Nets
DEFF Research Database (Denmark)
Kristensen, Lars Michael; Westergaard, Michael
2010-01-01
Automatic code generation based on Coloured Petri Net (CPN) models is challenging because CPNs allow for the construction of abstract models that intermix control flow and data processing, making translation into conventional programming constructs difficult. We introduce Process-Partitioned CPNs (PP-CPNs), a subclass of CPNs equipped with an explicit separation of process control flow, message passing, and access to shared and local data. We show how PP-CPNs cater for a four-phase, structure-based automatic code generation process directed by the control flow of processes. The viability of our approach is demonstrated by applying it to automatically generate an Erlang implementation of the Dynamic MANET On-demand (DYMO) routing protocol specified by the Internet Engineering Task Force (IETF).
A Secure Network Coding Based on Broadcast Encryption in SDN
Directory of Open Access Journals (Sweden)
Yue Chen
2016-01-01
Full Text Available By allowing intermediate nodes to encode received packets before sending them out, network coding improves the capacity and robustness of multicast applications. But it is vulnerable to pollution attacks. Some signature schemes have been proposed to thwart such attacks, but most of them need to be homomorphic, so that the keys cannot be generated and managed easily. In this paper, we propose a novel fast and secure switch network coding multicast (SSNC) scheme for software defined networks (SDN). In our scheme, the complicated secure multicast management is separated from the fast data transmission based on the SDN. Multiple multicasts are aggregated into one multicast group according to the requirements of services and the network status. The controller then routes the aggregated multicast group with network coding; only trusted switches are allowed to join the network coding, enforced by broadcast encryption. The proposed scheme can use traditional cryptography without homomorphy, which greatly reduces the complexity of the computation and improves the efficiency of transmission.
Trellis and turbo coding iterative and graph-based error control coding
Schlegel, Christian B
2015-01-01
This new edition has been extensively revised to reflect the progress in error control coding over the past few years. Over 60% of the material has been completely reworked, and 30% of the material is original. Coverage includes convolutional, turbo, and low-density parity-check (LDPC) codes and polar codes in a unified framework; advanced research-related developments such as spatial coupling; and a focus on algorithmic and implementation aspects of error control coding.
Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.
Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua
2018-03-01
To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation accelerated by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves on traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the self-consistent formulation is better conditioned, suggesting SPIRiT as a better candidate for k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and is then applied to 2D-navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
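Coil compression of the kind used here is commonly implemented as a principal-component projection of the multi-coil data. A minimal SVD-based sketch of that generic technique (array sizes are illustrative; this is not the paper's specific shot-coil variant):

    import numpy as np

    def compress_coils(kspace, n_virtual):
        """Project multi-coil k-space data onto its dominant singular vectors.

        kspace: complex array of shape (n_samples, n_coils)
        returns: compressed data of shape (n_samples, n_virtual)
        """
        _, _, vh = np.linalg.svd(kspace, full_matrices=False)
        return kspace @ vh.conj().T[:, :n_virtual]

    rng = np.random.default_rng(0)
    sources = rng.normal(size=(4096, 4)) + 1j * rng.normal(size=(4096, 4))
    mixing = rng.normal(size=(4, 32))                 # 32 physical coils, low rank
    data = sources @ mixing
    compressed = compress_coils(data, n_virtual=6)
    print(compressed.shape)                           # (4096, 6)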
Grenier, E; Gehin, C; McAdams, E; Lun, B; Gobin, J-P; Uhl, J-F
2016-03-01
To study the microcirculatory effects of elastic compression stockings. In phlebology, laser Doppler techniques (flux or imaging) are widely used to investigate cutaneous microcirculation. It is a method used to explore microcirculation by detecting blood flow in skin capillaries. Flux and imaging instruments evaluate, non-invasively and in real time, the perfusion of cutaneous microvessels. Such tools, well known in the vascular community, are not really suitable for our protocol, which requires evaluation through the fabric of elastic compression stockings. We therefore used another instrument, called the Hematron (developed by INSA-Lyon, Biomedical Sensor Group, Nanotechnologies Institute of Lyon), to investigate the relationship between skin microcirculatory activity and the external compression provided by elastic compression stockings. The Hematron measurement principle is based on monitoring the skin's thermal conductivity. This clinical study examined a group of 30 female subjects, aged 42 ± 2 years, who suffer from minor symptoms of chronic venous disease, classified as C0s and C1s (CEAP). The resulting figures show, subsequent to the pressure exerted by elastic compression stockings, an improvement of microcirculatory activity in 83% of the subjects, with a decreased effect detected in the remaining 17%. Among the total population, the global average increase of the skin's microcirculatory activity is evaluated at 7.63% ± 1.80%. The pressure exerted by elastic compression stockings thus has a direct influence on the skin's microcirculation within this female sample group with minor chronic venous insufficiency signs. Further investigations are required for a deeper understanding of the effects of elastic compression stockings on microcirculatory activity in venous diseases at other stages of pathology. © The Author(s) 2014.
Directory of Open Access Journals (Sweden)
Li Liechen
2016-02-01
Full Text Available A conformal sparse array based on a combined Barker code is designed for an airship platform. The performance of the designed array, such as its signal-to-noise ratio, is analyzed. Using the hovering characteristics of the airship, an interferometric operation can be applied to the real-aperture imaging results of two pulses, which eliminates the random backscatter phase and makes the image sparse in the transform domain. By building the relationship between the echo and the transform coefficients, Compressed Sensing (CS) theory can be introduced to solve the formulation and achieve imaging. The image quality of the proposed method can reach that of full-array imaging. The simulation results show the effectiveness of the proposed method.
Probabilistic Decision Based Block Partitioning for Future Video Coding
Wang, Zhao
2017-11-29
In the latest Joint Video Exploration Team development, the quadtree plus binary tree (QTBT) block partitioning structure has been proposed for future video coding. Compared to the traditional quadtree structure of the High Efficiency Video Coding (HEVC) standard, QTBT provides more flexible patterns for splitting blocks, which results in dramatically more combinations of block partitions and high computational complexity. In view of this, a confidence interval based early termination (CIET) scheme is proposed for QTBT to identify partition modes that are unnecessary in the sense of rate-distortion (RD) optimization. In particular, an RD model is established to predict the RD cost of each partition pattern without the full encoding process. Subsequently, the mode decision problem is cast into a probabilistic framework to select the final partition based on a confidence interval decision strategy. Experimental results show that the proposed CIET algorithm can speed up QTBT block partitioning, reducing encoding time by 54.7% with only a 1.12% increase in bit rate. Moreover, the proposed scheme performs consistently well for high resolution sequences, for which video coding efficiency is crucial in real applications.
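The decision rule underneath such a scheme can be pictured as follows: predict each candidate partition's RD cost J = D + λR from a model, attach a confidence interval to the prediction, and skip full evaluation of any candidate whose interval lies entirely above the best cost found so far. A minimal sketch under those assumptions (the cost model and interval width are placeholders, not the paper's fitted model):

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        predicted_cost: float   # model-predicted RD cost J = D + lambda * R
        stderr: float           # uncertainty of the prediction

    def modes_to_evaluate(candidates, best_cost, z=1.96):
        """Keep only candidates whose cost lower bound beats the current best."""
        return [c for c in candidates
                if c.predicted_cost - z * c.stderr < best_cost]

    cands = [Candidate("no-split", 100.0, 5.0),
             Candidate("quad", 120.0, 8.0),
             Candidate("binary-h", 160.0, 10.0)]
    print([c.name for c in modes_to_evaluate(cands, best_cost=130.0)])
    # binary-h is terminated early: even its optimistic bound exceeds 130.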
Optical image encryption based on real-valued coding and subtracting with the help of QR code
Deng, Xiaopeng
2015-08-01
A novel optical image encryption scheme based on real-valued coding and subtraction is proposed with the help of a quick response (QR) code. In the encryption process, the original image to be encoded is first transformed into the corresponding QR code, and then the QR code is encoded into two phase-only masks (POMs) using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and results collected with an actual smartphone show that the method is feasible and has strong tolerance to noise, phase difference, and the ratio between the intensities of the two decryption light beams.
Parallel processing approach to transform-based image coding
Normile, James O.; Wright, Dan; Chu, Ken; Yeh, Chia L.
1991-06-01
This paper describes a flexible parallel processing architecture designed for use in real-time video processing. The system consists of floating-point DSP processors connected to each other via fast serial links; each processor has access to a globally shared memory. A multiple-bus architecture in combination with a dual-ported memory allows communication with a host control processor. The system has been applied to the prototyping of video compression and decompression algorithms. The decomposition of transform-based algorithms for decompression into a form suitable for parallel processing is described. A technique for automatic load balancing among the processors is developed and discussed, and results are presented with image statistics and data rates. Finally, techniques for accelerating the system throughput are analyzed and results from the application of one such modification are described.
Compact Hilbert Curve Index Algorithm Based on Gray Code
Directory of Open Access Journals (Sweden)
CAO Xuefeng
2016-12-01
Full Text Available The Hilbert curve has the best clustering among the various kinds of space-filling curves and has been used as an important tool in discrete global grid spatial index design. But there is a lot of redundancy in the standard Hilbert curve index when the data set has large differences between dimensions. In this paper, the construction of the Hilbert curve is analyzed based on Gray code, and a compact Hilbert curve index algorithm is then put forward, in which the redundancy problem is avoided while the clustering of the Hilbert curve is preserved. Finally, experimental results show that the compact Hilbert curve index outperforms the standard Hilbert index: their computational complexity is nearly equivalent, but tests on real data sets show that coding time and storage space decrease by 40%, and the speedup ratio of the sorting speed is nearly 4.3.
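The Gray code underpinning such constructions has a one-line encoder and a simple decoder. A minimal sketch of the standard reflected binary Gray code (the compact Hilbert index itself involves more bookkeeping than shown here):

    def gray_encode(i: int) -> int:
        """Reflected binary Gray code: successive integers differ in one bit."""
        return i ^ (i >> 1)

    def gray_decode(g: int) -> int:
        """Invert the Gray code by progressively folding the bits back in."""
        i = g
        while g > 0:
            g >>= 1
            i ^= g
        return i

    codes = [gray_encode(i) for i in range(8)]
    print([format(c, "03b") for c in codes])   # ['000', '001', '011', '010', ...]
    assert all(gray_decode(gray_encode(i)) == i for i in range(1024))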
An Infrastructure for UML-Based Code Generation Tools
Wehrmeister, Marco A.; Freitas, Edison P.; Pereira, Carlos E.
The use of Model-Driven Engineering (MDE) techniques in the domain of distributed embedded real-time systems is gaining importance as a means to cope with the increasing design complexity of such systems. This paper discusses an infrastructure created to build GenERTiCA, a flexible tool that supports an MDE approach and uses aspect-oriented concepts to handle non-functional requirements from the embedded and real-time systems domain. GenERTiCA generates source code from UML models and also performs weaving of aspects that have been specified within the UML model. Additionally, this paper discusses the Distributed Embedded Real-Time Compact Specification (DERCS), a PIM created to support UML-based code generation tools. Some heuristics to transform UML models into DERCS, which have been implemented in GenERTiCA, are also discussed.
Directory of Open Access Journals (Sweden)
Rolando, A.
2006-09-01
Full Text Available In this paper the compressive strength of clay brickwork masonry bound with cement mortar is analyzed. The target is to obtain the characteristic compressive strength of unreinforced brickwork masonry. This research tests the validity of the analytical equations in European codes by comparing the experimentally measured strength with that obtained analytically from the strength of its components (clay brick and cement mortar).
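For reference, the European masonry code (Eurocode 6) expresses the characteristic compressive strength of masonry as a power law of the unit and mortar strengths, f_k = K · f_b^α · f_m^β, with α ≈ 0.7 and β ≈ 0.3 for general-purpose mortar and K a constant tabulated per unit/mortar combination. A minimal sketch assuming those exponents (the K value below is only a placeholder, not a code-prescribed value):

    def characteristic_strength(f_b: float, f_m: float, K: float = 0.55,
                                alpha: float = 0.7, beta: float = 0.3) -> float:
        """Eurocode-6-style estimate f_k = K * f_b**alpha * f_m**beta (N/mm^2)."""
        return K * f_b ** alpha * f_m ** beta

    # Example: clay units of 20 N/mm^2 with M10 general-purpose mortar (assumed K).
    print(round(characteristic_strength(f_b=20.0, f_m=10.0), 2))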
Directory of Open Access Journals (Sweden)
A. Schroeder
2012-09-01
Full Text Available This paper proposes a compression of far-field matrices in the fast multipole method and its multilevel extension for electromagnetic problems. The compression is based on a spherical-harmonic representation of radiation patterns in conjunction with a radiating-mode expression of the surface current. The method is applied to study near-field effects and the far field of an antenna placed on a ship surface. Furthermore, the electromagnetic scattering of an electrically large plate is investigated. It is demonstrated that the proposed technique leads to significant memory savings, making multipole algorithms even more efficient without compromising accuracy.
On the implicit density based OpenFOAM solver for turbulent compressible flows
Fürst, Jiří
The contribution deals with the development of a coupled implicit density-based solver for compressible flows in the framework of the open source package OpenFOAM. Although the standard distribution of OpenFOAM contains several ready-made segregated solvers for compressible flows, the performance of those solvers is rather weak in the case of transonic flows. Therefore we extend the work of Shen [15] and develop an implicit semi-coupled solver. The main flow field variables are updated using the lower-upper symmetric Gauss-Seidel (LU-SGS) method, whereas the turbulence model variables are updated using the implicit Euler method.
A Proposal for Kelly Criterion-Based Lossy Network Compression
2016-03-01
detection applications. Most of these applications only send alerts to the central analysis servers. These alerts do not provide the forensic capability...based intrusion detection systems. These systems tend to examine the individual system's audit logs looking for intrusive activity. The notable
A method of vehicle license plate recognition based on PCANet and compressive sensing
Ye, Xianyi; Min, Feng
2018-03-01
Manual feature extraction in traditional methods for vehicle license plates lacks robustness to diverse changes. Moreover, the high dimension of the features extracted with a Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from images of characters. Then, a sparse measurement matrix, which is a very sparse matrix consistent with the Restricted Isometry Property (RIP) condition of compressed sensing, is used to reduce the dimensions of the extracted features. Finally, a Support Vector Machine (SVM) is used to train on and recognize the dimension-reduced features. Experimental results demonstrate that the proposed method performs better than a Convolutional Neural Network (CNN) in recognition accuracy and time. Compared with omitting compressive sensing, the proposed method has a lower feature dimension, which increases efficiency.
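Dimension reduction with a sparse random matrix can be sketched in a few lines. The sparsity pattern below follows the common Achlioptas-style construction (scaled entries in {+1, 0, -1} with a small probability of being nonzero), which is an assumption, since the abstract does not spell out its exact matrix:

    import numpy as np

    def sparse_measurement_matrix(m, n, s=3, seed=0):
        """Entries are sqrt(s)*{+1, 0, -1} w.p. {1/2s, 1-1/s, 1/2s}, scaled by 1/sqrt(m)."""
        rng = np.random.default_rng(seed)
        vals = rng.choice([1.0, 0.0, -1.0], size=(m, n),
                          p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
        return np.sqrt(s / m) * vals

    features = np.random.rand(100, 2048)      # 100 samples of 2048-D PCANet features
    phi = sparse_measurement_matrix(256, 2048)
    reduced = features @ phi.T                # project down to 256 dimensions
    print(reduced.shape)                      # (100, 256)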
Arapov, K.; Bex, G.; Hendriks, R.; Rubingh, E.; Abbel, R.; de With, G.; Friedrich, H.
2016-01-01
This paper describes a combination of photonic annealing and compression rolling to improve the conductive properties of printed binder-based graphene inks. High-density light pulses result in temperatures of up to 500 °C, which, along with a decrease in resistivity, lead to layer expansion. The structural
Edge-based compression of cartoon-like images with homogeneous diffusion
DEFF Research Database (Denmark)
Mainberger, Markus; Bruhn, Andrés; Weickert, Joachim
2011-01-01
Edges provide semantically important image features. In this paper a lossy compression method for cartoon-like images is presented, which is based on edge information. Edges together with some adjacent grey/colour values are extracted and encoded using a classical edge detector and binary compression...
Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique
Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi
2013-09-01
Based on direct exposure measurements from flash radiographic images, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. The solver has three features: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capabilities. (2) The forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator. (3) Fourier and Daubechies wavelet transforms are adopted to convert an underdetermined system into a well-posed system in the algorithm. Simulations are performed, and the numerical results for the pseudo-sine absorption problem, the two-cube problem, and the two-cylinder problem obtained with the compressive sensing-based solver agree well with the reference values.
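Orthogonal matching pursuit, the sparse recovery routine named above, greedily selects the dictionary column most correlated with the residual and re-fits by least squares at each step. A minimal sketch; the problem sizes are illustrative assumptions:

    import numpy as np

    def omp(A, y, k):
        """Recover a k-sparse x from y = Ax by orthogonal matching pursuit."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))    # most correlated atom
            support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef           # re-fit, update residual
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(2)
    A = rng.normal(size=(50, 128)) / np.sqrt(50)
    x_true = np.zeros(128); x_true[[7, 42, 99]] = [1.2, -0.7, 0.9]
    print(np.round(omp(A, A @ x_true, k=3)[[7, 42, 99]], 2))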
International Nuclear Information System (INIS)
Yu Yunhan; Xia Yan; Liu Yaqiang; Wang Shi; Ma Tianyu; Chen Jing; Hong Baoyu
2013-01-01
To achieve maximum compression of the system matrix in positron emission tomography (PET) image reconstruction, we proposed a polygonal image pixel division strategy in accordance with the rotationally symmetric PET geometry. A geometrical definition and an indexing rule for polygonal pixels were established. Image conversion from the polygonal pixel structure to the conventional rectangular pixel structure was implemented using a conversion matrix. A set of test images was analytically defined in the polygonal pixel structure, converted to conventional rectangular pixel based images, and correctly displayed, which verified the correctness of the image definition, the conversion description, and the conversion of the polygonal pixel structure. A compressed system matrix for PET image reconstruction was generated by the tap model and tested by forward-projecting three different distributions of radioactive sources to the sinogram domain and comparing them with theoretical predictions. On a practical small animal PET scanner, a compression ratio of 12.6:1 of the system matrix size was achieved with the polygonal pixel structure, compared with the conventional rectangular pixel based tap-mode one. OS-EM iterative image reconstruction algorithms with the polygonal and conventional Cartesian pixel grids were developed. A hot rod phantom was scanned and reconstructed based on these two grids with reasonable time cost. The image resolution of the reconstructed images was 1.35 mm in both cases. We conclude that it is feasible to reconstruct and display images in a polygonal image pixel structure based on a compressed system matrix in PET image reconstruction. (authors)
Four Year-Olds Use Norm-Based Coding for Face Identity
Jeffery, Linda; Read, Ainsley; Rhodes, Gillian
2013-01-01
Norm-based coding, in which faces are coded as deviations from an average face, is an efficient way of coding visual patterns that share a common structure and must be distinguished by subtle variations that define individuals. Adults and school-aged children use norm-based coding for face identity but it is not yet known if pre-school aged…
Compressing DNA sequence databases with coil
Directory of Open Access Journals (Sweden)
Hendy Michael D
2008-05-01
Full Text Available Abstract. Background: Publicly available DNA sequence databases such as GenBank are large and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communication problems. Currently, sequence data are usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results: We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion: coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
EP-based wavelet coefficient quantization for linear distortion ECG data compression.
Hung, King-Chu; Wu, Tsung-Ching; Lee, Hsieh-Wei; Liu, Tung-Kuan
2014-07-01
Maintaining reconstruction quality is essential for ECG data compression intended for diagnostic use. Quantization schemes with non-linear distortion characteristics usually result in time-consuming quality control that blocks real-time application. In this paper, a new wavelet coefficient quantization scheme based on an evolution program (EP) is proposed for wavelet-based ECG data compression. The EP search can create a stationary relationship among the quantization scales of the multi-resolution levels. The stationary property implies that the multi-level quantization scales can be controlled with a single variable. This hypothesis leads to a simple design for linear distortion control with 3-D curve-fitting technology. In addition, a competitive strategy is applied to alleviate the data dependency effect. Using the ECG signals stored in the MIT and PTB databases, many experiments were undertaken to evaluate compression performance, quality control efficiency, and the influence of data dependency. The experimental results show that the new EP-based quantization scheme can achieve high compression performance while keeping the linear distortion behavior efficient. This characteristic guarantees fast quality control even when the prediction model mismatches the practical distortion curve. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
A blended pressure/density based method for the computation of incompressible and compressible flows
International Nuclear Information System (INIS)
Rossow, C.-C.
2003-01-01
An alternative method to low-speed preconditioning for the computation of nearly incompressible flows with compressible methods is developed. For this approach, the leading terms of the flux difference splitting (FDS) approximate Riemann solver are analyzed in the incompressible limit. In combination with the requirement that the velocity field be divergence-free, an elliptic equation is derived to solve for a pressure correction that enforces the divergence-free velocity field on the discrete level. The pressure correction equation established is shown to be equivalent to classical methods for incompressible flows. In order to allow the computation of flows at all speeds, a blending technique is established for the transition from the incompressible, pressure-based formulation to the compressible, density-based formulation. It is found necessary to use preconditioning with this blending technique to account for a remaining 'compressible' contribution in the incompressible limit, and a suitable matrix directly applicable to conservative residuals is derived. Thus, a coherent framework is established to cover the discretization of both incompressible and compressible flows. Compared with standard preconditioning techniques, the blended pressure/density based approach showed improved robustness for high-lift flows close to separation.
Fractal Image Coding Based on a Fitting Surface
Directory of Open Access Journals (Sweden)
Sheng Bi
2014-01-01
Full Text Available A no-search fractal image coding method based on a fitting surface is proposed. In our research, an improved gray-level transform with a fitting surface is introduced. One advantage of this method is that the fitting surface is used for both the range and domain blocks, so one set of parameters can be saved. Another advantage is that the fitting surface can approximate the range and domain blocks better than the previously used fitting planes; this results in smaller block-matching errors and better decoded image quality. Since the no-search and quadtree techniques are adopted, smaller matching errors also imply fewer block matches, which results in a faster encoding process. Moreover, by combining all the fitting surfaces, a fitting surface image (FSI) is also proposed to speed up fractal decoding. Experiments show that our proposed method yields superior performance over the other three comparison methods. Relative to the range-averaged image, the FSI provides a faster fractal decoding process. Finally, by combining the proposed fractal coding method with JPEG, a hybrid coding method is designed which provides a higher PSNR than JPEG while maintaining the same Bpp.
Impact of Different Spreading Codes Using FEC on DWT Based MC-CDMA System
Masum, Saleh; Kabir, M. Hasnat; Islam, Md. Matiqul; Shams, Rifat Ara; Ullah, Shaikh Enayet
2012-01-01
The effect of different spreading codes in a DWT-based MC-CDMA wireless communication system is investigated. In this paper, we present the Bit Error Rate (BER) performance of different spreading codes (Walsh-Hadamard code, orthogonal Gold code, and Golay complementary sequences) using Forward Error Correction (FEC) in the proposed system. The data are analyzed and compared among the different spreading codes in both coded and uncoded cases. It is found via computer simulation that the performance...
VLSI-based video event triggering for image data compression
Williams, Glenn L.
1994-02-01
Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
Holland, Katharina; Sechopoulos, Ioannis; Mann, Ritse M; den Heeten, Gerard J; van Gils, Carla H; Karssemeijer, Nico
2017-11-28
In mammography, breast compression is applied to reduce the thickness of the breast. While it is widely accepted that firm breast compression is needed to ensure acceptable image quality, guidelines remain vague about how much compression should be applied during mammogram acquisition. A quantitative parameter indicating the desirable amount of compression is not available. Consequently, little is known about the relationship between the amount of breast compression and breast cancer detectability. The purpose of this study is to determine the effect of breast compression pressure in mammography on breast cancer screening outcomes. We used digital image analysis methods to determine breast volume, percent dense volume, and pressure from 132,776 examinations of 57,179 women participating in the Dutch population-based biennial breast cancer screening program. Pressure was estimated by dividing the compression force by the area of the contact surface between breast and compression paddle. The data was subdivided into quintiles of pressure and the number of screen-detected cancers, interval cancers, false positives, and true negatives were determined for each group. Generalized estimating equations were used to account for correlation between examinations of the same woman and for the effect of breast density and volume when estimating sensitivity, specificity, and other performance measures. Sensitivity was computed using interval cancers occurring between two screening rounds and using interval cancers within 12 months after screening. Pair-wise testing for significant differences was performed. Percent dense volume increased with increasing pressure, while breast volume decreased. Sensitivity in quintiles with increasing pressure was 82.0%, 77.1%, 79.8%, 71.1%, and 70.8%. Sensitivity based on interval cancers within 12 months was significantly lower in the highest pressure quintile compared to the third (84.3% vs 93.9%, p = 0.034). Specificity was lower in the
Development and Application of a Code for Internal Exposure (CINEX) based on the CINDY code
International Nuclear Information System (INIS)
Kravchik, T.; Duchan, N.; Sarah, R.; Gabay, Y.; Kol, R.
2004-01-01
Internal exposure to radioactive materials at the NRCN is evaluated using the CINDY (Code for Internal Dosimetry) package. The code was developed by the Pacific Northwest Laboratory to assist in the interpretation of bioassay data, provide bioassay projections, and evaluate committed and calendar-year doses from intake or bioassay measurement data. It provides the capability to calculate organ doses and effective dose equivalents using the International Commission on Radiological Protection (ICRP) 30 approach. The CINDY code operates under the DOS operating system, and consequently its operation requires a relatively long procedure that also includes a lot of manual typing, which can lead to human mistakes. A new code has been developed at the NRCN, the CINEX (Code for Internal Exposure), which is an Excel application and leads to a significant reduction in calculation time (on the order of 5-10 times) and in the risk of human mistakes. The code uses a database containing tables constructed with CINDY, which hold the bioassay values predicted by the ICRP 30 model after an intake of a unit of activity of each isotope. Using the database, the code then calculates the appropriate intake and consequently the committed effective dose and organ doses. Calculations with the CINEX code were compared to similar calculations with the CINDY code. The discrepancies were less than 5%, which is the rounding error of the CINDY code. Attached is a table which compares parameters calculated with the CINEX and CINDY codes (for class Y uranium). The CINEX code is now used at the NRCN to calculate occupational intakes of, and doses to, workers handling radioactive materials.
Local coding based matching kernel method for image classification.
Directory of Open Access Journals (Sweden)
Yan Song
Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which a local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution, which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel, efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines the advantages of both BoV and kernel based metrics and achieves linear computational complexity. This enables efficient and scalable visual matching on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, and the PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.
Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J.
2013-01-01
Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this…
PetriCode: A Tool for Template-Based Code Generation from CPN Models
DEFF Research Database (Denmark)
Simonsen, Kent Inge
2014-01-01
Code generation is an important part of model driven methodologies. In this paper, we present PetriCode, a software tool for generating protocol software from a subclass of Coloured Petri Nets (CPNs). The CPN subclass is comprised of hierarchical CPN models describing a protocol system at different...
Al-Salihi, Hayder Qahtan Kshash; Nakhai, Mohammad Reza
2017-01-01
Efficient and highly accurate channel state information (CSI) at the base station (BS) is essential to achieve the potential benefits of massive multiple input multiple output (MIMO) systems. However, the accuracy that is attainable is limited in practice due to the problem of pilot contamination. It has recently been shown that compressed sensing (CS) techniques can address the pilot contamination problem. However, CS-based channel estimation requires prior knowledge of channel sp...
Nuclear component design ontology building based on ASME codes
International Nuclear Information System (INIS)
Bao Shiyi; Zhou Yu; He Shuyan
2005-01-01
The adoption of ontology analysis in the study of concept knowledge acquisition and representation for the nuclear component design process based on computer-supported cooperative work (CSCW) makes it possible to share and reuse the numerous concepts of multi-disciplinary domains. A practical ontology building method is accordingly proposed based on the Protege knowledge model, in combination with both top-down and bottom-up approaches together with Formal Concept Analysis (FCA). FCA exhibits its advantages in the way it helps establish and improve the taxonomic hierarchy of concepts and resolve concept conflicts that occur in modeling multi-disciplinary domains. With Protege-3.0 as the ontology building tool, a nuclear component design ontology based on ASME codes is developed using the proposed ontology building method. The ontology serves as the basis for realizing concept knowledge sharing and reuse in nuclear component design. (authors)
Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding
Directory of Open Access Journals (Sweden)
Dai Qionghai
2010-01-01
Full Text Available We propose a Stereoscopic Visual Attention (SVA) based regional bit allocation optimization for Multiview Video Coding (MVC) that exploits visual redundancies in human perception. We propose a novel SVA model, in which multiple perceptual stimuli including depth, motion, intensity, color, and orientation contrast are utilized to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. A semantic region-of-interest (ROI) is then extracted based on the saliency maps of SVA. Both objective and subjective evaluations of the extracted ROIs indicated that the proposed SVA-model-based ROI extraction scheme outperforms schemes using only spatial or/and temporal visual attention clues. Finally, using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented that allocates more bits to SVA-based ROIs for high image quality and fewer bits to background regions for efficient compression. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve over % bit-rate saving while maintaining the subjective image quality. Meanwhile, the image quality of the ROIs is improved by dB at the cost of imperceptible image quality degradation of the background image.
LSB-Based Steganography Using Reflected Gray Code
Chen, Chang-Chu; Chang, Chin-Chen
Steganography aims to hide secret data in an innocuous cover medium for transmission and to make it difficult for an attacker to recognize the presence of the secret data. Even if the stego-medium is captured by an eavesdropper, the slight distortion is hard to detect. LSB-based data hiding is one of the steganographic methods used to embed secret data into the least significant bits of the pixel values of a cover image. In this paper, we propose an LSB-based scheme using reflected Gray code, which can be applied to determine the embedded bit from the secret information. Following the transformation rule, the LSBs of the stego-image are not always equal to the secret bits, and the experiment shows that the differences are up to almost 50%. According to the mathematical deduction and experimental results, the proposed scheme has the same image quality and payload as the simple LSB substitution scheme. In fact, our proposed data hiding scheme in the case of the G1 (one-bit Gray code) system is equivalent to the simple LSB substitution scheme.
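The flavor of such a scheme can be sketched as follows: interpret the secret bit as the LSB of the Gray code of a pixel value rather than of the value itself, and adjust the pixel by at most 1 when they disagree. This is an illustrative reading of the idea, not the paper's exact embedding rule:

    def gray(v: int) -> int:
        return v ^ (v >> 1)

    def embed(pixel: int, bit: int) -> int:
        """Make the LSB of the pixel's Gray code carry the secret bit."""
        if (gray(pixel) & 1) != bit:
            pixel ^= 1          # flipping the pixel LSB flips the Gray-code LSB
        return pixel

    def extract(pixel: int) -> int:
        return gray(pixel) & 1

    stego = [embed(p, b) for p, b in zip([100, 101, 102, 103], [1, 0, 0, 1])]
    assert [extract(p) for p in stego] == [1, 0, 0, 1]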
Code cases for implementing risk-based inservice testing in the ASME OM code
Energy Technology Data Exchange (ETDEWEB)
Rowley, C.W.
1996-12-01
Historically inservice testing has been reasonably effective, but quite costly. Recent applications of plant PRAs to the scope of the IST program have demonstrated that of the 30 pumps and 500 valves in the typical plant IST program, less than half of the pumps and ten percent of the valves are risk significant. The way the ASME plans to tackle this overly-conservative scope for IST components is to use the PRA and plant expert panels to create a two tier IST component categorization scheme. The PRA provides the quantitative risk information and the plant expert panel blends the quantitative and deterministic information to place the IST component into one of two categories: More Safety Significant Component (MSSC) or Less Safety Significant Component (LSSC). With all the pumps and valves in the IST program placed in MSSC or LSSC categories, two different testing strategies will be applied. The testing strategies will be unique for the type of component, such as centrifugal pump, positive displacement pump, MOV, AOV, SOV, SRV, PORV, HOV, CV, and MV. A series of OM Code Cases are being developed to capture this process for a plant to use. One Code Case will be for Component Importance Ranking. The remaining Code Cases will develop the MSSC and LSSC testing strategy for type of component. These Code Cases are planned for publication in early 1997. Later, after some industry application of the Code Cases, the alternative Code Case requirements will gravitate to the ASME OM Code as appendices.
A new chest compression depth feedback algorithm for high-quality CPR based on smartphone.
Song, Yeongtak; Oh, Jaehoon; Chee, Youngjoon
2015-01-01
Although many smartphone application (app) programs provide education and guidance for basic life support, they do not commonly provide feedback on chest compression depth (CCD) and rate, and the validation of their accuracy has not been reported to date. This study was a feasibility assessment of the use of a smartphone as a CCD feedback device. We propose the concept of a new real-time CCD estimation algorithm using a smartphone and evaluate the accuracy of the algorithm. Using double integration of the acceleration signal obtained from the accelerometer in the smartphone, we estimated the CCD in real time. Based on its periodicity, we removed the bias error from the accelerometer. To evaluate the instrument's accuracy, we used a potentiometer as the reference depth measurement. The evaluation experiments included three levels of CCD (insufficient, adequate, and excessive) and four types of grasping orientations with various compression directions. We used the difference between the reference measurement and the estimated depth as the error, calculated for each compression. When chest compressions were performed with adequate depth on a patient lying on a flat floor, the mean (standard deviation) of the errors was 1.43 (1.00) mm. When the patient was lying on an oblique floor, the mean (standard deviation) of the errors was 3.13 (1.88) mm. The error of the CCD estimation was tolerable for the algorithm to be used in a smartphone-based CCD feedback app for compressions deeper than 51 mm, which is the 2010 American Heart Association guideline.
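Estimating displacement from an accelerometer by double integration is straightforward once the sensor bias is removed; exploiting the periodicity of chest compressions, the mean acceleration over whole compression cycles can serve as the bias estimate. A minimal sketch under those assumptions (the sampling rate and waveform are illustrative; this is not the authors' exact filter):

    import numpy as np

    fs = 250.0                                   # sample rate in Hz (assumed)
    f = 2.0                                      # compression rate ~120/min
    depth = 0.05                                 # true depth: 50 mm
    t = np.arange(0, 1.0, 1 / fs)                # two full compression cycles

    accel = 0.5 * depth * (2 * np.pi * f) ** 2 * np.cos(2 * np.pi * f * t)
    accel_meas = accel + 0.3                     # constant accelerometer bias

    # Periodicity-based bias removal: over whole cycles the true mean is zero.
    accel_est = accel_meas - accel_meas.mean()

    def integrate(x, dt):
        """Cumulative trapezoidal integration."""
        return np.concatenate(([0.0], np.cumsum((x[1:] + x[:-1]) * dt / 2)))

    velocity = integrate(accel_est, 1 / fs)
    displacement = integrate(velocity, 1 / fs)
    print(round((displacement.max() - displacement.min()) * 1000, 1), "mm")  # ~50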
Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics
Kohira, K.; Masuda, H.
2017-09-01
A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormously large. A large storage capacity is required to store such point-clouds, and heavy loads are placed on the network if point-clouds are transferred through it. Therefore, it is desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then, the images are encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.
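The mapping step can be pictured as writing each range measurement into a 2D grid indexed by scan line (derived from GPS time) and beam angle, then saving the grid losslessly as PNG. A minimal sketch with Pillow; the grid size, quantization step, and row/column derivation are illustrative assumptions, not the paper's parameters:

    import numpy as np
    from PIL import Image

    n_lines, n_angles = 512, 1024                 # scan lines x angular steps (assumed)
    rng = np.random.default_rng(3)
    ranges = rng.uniform(0.5, 80.0, size=(n_lines, n_angles))   # toy range data (m)

    # Quantize ranges to 16-bit integers (here: 2 mm steps, an assumption).
    step = 0.002
    img = np.round(ranges / step).astype(np.uint16)

    Image.fromarray(img).save("ranges.png")       # PNG compresses losslessly

    restored = np.asarray(Image.open("ranges.png"), dtype=np.uint16) * step
    print(np.max(np.abs(restored - ranges)) <= step / 2)   # quantization-bounded error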
Owodunni, Damilola S.
2014-04-01
In this paper, compressed sensing techniques are proposed to linearize commercial power amplifiers driven by orthogonal frequency division multiplexing signals. The nonlinear distortion is considered as a sparse phenomenon in the time-domain, and three compressed sensing based algorithms are presented to estimate and compensate for these distortions at the receiver using a few and, at times, even no frequency-domain free carriers (i.e. pilot carriers). The first technique is a conventional compressed sensing approach, while the second incorporates a priori information about the distortions to enhance the estimation. Finally, the third technique involves an iterative data-aided algorithm that does not require any pilot carriers and hence allows the system to work at maximum bandwidth efficiency. The performances of all the proposed techniques are evaluated on a commercial power amplifier and compared. The error vector magnitude and symbol error rate results show the ability of compressed sensing to compensate for the amplifier's nonlinear distortions. © 2013 Elsevier B.V.
An Improved Fast Compressive Tracking Algorithm Based on Online Random Forest Classifier
Directory of Open Access Journals (Sweden)
Xiong Jintao
2016-01-01
Full Text Available The fast compressive tracking (FCT) algorithm, proposed in recent years, is simple and efficient, but it struggles with factors such as occlusion, appearance changes, and pose variation. There are two reasons: first, although the naive Bayes classifier is fast to train, it is not robust to noise; second, its parameters must be tuned to each particular environment for accurate tracking. In this paper, we propose an improved fast compressive tracking algorithm based on an online random forest (FCT-ORF) for robust visual tracking. First, drawing on adaptive compressive sensing theory, we use a weighted random projection to exploit both local and discriminative information of the object. Second, we adopt an online random forest classifier for online tracking, which is adaptively more robust to noise and computationally efficient. The experimental results show that the proposed algorithm performs better under occlusion, appearance changes, and pose variation than the fast compressive tracking algorithm.
The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing.
Ma, Teng; Li, Hui; Yang, Hao; Lv, Xulin; Li, Peiyang; Liu, Tiejun; Yao, Dezhong; Xu, Peng
2017-01-01
Motion-onset visual evoked potentials (mVEP) provide a softer stimulus with reduced fatigue, and they have potential applications for brain computer interface (BCI) systems. However, the mVEP waveform is seriously masked by strong background EEG activity, and an effective approach is needed to extract the corresponding mVEP features to perform task recognition for BCI control. In the current study, we combine deep learning with compressed sensing to mine discriminative mVEP information and improve mVEP BCI performance. The deep learning and compressed sensing approach generates multi-modality features which effectively improve BCI performance, with an accuracy improvement of approximately 3.5% across all 11 subjects, and it is especially effective for those subjects with relatively poor performance when using the conventional features. Compared with the conventional amplitude-based mVEP feature extraction approach, the deep learning and compressed sensing approach achieves higher classification accuracy. According to the results, the deep learning and compressed sensing approach is more effective for extracting mVEP features to construct the corresponding BCI system, and the proposed feature extraction framework is easy to extend to other types of BCIs, such as motor imagery (MI), steady-state visual evoked potential (SSVEP), and P300. Copyright © 2016 Elsevier B.V. All rights reserved.
Color image lossy compression based on blind evaluation and prediction of noise characteristics
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena
2011-03-01
The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is set adaptively. Within this general framework, two possible strategies are considered. The first presumes blind estimation on the image after all operations in the digital image processing chain, just before compressing the given raster image. The second is based on predicting the noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the characteristics of the transformations the image will be subjected to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode, but it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on a large set of real-life color images acquired by digital cameras and shown to provide more than a twofold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.
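The adaptive part reduces to (i) a blind noise estimate and (ii) a mapping from that estimate to a quantization-table scaling factor; a hedged numpy sketch with invented constants (the high-pass kernel and the base/gain mapping are assumptions, not the paper's model):

```python
import numpy as np

def estimate_noise_sigma(gray):
    """Crude blind noise estimate: robust MAD of a high-pass residual,
    normalized by the filter's noise gain (sqrt(1.5) for this kernel).
    A stand-in for the paper's blind dominant-factor analysis."""
    hp = gray[1:, 1:] - 0.5 * (gray[:-1, 1:] + gray[1:, :-1])
    mad = np.median(np.abs(hp - np.median(hp)))
    return 1.4826 * mad / np.sqrt(1.5)

def jpeg_scale_factor(sigma, base=0.5, gain=0.15):
    """Map estimated noise to a scaling factor for the default JPEG
    quantization table: noisier images tolerate coarser quantization.
    base and gain are invented constants, not the paper's model."""
    return base + gain * sigma

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 255.0, 256), (256, 1))  # smooth toy image
noisy = clean + rng.normal(0.0, 10.0, clean.shape)       # true sigma = 10
sigma = estimate_noise_sigma(noisy)
print(f"sigma ~ {sigma:.1f}, scale factor = {jpeg_scale_factor(sigma):.2f}")
```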
Real time network traffic monitoring for wireless local area networks based on compressed sensing
Balouchestani, Mohammadreza
2017-05-01
A wireless local area network (WLAN) is an important type of wireless network connecting different wireless nodes in a local area. WLANs suffer from significant problems such as network load balancing, high energy consumption, and sampling load. This paper presents a new network traffic approach based on Compressed Sensing (CS) for improving the quality of WLANs. The proposed architecture reduces the Data Delay Probability (DDP) to 15%, which is a good result for WLANs. It also increases Data Throughput (DT) by 22% and the Signal-to-Noise (S/N) ratio by 17%, providing a good basis for establishing high-quality local area networks. The architecture enables continuous data acquisition and compression of WLAN signals and is suitable for a variety of other wireless networking applications. At the transmitter side of each wireless node, an analog-CS framework is applied at the sensing step, before the analog-to-digital converter, in order to generate a compressed version of the input signal. At the receiver side, a reconstruction algorithm is applied to reconstruct the original signals from the compressed signals with high probability and sufficient accuracy. The proposed algorithm outperforms existing algorithms by achieving a good level of Quality of Service (QoS), reducing the Bit Error Rate (BER) at each wireless node by 15%.
DNABIT Compress – Genome compression algorithm
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-01
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our ...
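For orientation, the simplest fixed-width instance of bit assignment packs each base into 2 bits, a 4:1 saving over 8-bit ASCII; DNABIT Compress itself assigns bits to variable-size segments and treats repeats specially, which this toy sketch does not attempt:

```python
# Minimal fixed 2-bit packing of DNA bases, 4 bases per byte.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq):
    """Pack a DNA string into bytes (4:1 over one ASCII byte per base)."""
    bits = [CODE[b] for b in seq]
    out = bytearray()
    for i in range(0, len(bits), 4):
        chunk = bits[i:i + 4]
        byte = 0
        for two in chunk:
            byte = (byte << 2) | two
        byte <<= 2 * (4 - len(chunk))   # left-align a ragged final chunk
        out.append(byte)
    return bytes(out)

print(pack("ACGTACGT").hex())  # '1b1b': ACGT -> 0b00011011, twice
```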
Depth Measurement Based on Infrared Coded Structured Light
Directory of Open Access Journals (Sweden)
Tong Jia
2014-01-01
Full Text Available Depth measurement is a challenging problem in computer vision research. In this study, we first design a new grid pattern and develop a sequence coding and decoding algorithm to process the pattern. Second, we propose a linear fitting algorithm to derive the linear relationship between the object depth and pixel shift. Third, we obtain depth information on an object based on this linear relationship. Moreover, 3D reconstruction is implemented based on the Delaunay triangulation algorithm. Finally, we utilize the regularity of the error curves to correct the systematic errors and improve the measurement accuracy. The experimental results show that the accuracy of depth measurement is related to the step length of the moving object.
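The linear-fitting step amounts to ordinary least squares on calibration pairs; a small numpy sketch with made-up calibration numbers (the pattern decoding that produces the pixel shifts is not shown):

```python
import numpy as np

# Invented calibration pairs: pixel shift measured at known depths (mm)
shifts = np.array([12.0, 17.5, 23.1, 28.4, 33.9])
depths = np.array([400.0, 500.0, 600.0, 700.0, 800.0])

a, b = np.polyfit(shifts, depths, 1)     # least-squares line: depth = a*shift + b
print(f"depth(shift) = {a:.2f} * shift + {b:.1f}")
print(f"depth at 20 px shift: {a * 20 + b:.0f} mm")
```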
Fire-safety engineering and performance-based codes
DEFF Research Database (Denmark)
Sørensen, Lars Schiøtt
Fire-safety Engineering is written as a textbook for Engineering students at universities and other institutions of higher education that teach in the area of fire. The book can also be used as a work of reference for consulting engineers, building product manufacturers, contractors, building project administrators, etc. The book deals with the following topics: • Historical presentation on the subject of fire • Legislation and building project administration • European fire standardization • Passive and active fire protection • Performance-based Codes • Fire-safety Engineering • Fundamental thermodynamics • Heat exchange during the fire process • Skin burns • Burning rate, energy release rate and design fires • Proposal to Risk-based design fires • Proposal to a Fire scale • Material ignition and flame spread • Fire dynamics in buildings • Combustion products and toxic gases • Smoke inhalation...
Performance evaluation based on data from code reviews
Andrej, Sekáč
2016-01-01
Context. Modern code review tools such as Gerrit have made available great amounts of code review data from different open source projects as well as other commercial projects. Code reviews are used to keep the quality of produced source code under control, but the stored data could also be used for evaluation of the software development process. Objectives. This thesis uses machine learning methods to approximate a review expert's performance evaluation function. Due to limitations in ...
Directory of Open Access Journals (Sweden)
Ying Chen
2018-03-01
Full Text Available Rate-distortion optimization (RDO) plays an essential role in substantially enhancing coding efficiency. Currently, rate-distortion optimized mode decision is widely used in scalable video coding (SVC). Among all possible coding modes, it aims to select the one with the best trade-off between bitrate and compression distortion. Specifically, this trade-off is tuned through the choice of the Lagrange multiplier. Despite the prevalence of the conventional method for Lagrange multiplier selection in hybrid video coding, the underlying formulation is not applicable to 3-D wavelet-based SVC, where explicit values of the quantization step are not available and the content features of the input signal are not considered. In this paper, an efficient content-adaptive Lagrange multiplier selection algorithm is proposed in the context of RDO for 3-D wavelet-based SVC targeting quality scalability. Our contributions are two-fold. First, we introduce a novel weighting method that takes account of mutual information, gradient per pixel, and texture homogeneity to measure the temporal subband characteristics after applying the motion-compensated temporal filtering (MCTF) technique. Second, based on the proposed subband weighting factor model, we derive the optimal Lagrange multiplier. Experimental results demonstrate that the proposed algorithm enables more satisfactory video quality with negligible additional computational complexity.
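For context, rate-distortion optimized mode decision evaluates the Lagrangian J = D + λR per candidate mode; a minimal sketch with invented candidates (the paper's contribution, deriving a content-adaptive λ, is not reproduced):

```python
def rdo_select(modes, lam):
    """Pick the coding mode minimizing J = D + lam * R.

    modes: (name, distortion, rate_bits) candidates. The paper derives
    a content-adaptive lam from subband weights; this generic selector
    just takes lam as an input.
    """
    return min(modes, key=lambda m: m[1] + lam * m[2])[0]

# Invented candidate modes: (name, SSD distortion, rate in bits)
candidates = [("SKIP", 900.0, 8), ("INTER", 420.0, 96), ("INTRA", 380.0, 240)]
for lam in (0.1, 2.0, 8.0):   # larger lam favours cheap (low-rate) modes
    print(lam, "->", rdo_select(candidates, lam))
```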
Gröbner Bases, Coding, and Cryptography
Sala, Massimiliano; Perret, Ludovic
2009-01-01
Coding theory and cryptography allow secure and reliable data transmission, which is at the heart of modern communication. This book offers a comprehensive overview on the application of commutative algebra to coding theory and cryptography. It analyzes important properties of algebraic/geometric coding systems individually.
Abramoff, M.D.
2006-01-01
Knowledge of the effect of compression of ophthalmic images on diagnostic reading is essential for effective tele-ophthalmology applications. It was therefore with great anticipation that I read the article “The Effect of Compression on Clinical Diagnosis of Glaucoma Based on Non-analyzed Confocal
PHEBUS FP release analysis using a microstructure-based code
International Nuclear Information System (INIS)
Carlucci, L.N.
1992-03-01
The results of pre-test fission-product (FP) release analyses of the first two PHEBUS FP experiments, FPT0 and FPT1, indicate that the FREEDOM microstructure-based code predicts significant differences in both the timing and percent of gaseous FP releases for the two tests. To provide an indication of its predictive capability, FREEDOM was also used to model the high-burnup fuel tested in the Oak Ridge National Laboratory experiments VI-2 and VI-3. For these, the code was found to overpredict releases during the early stages of the tests and to underpredict releases during the later stages. The release kinetics in both tests were reasonably predicted, however. In view of the above, it is likely that the FREEDOM predictions of the final cumulative releases for the first two PHEBUS FP tests are lower-bound estimates. However, the significant difference in the predicted timing of initial releases for the two tests is felt to be indicative of what will occur. Therefore, this difference should be considered in the planning and conduct of the two tests, particularly aspects related to on-line measurements
Toward Wireless Health Monitoring via an Analog Signal Compression-Based Biosensing Platform.
Zhao, Xueyuan; Sadhu, Vidyasagar; Le, Tuan; Pompili, Dario; Javanmard, Mehdi
2018-06-01
Wireless all-analog biosensor design for concurrent microfluidic and physiological signal monitoring is presented in this paper. The key component is an all-analog circuit capable of compressing two analog sources into one analog signal by analog joint source-channel coding (AJSCC). Two circuit designs are discussed: the stacked voltage-controlled voltage source (VCVS) design with a fixed number of levels, and an improved design which supports a flexible number of AJSCC levels. Experimental results are presented on the wireless biosensor prototype, composed of printed circuit board realizations of the stacked-VCVS design. Furthermore, circuit simulation and wireless link simulation results are presented for the improved design. Results indicate that the proposed wireless biosensor is well suited for sensing two biological signals simultaneously with high accuracy, and can be applied to a wide variety of low-power and low-cost wireless continuous health monitoring applications.
Spike-based population coding and working memory.
Directory of Open Access Journals (Sweden)
Martin Boerlin
2011-02-01
Full Text Available Compelling behavioral evidence suggests that humans can make optimal decisions despite the uncertainty inherent in perceptual or motor tasks. A key question in neuroscience is how populations of spiking neurons can implement such probabilistic computations. In this article, we develop a comprehensive framework for optimal, spike-based sensory integration and working memory in a dynamic environment. We propose that probability distributions are inferred spike-per-spike in recurrently connected networks of integrate-and-fire neurons. As a result, these networks can combine sensory cues optimally, track the state of a time-varying stimulus and memorize accumulated evidence over periods much longer than the time constant of single neurons. Importantly, we propose that population responses and persistent working memory states represent entire probability distributions and not only single stimulus values. These memories are reflected by sustained, asynchronous patterns of activity which make relevant information available to downstream neurons within their short time window of integration. Model neurons act as predictive encoders, only firing spikes which account for new information that has not yet been signaled. Thus, spike times signal deterministically a prediction error, contrary to rate codes in which spike times are considered to be random samples of an underlying firing rate. As a consequence of this coding scheme, a multitude of spike patterns can reliably encode the same information. This results in weakly correlated, Poisson-like spike trains that are sensitive to initial conditions but robust to even high levels of external neural noise. This spike train variability reproduces the one observed in cortical sensory spike trains, but cannot be equated to noise. On the contrary, it is a consequence of optimal spike-based inference. In contrast, we show that rate-based models perform poorly when implemented with stochastically spiking neurons.
Fusion of Thresholding Rules During Wavelet-Based Noisy Image Compression
Directory of Open Access Journals (Sweden)
Bekhtin Yury
2016-01-01
Full Text Available A new method for combining semisoft thresholding rules during wavelet-based data compression of images with multiplicative noise is suggested. The method chooses the best thresholding rule and the threshold value using the proposed criteria, which provide the best nonlinear approximations and take into consideration errors of quantization. The results of computer modeling have shown that the suggested method provides relatively good image quality after restoration in the sense of criteria such as PSNR, SSIM, etc.
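For reference, the semisoft (firm) shrinkage rule interpolates between hard and soft thresholding; a short numpy sketch (the abstract's rule-selection criteria are not reproduced):

```python
import numpy as np

def semisoft(x, t1, t2):
    """Semisoft (firm) shrinkage: zero below t1, identity above t2,
    linear interpolation in between. Soft and hard thresholding are
    the limiting cases t2 -> infinity and t2 -> t1, respectively."""
    ax = np.abs(x)
    mid = np.sign(x) * t2 * (ax - t1) / (t2 - t1)
    return np.where(ax <= t1, 0.0, np.where(ax >= t2, x, mid))

coeffs = np.array([-3.0, -1.2, -0.4, 0.2, 0.9, 2.5])
print(semisoft(coeffs, t1=0.5, t2=2.0))
```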
Task-oriented lossy compression of magnetic resonance images
Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques
1996-04-01
A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.
Hot-compress: A new postdeposition treatment for ZnO-based flexible dye-sensitized solar cells
Energy Technology Data Exchange (ETDEWEB)
Haque Choudhury, Mohammad Shamimul, E-mail: shamimul129@gmail.com [Department of Frontier Material, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, Aichi 466-8555 (Japan); Department of Electrical and Electronic Engineering, International Islamic University Chittagong, b154/a, College Road, Chittagong 4203 (Bangladesh); Kishi, Naoki; Soga, Tetsuo [Department of Frontier Material, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, Aichi 466-8555 (Japan)
2016-08-15
Highlights: • A new postdeposition treatment named hot-compress is introduced. • Hot-compression gives a homogeneous, compact-layer ZnO photoanode. • I-V and EIS analysis data confirm the efficacy of this method. • Charge transport resistance was reduced by the application of hot-compression. - Abstract: This article introduces a new postdeposition treatment named hot-compress for flexible zinc oxide-based dye-sensitized solar cells. This postdeposition treatment consists of applying compression pressure at an elevated temperature. The optimum compression pressure of 130 MPa at an optimum compression temperature of 70 °C gives better photovoltaic performance compared to conventional cells. The aptness of this method was confirmed by scanning electron microscopy imaging, X-ray diffraction, current-voltage, and electrochemical impedance spectroscopy analysis of the prepared cells. Proper heating during compression lowers the charge transport resistance and lengthens the electron lifetime of the device. As a result, the overall power conversion efficiency of the device was improved by about 45% compared to the conventional room-temperature compressed cell.
A method of loss free compression for the data of nuclear spectrum
International Nuclear Information System (INIS)
Sun Mingshan; Wu Shiying; Chen Yantao; Xu Zurun
2000-01-01
A new method of lossless compression based on the features of nuclear spectrum data is provided, from which a practicable algorithm is successfully derived. A compression rate varying from 0.50 to 0.25 is obtained, and the distribution of the processed data becomes even more suitable for reprocessing by another compression method, such as Huffman coding, to improve the compression rate further.
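The feature such methods exploit is the strong correlation between adjacent spectrum channels; a generic, invertible delta-encoding sketch (not the paper's specific algorithm) shows why the residuals become friendly to a back-end entropy coder such as Huffman coding:

```python
import numpy as np

def delta_encode(counts):
    """Lossless delta encoding of spectrum channel counts.

    Adjacent channels are strongly correlated, so first differences
    cluster near zero and have far lower entropy than the raw counts.
    Invertible: np.cumsum(diffs) reproduces counts exactly.
    """
    return np.diff(counts, prepend=0)

def entropy(values):
    """Empirical zeroth-order entropy in bits per symbol."""
    _, freq = np.unique(values, return_counts=True)
    p = freq / freq.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
spectrum = np.cumsum(rng.poisson(3, 4096))   # smooth toy spectrum
print("raw entropy:  ", entropy(spectrum), "bits/channel")
print("delta entropy:", entropy(delta_encode(spectrum)), "bits/channel")
```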
Leturiondo, Mikel; Ruiz de Gauna, Sofía; Ruiz, Jesus M; Julio Gutiérrez, J; Leturiondo, Luis A; González-Otero, Digna M; Russell, James K; Zive, Dana; Daya, Mohamud
2018-03-01
Capnography has been proposed as a method for monitoring the ventilation rate during cardiopulmonary resuscitation (CPR). A high incidence (above 70%) of capnograms distorted by chest compression induced oscillations has been previously reported in out-of-hospital (OOH) CPR. The aim of the study was to better characterize the chest compression artefact and to evaluate its influence on the performance of a capnogram-based ventilation detector during OOH CPR. Data from the MRx monitor-defibrillator were extracted from OOH cardiac arrest episodes. For each episode, presence of chest compression artefact was annotated in the capnogram. Concurrent compression depth and transthoracic impedance signals were used to identify chest compressions and to annotate ventilations, respectively. We designed a capnogram-based ventilation detection algorithm and tested its performance with clean and distorted episodes. Data were collected from 232 episodes comprising 52 654 ventilations, with a mean (±SD) of 227 (±118) per episode. Overall, 42% of the capnograms were distorted. Presence of chest compression artefact degraded algorithm performance in terms of ventilation detection, estimation of ventilation rate, and the ability to detect hyperventilation. Capnogram-based ventilation detection during CPR using our algorithm was compromised by the presence of chest compression artefact. In particular, artefact spanning from the plateau to the baseline strongly degraded ventilation detection, and caused a high number of false hyperventilation alarms. Further research is needed to reduce the impact of chest compression artefact on capnographic ventilation monitoring. Copyright © 2017 Elsevier B.V. All rights reserved.
Development of multi-physics code systems based on the reactor dynamics code DYN3D
Energy Technology Data Exchange (ETDEWEB)
Kliem, Soeren; Gommlich, Andre; Grahn, Alexander; Rohde, Ulrich [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany); Schuetze, Jochen [ANSYS Germany GmbH, Darmstadt (Germany); Frank, Thomas [ANSYS Germany GmbH, Otterfing (Germany); Gomez Torres, Armando M.; Sanchez Espinoza, Victor Hugo [Karlsruher Institut fuer Technologie (KIT), Eggenstein-Leopoldshafen (Germany)
2011-07-15
The reactor dynamics code DYN3D has been coupled with the CFD code ANSYS CFX and the 3D thermal hydraulic core model FLICA4. In the coupling with ANSYS CFX, DYN3D calculates the neutron kinetics and the fuel behavior including the heat transfer to the coolant. The physical data interface between the codes is the volumetric heat release rate into the coolant. In the coupling with FLICA4 only the neutron kinetics module of DYN3D is used. Fluid dynamics and related transport phenomena in the reactor's coolant and fuel behavior is calculated by FLICA4. The correctness of the coupling of DYN3D with both thermal hydraulic codes was verified by the calculation of different test problems. These test problems were set-up in such a way that comparison with the DYN3D stand-alone code was possible. This included steady-state and transient calculations of a mini-core consisting of nine real-size PWR fuel assemblies with ANSYS CFX/DYN3D as well as mini-core and a full core steady-state calculation using FLICA4/DYN3D. (orig.)
Mekuria, R.; Sanna, M.; Izquierdo, E; Bulterman, D.; Garcia, P.
2014-01-01
3-D tele-immersion (3DTI) enables participants in remote locations to share, in real time, an activity. It offers users interactive and immersive experiences, but it challenges current media-streaming solutions. Work in the past has mainly focused on the efficient delivery of image-based 3-D videos
A New Approach for Fingerprint Image Compression
Energy Technology Data Exchange (ETDEWEB)
Mazieres, Bertrand
1997-12-01
The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Also, without any compression, transmitting a 10 Mb card over a 9600 baud connection takes 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI specification defines a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate assumption. Since the transform produces 64 subbands, quite a few bands receive only a few bits, even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to bit allocation that makes more sense theoretically. We then address some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
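The high-rate assumption mentioned above refers to the classic log-variance allocation rule; a numpy sketch with invented subband variances, showing the negative allocations at low rates that motivate revisiting the approach:

```python
import numpy as np

def high_rate_allocation(variances, avg_rate):
    """Textbook high-rate bit allocation across subbands:
        b_k = avg_rate + 0.5 * log2(var_k / geometric_mean(var)).
    Weak subbands can be assigned negative bits at low average rates,
    one reason the allocation deserves revisiting."""
    logv = np.log2(np.asarray(variances, dtype=float))
    return avg_rate + 0.5 * (logv - logv.mean())

subband_var = [900.0, 120.0, 30.0, 4.0, 0.5]   # invented subband variances
print(high_rate_allocation(subband_var, avg_rate=0.75))
```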
Ma, Lihong; Jin, Weimin
2018-01-01
A novel symmetric and asymmetric hybrid optical cryptosystem is proposed based on compressive sensing combined with computer-generated holography. In this method there are six encryption keys, among which the two decryption phase masks differ from the two random phase masks used in the encryption process; the encryption system therefore has features of both symmetric and asymmetric cryptography. On the other hand, because computer-generated holography can flexibly digitalize the encrypted information, compressive sensing can significantly reduce the data volume, and the final encrypted image is a real-valued function obtained by phase truncation, the method favors storage and transmission of the encrypted data. The experimental results demonstrate that the proposed encryption scheme boosts security and has high robustness against noise and occlusion attacks.
Directory of Open Access Journals (Sweden)
J. Obedt Figueroa-Cavazos
2016-01-01
Full Text Available This work explores the viability of 3D printed intervertebral lumbar cages based on the biocompatible polycarbonate (PC-ISO®) material. Several design concepts are proposed for the generation of patient-specific intervertebral lumbar cages. The 3D printed material achieved a compressive yield strength of 55 MPa under a specific combination of manufacturing parameters. The literature recommends a reference load of 4,000 N for the design of intervertebral lumbar cages. Under compression testing conditions, the proposed design concepts withstand between 7,500 and 10,000 N of load before yielding. Although some stress concentration regions were found during analysis, the overall viability of the proposed design concepts was validated.
Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur
2009-05-01
Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured from high dynamic range scenes with non-uniform lighting conditions. The fast image enhancement algorithm, which provides dynamic range compression while preserving the local contrast and tonal rendition, is also a good candidate for real-time video processing applications. Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the algorithm fails to produce color-constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback; hence, a different approach is required for the final color restoration process. In this paper the latest version of the proposed algorithm, which deals with this issue, is presented. The results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.
Directory of Open Access Journals (Sweden)
X.Z. Jiang
2014-07-01
Full Text Available Over the past few decades, wireless sensor networks have been widely used in the field of structural health monitoring of civil, mechanical, and aerospace systems. Currently, most wireless sensor networks are battery-powered, and maintenance is costly and unsustainable because of the requirement for frequent battery replacement. As an attempt to address this issue, this article theoretically and experimentally studies a compression-based piezoelectric energy harvester using a multilayer stack configuration, which is suitable for civil infrastructure systems where large compressive loads occur, such as heavy vehicular loading acting on pavements. We first present analytical and numerical modeling of the piezoelectric multilayer stack under axial compressive loading, based on the linear theory of piezoelectricity. A two-degree-of-freedom electromechanical model, considering both the mechanical and electrical aspects of the proposed harvester, was developed to characterize the harvested electrical power under an external electrical load. Exact closed-form expressions of the electromechanical models have been derived to analyze the mechanical and electrical properties of the proposed harvester. The theoretical analyses are validated through several experiments on a test prototype under harmonic excitations. The test results exhibit very good agreement with the analytical analyses and numerical simulations over a range of resistive loads and input excitation levels.
An Adaptive Joint Sparsity Recovery for Compressive Sensing Based EEG System
Directory of Open Access Journals (Sweden)
Hamza Djelouat
2017-01-01
Full Text Available The last decade has witnessed tremendous efforts to shape Internet of things (IoT) platforms to be well suited for healthcare applications. These platforms comprise a network of wireless sensors that monitor several physical and physiological quantities. For instance, long-term monitoring of brain activities using wearable electroencephalogram (EEG) sensors is widely exploited in the clinical diagnosis of epileptic seizures and sleeping disorders. However, the deployment of such platforms is challenged by high power consumption and system complexity. Energy efficiency can be achieved by exploring efficient compression techniques such as compressive sensing (CS). CS is an emerging theory that enables compressed acquisition using well-designed sensing matrices. Moreover, system complexity can be optimized by using hardware-friendly structured sensing matrices. This paper quantifies the performance of CS-based multichannel EEG monitoring. In addition, the paper exploits the joint sparsity of multichannel EEG using the subspace pursuit (SP) algorithm as well as a designed sparsifying basis in order to improve the reconstruction quality. Furthermore, the paper proposes a modification to the SP algorithm based on an adaptive selection approach to further improve the performance in terms of reconstruction quality, execution time, and the robustness of the recovery process.
DETERMINING OPTIMAL CUBE FOR 3D-DCT BASED VIDEO COMPRESSION FOR DIFFERENT MOTION LEVELS
Directory of Open Access Journals (Sweden)
J. Augustin Jacob
2012-11-01
Full Text Available This paper proposes a new three-dimensional discrete cosine transform (3D-DCT) based video compression algorithm that selects the optimal cube size based on the motion content of the video sequence. The motion content is determined by finding normalized pixel difference (NPD) values; by categorizing the cubes as "low" or "high" motion, a suitable cube size of dimension either [16×16×8] or [8×8×8] is chosen instead of a fixed cube size. To evaluate the performance of the proposed algorithm, test sequences with different motion levels are chosen. Rate vs. distortion analysis determines the achievable level of compression and the quality of the reconstructed video sequence, compared against the fixed cube size algorithm. Peak signal-to-noise ratio (PSNR) is taken to measure the video quality. Experimental results show that varying the cube size according to the motion content of the video frames gives better performance in terms of compression ratio and video quality.
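A compact sketch of the cube-size decision under an assumed definition of NPD (mean absolute inter-frame difference) and an invented threshold; the paper's exact NPD formula and threshold are not reproduced:

```python
import numpy as np

def choose_cube(frames, threshold=12.0):
    """Pick the 3D-DCT cube size from motion content.

    NPD is taken here as the mean absolute inter-frame pixel
    difference on a 0-255 scale; low-motion groups get the larger
    [16x16x8] cube, high-motion groups the smaller [8x8x8] cube.
    The threshold is illustrative, not the paper's value.
    """
    npd = np.mean(np.abs(np.diff(frames.astype(float), axis=0)))
    return (16, 16, 8) if npd < threshold else (8, 8, 8)

rng = np.random.default_rng(1)
static = np.repeat(rng.integers(0, 256, (1, 64, 64)), 8, axis=0)  # no motion
noisy = rng.integers(0, 256, (8, 64, 64))                         # heavy motion
print(choose_cube(static), choose_cube(noisy))
```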
Toward topology-based characterization of small-scale mixing in compressible turbulence
Suman, Sawan; Girimaji, Sharath
2011-11-01
Turbulent mixing rate at small scales of motion (molecular mixing) is governed by the steepness of the scalar-gradient field which in turn is dependent upon the prevailing velocity gradients. Thus motivated, we propose a velocity-gradient topology-based approach for characterizing small-scale mixing in compressible turbulence. We define a mixing efficiency metric that is dependent upon the topology of the solenoidal and dilatational deformation rates of a fluid element. The mixing characteristics of solenoidal and dilatational velocity fluctuations are clearly delineated. We validate this new approach by employing mixing data from direct numerical simulations (DNS) of compressible decaying turbulence with passive scalar. For each velocity-gradient topology, we compare the mixing efficiency predicted by the topology-based model with the corresponding conditional scalar variance obtained from DNS. The new mixing metric accurately distinguishes good and poor mixing topologies and indeed reasonably captures the numerical values. The results clearly demonstrate the viability of the proposed approach for characterizing and predicting mixing in compressible flows.
Directory of Open Access Journals (Sweden)
Sheng Shen
2018-04-01
Full Text Available The accuracy of underwater acoustic target recognition via limited ship radiated noise can be improved by a deep neural network trained with a large number of unlabeled samples. However, redundant features learned by the deep neural network have negative effects on recognition accuracy and efficiency. A compressed deep competitive network is proposed to learn and extract features from ship radiated noise. The core ideas of the algorithm are: (1) Competitive learning: by integrating competitive learning into the restricted Boltzmann machine learning algorithm, the hidden units share the weights in each predefined group; (2) Network pruning: pruning based on mutual information is deployed to remove redundant parameters and further compress the network. Experiments based on real ship radiated noise show that the network can increase recognition accuracy with fewer informative features. The compressed deep competitive network achieves a classification accuracy of 89.1%, which is 5.3% higher than the deep competitive network and 13.1% higher than state-of-the-art signal processing feature extraction methods.
Evaluation of mammogram compression efficiency
International Nuclear Information System (INIS)
Przelaskowski, A.; Surowski, P.; Kukula, A.
2005-01-01
Lossy image coding significantly improves performance over lossless methods, but reliable control of diagnostic accuracy for compressed images is necessary. The acceptable range of compression ratios must be safe with respect to as many objective criteria as possible. This study evaluates the compression efficiency of digital mammograms in both numerically lossless (reversible) and lossy (irreversible) manners. Effective compression methods and concepts were examined to increase archiving and telediagnosis performance. Lossless compression, as a primary applicable tool for medical applications, was verified on a set of 131 mammograms. Moreover, nine radiologists participated in the evaluation of lossy compression of mammograms. Subjective rating of diagnostically important features produced a set of mean rates for each test image. The lesion detection test resulted in binary decision data that were analyzed statistically. The radiologists rated and interpreted malignant and benign lesions, representative pathology symptoms, and other structures susceptible to compression distortions contained in 22 original and 62 reconstructed mammograms. Test mammograms were collected in two radiology centers over three years and then selected according to diagnostic content suitable for an evaluation of compression effects. Lossless compression efficiency of the tested coders varied, but CALIC, JPEG-LS, and SPIHT performed best. The evaluation of lossy compression effects on detection ability was based on ROC-like analysis. Assuming a two-sided significance level of p=0.05, the null hypothesis that lower bit rate reconstructions are as useful for diagnosis as the originals was rejected in sensitivity tests with 0.04 bpp mammograms. However, verification of the same hypothesis with 0.1 bpp reconstructions suggested their acceptance. Moreover, the 1 bpp reconstructions were rated very similarly to the original mammograms in the diagnostic quality evaluation test, but the
Highly Efficient Compression Algorithms for Multichannel EEG.
Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda
2018-05-01
The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques for single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low-complexity bivariate model, have been examined and their performances compared. Furthermore, a high-compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with existing methods, and in some cases it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent root-mean-square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over state-of-the-art compression methods.
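As a sketch of the predictive stage common to such coders, the following uses a fixed second-order predictor (hypothetical; the paper's MVAR and context-based error modeling are not reproduced); the whitened residuals are what the entropy coder compresses:

```python
import numpy as np

def lp_residuals(x):
    """Fixed second-order linear predictor x_hat[n] = 2*x[n-1] - x[n-2];
    returns the integer residuals x[n] - x_hat[n]. Smooth biosignals
    yield residuals clustered near zero, which is what a back-end
    entropy coder (Huffman/arithmetic) exploits."""
    x = np.asarray(x, dtype=np.int64)
    return x[2:] - (2 * x[1:-1] - x[:-2])

t = np.arange(2000)
eeg = np.round(50 * np.sin(2 * np.pi * 10 * t / 250)).astype(np.int64)  # toy 10 Hz rhythm
res = lp_residuals(eeg)
print("raw range:", int(eeg.max() - eeg.min()),
      " residual range:", int(res.max() - res.min()))
```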
Design of ACM system based on non-greedy punctured LDPC codes
Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng
2017-08-01
In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes are constructed by a non-greedy puncturing method which shows good performance in the high code rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed. Under this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is lowered. Simulations show that increasingly significant coding gain can be obtained by the proposed ACM system at higher throughput.
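Rate compatibility rests on puncturing a low-rate mother code; a toy sketch of the mechanics with an arbitrary pattern (the paper's non-greedy pattern search is not reproduced):

```python
import numpy as np

def puncture(codeword, pattern):
    """Transmit only the bits where pattern == 1; the decoder treats
    punctured positions as erasures, so one mother encoder/decoder
    serves every rate in the family."""
    return codeword[pattern.astype(bool)]

def effective_rate(k, pattern):
    """Code rate after puncturing: k info bits over transmitted bits."""
    return k / int(pattern.sum())

k = 4                                          # info bits of a toy (8,4) mother code
codeword = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # rate 1/2 before puncturing
pat = np.array([1, 1, 1, 0, 1, 1, 0, 1])       # arbitrary pattern keeping 6 of 8 bits
print(puncture(codeword, pat), effective_rate(k, pat))  # rate 2/3
```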
Data compression of scanned halftone images
DEFF Research Database (Denmark)
Forchhammer, Søren; Jensen, Kim S.
1994-01-01
with the halftone grid, and converted to a gray level representation. A new digital description of (halftone) grids has been developed for this purpose. The gray level values are coded according to a scheme based on states derived from a segmentation of gray values. To enable real-time processing of high resolution scanner output, the coding has been parallelized and implemented on a transputer system. For comparison, the test image was coded using existing (lossless) methods, giving compression rates of 2-7. The best of these, a combination of predictive and binary arithmetic coding, was modified and optimized...
Perceptual distortion analysis of color image VQ-based coding
Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine
1997-04-01
It is generally accepted that an RGB color image can be easily encoded by using a gray-scale compression technique on each of the three color planes. Such an approach, however, fails to take into account correlations existing between the color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured the contrast and luminance of the video framebuffer in order to precisely control color. We then obtained psychophysical judgements to measure how well these methods minimize perceptual distortion in a variety of color spaces.
Improvement of QR Code Recognition Based on Pillbox Filter Analysis
Directory of Open Access Journals (Sweden)
Jia-Shing Sheu
2013-04-01
Full Text Available The objective of this paper is to improve the recognition of blurred captured QR code images through pillbox filter analysis. QR code images can be captured by digital video cameras. Many factors contribute to QR code decoding failure, such as low image quality. Focus is an important factor that affects the quality of the image. This study discusses out-of-focus QR code images and aims to improve the recognition of their contents. Many studies have used the pillbox filter (circular averaging filter) method to simulate an out-of-focus image, and this method is also used in this investigation. Blurred QR code images are separated into nine levels. In the experiment, four different quantitative approaches are used to reconstruct and decode an out-of-focus QR code image, and the nine QR code images reconstructed by these methods are then compared. The final experimental results indicate improvements in identification.
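A small sketch of the pillbox kernel itself, the standard defocus model referred to above; convolving a sharp image with it simulates out-of-focus capture (the paper's four reconstruction approaches are not reproduced):

```python
import numpy as np

def pillbox(radius):
    """Circular averaging (pillbox) kernel of the given pixel radius.
    Larger radii correspond to stronger defocus; grading the radius
    is one way to produce a series of blur levels."""
    r = int(np.ceil(radius))
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    k = (xx**2 + yy**2 <= radius**2).astype(float)
    return k / k.sum()   # normalize so the kernel preserves brightness

print(pillbox(2))
```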
International Nuclear Information System (INIS)
Vlasik, K.F.; Grachev, V.M.; Dmitrenko, V.V.; Sokolov, D.V.; Ulin, S.E.; Uteshev, Z.M.
2000-01-01
The paper describes an algorithm to detect and identify radionuclides on the basis of γ-spectra obtained with a compressed-xenon-based γ-spectrometer. The algorithm rests on comparing the measured γ-spectra with tabulated radionuclide data, and comparison criteria are formulated. A software package implementing the algorithm and supporting the complete γ-spectrum processing workflow has been developed. The algorithm was evaluated using real spectra; its applicability and efficiency are demonstrated [ru]
Directory of Open Access Journals (Sweden)
Noor D. N.
2016-01-01
Full Text Available Conventional air conditioners, or vapour compression systems, are main contributors to energy consumption in modern buildings. Common environmental issues arising from vapour compression systems include greenhouse gas emissions and heat wastage. These problems can be reduced by integrating solar energy components into the vapour compression system. However, the intermittent nature of daily solar radiation is the main issue for solar energy systems. This paper presents recent studies on hybrid air conditioning systems. In addition, the basic vapour compression system and the components involved in solar air conditioning systems are discussed. The introduction of low-temperature storage can be an attractive and economical solution, enabling different modes of operating strategies. Yet very few studies have examined optimal operating strategies for the hybrid system. Finally, the findings of this review will help suggest optimization of solar absorption and vapour compression based hybrid air conditioning systems for future work, considering both economic and environmental factors.
Web-based tool for subjective observer ranking of compressed medical images
Langer, Steven G.; Stewart, Brent K.; Andrew, Rex K.
1999-05-01
In the course of evaluating various compression schemes for ultrasound teleradiology applications, it became obvious that paper-based methods of data collection were time consuming and error prone. A method was sought that allowed participating radiologists to view the ultrasound video clips (compressed to varying degrees) at their desks. Furthermore, the method should allow observers to enter their evaluations and, when finished, automatically submit the data to our statistical analysis engine. We found that the World Wide Web offered a ready solution. A web page was constructed that contains 18 embedded AVI video clips. The 18 clips represent 6 distinct anatomical areas, compressed by various methods and amounts, and then randomly distributed through the web page. To the right of each video, a series of questions asks the observer to rank (1-5) his/her ability to answer diagnostically relevant questions. When completed, the observer presses 'Submit' and a file of tab-delimited text is created, which can then be imported into an Excel workbook. Kappa analysis is then performed, and the resulting plots demonstrate observer preferences.
A Compressed Sensing-Based Wearable Sensor Network for Quantitative Assessment of Stroke Patients
Directory of Open Access Journals (Sweden)
Lei Yu
2016-02-01
Full Text Available Clinical rehabilitation assessment is an important part of the therapy process because it is the premise for prescribing suitable rehabilitation interventions. However, the commonly used assessment scales have the following two drawbacks: (1) they are susceptible to subjective factors; (2) they only have several rating levels and are influenced by a ceiling effect, making it impossible to exactly detect any further improvement in the movement. Meanwhile, energy constraints are a primary design consideration in wearable sensor network systems since they are often battery-operated. Traditionally, for wearable sensor network systems that follow the Shannon/Nyquist sampling theorem, there are many data that need to be sampled and transmitted. This paper proposes a novel wearable sensor network system to monitor and quantitatively assess the upper limb motion function, based on compressed sensing technology. With the sparse representation model, less data is transmitted to the computer than with traditional systems. The experimental results show that the accelerometer signals of Bobath handshake and shoulder touch exercises can be compressed, and the length of the compressed signal is less than 1/3 of the raw signal length. More importantly, the reconstruction errors have no influence on the predictive accuracy of the Brunnstrom stage classification model. It also indicated that the proposed system can not only reduce the amount of data during the sampling and transmission processes, but also, the reconstructed accelerometer signals can be used for quantitative assessment without any loss of useful information.
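A generic compressed-sensing round trip in numpy, with Orthogonal Matching Pursuit standing in as the decoder since the abstract does not specify the reconstruction; dimensions and sparsity are invented for illustration:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from
    y = Phi @ x by repeatedly picking the column most correlated with
    the residual and re-solving a least-squares fit on the support."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(2)
n, m, k = 256, 64, 5                      # 64 measurements of a length-256 signal
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x                               # compressed acquisition
print("recovery error:", np.linalg.norm(omp(Phi, y, k) - x))
```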
Quark enables semi-reference-based compression of RNA-seq data.
Sarkar, Hirak; Patro, Rob
2017-11-01
The past decade has seen an exponential increase in biological sequencing capacity, and there has been a simultaneous effort to help organize and archive some of the vast quantities of sequencing data that are being generated. Although these developments are tremendous from the perspective of maximizing the scientific utility of available data, they come with heavy costs. The storage and transmission of such vast amounts of sequencing data is expensive. We present Quark, a semi-reference-based compression tool designed for RNA-seq data. Quark makes use of a reference sequence when encoding reads, but produces a representation that can be decoded independently, without the need for a reference. This allows Quark to achieve markedly better compression rates than existing reference-free schemes, while still relieving the burden of assuming a specific, shared reference sequence between the encoder and decoder. We demonstrate that Quark achieves state-of-the-art compression rates, and that, typically, only a small fraction of the reference sequence must be encoded along with the reads to allow reference-free decompression. Quark is implemented in C++11, and is available under a GPLv3 license at www.github.com/COMBINE-lab/quark. rob.patro@cs.stonybrook.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Development of System Based Code: Case Study of Life-Cycle Margin Evaluation
International Nuclear Information System (INIS)
Tai Asayama; Masaki Morishita; Masanori Tashimo
2006-01-01
For a leap of progress in the structural design of nuclear plant components, the late Professor Emeritus Yasuhide Asada proposed the System Based Code. The key concepts of the System Based Code are: (1) life-cycle margin optimization, (2) expansion of technical options, as well as combinations of technical options, beyond the current codes and standards, and (3) designing to clearly defined target reliabilities. These concepts are very new to most nuclear power plant designers, who are naturally obliged to design to current codes and standards; applying the concepts of the System Based Code to design will lead to an entire change of practices that designers have long been accustomed to. On the other hand, experienced designers are supposed to have expertise that can support and accelerate the development of the System Based Code. Therefore, interfacing with experienced designers is of crucial importance for the development of the System Based Code. The authors conducted a survey on the acceptability of the System Based Code concept. The results were analyzed with respect to the possibility of improving structural design in terms of both reliability and cost effectiveness by introducing the System Based Code concept. It was concluded that the System Based Code is beneficial for those purposes. Also described is the expertise elicited from the survey results that can be reflected in the development of the System Based Code. (authors)
Survey Of Lossless Image Coding Techniques
Melnychuck, Paul W.; Rabbani, Majid
1989-04-01
Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy-plus-residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure and, hence, their higher pel correlation leads to a greater removal of image redundancy.
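To make the predictive-coding family surveyed above concrete, the following sketch applies a simple previous-pixel (DPCM) predictor and compares the zeroth-order entropy of raw pixels and residuals; the synthetic image and predictor choice are illustrative, not drawn from the survey.

```python
import numpy as np

def entropy_bits(values):
    """Zeroth-order entropy (bits/symbol) of an integer array."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
# Smooth synthetic 8-bit image: high correlation between adjacent pixels
img = np.cumsum(rng.integers(-2, 3, size=(64, 64)), axis=1) % 256

# Previous-pixel (horizontal DPCM) predictor; the residuals are what gets coded
pred = np.roll(img, 1, axis=1)
pred[:, 0] = 0
residual = img.astype(int) - pred.astype(int)

print(f'raw:      {entropy_bits(img.ravel()):.2f} bits/pel')
print(f'residual: {entropy_bits(residual.ravel()):.2f} bits/pel')
```

An entropy coder (Huffman, arithmetic) applied to the residuals would then approach the residual entropy, which is what makes the decorrelation step pay off.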
Context adaptive coding of bi-level images
DEFF Research Database (Denmark)
Forchhammer, Søren
2008-01-01
With the advent of sequential arithmetic coding, the focus of highly efficient lossless data compression is placed on modelling the data. Rissanen's Algorithm Context provided an elegant solution to universal coding with optimal convergence rate. Context based arithmetic coding laid the grounds f...
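The following toy illustrates the kind of context-based modelling this abstract refers to (it is not Rissanen's Algorithm Context): each pixel of a bi-level image is predicted from a small causal neighbourhood with adaptive counts, and the ideal arithmetic-code length is accumulated as -log2 p.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy bi-level image with strong 2D structure
img = (np.add.outer(np.arange(64), np.arange(64)) % 7 < 3).astype(np.uint8)

counts = {}          # per-context adaptive counts: context -> (zeros, ones)
bits = 0.0           # ideal arithmetic-code length in bits
for r in range(1, 64):
    for c in range(1, 64):
        # 3-pixel causal context: west, north-west, north neighbours
        ctx = (img[r, c-1], img[r-1, c-1], img[r-1, c])
        n0, n1 = counts.get(ctx, (1, 1))      # Laplace-smoothed counts
        p = (n1 if img[r, c] else n0) / (n0 + n1)
        bits += -np.log2(p)
        counts[ctx] = (n0 + (img[r, c] == 0), n1 + (img[r, c] == 1))

pixels = 63 * 63
print(f'{bits / pixels:.3f} bits/pixel vs 1 bit/pixel uncompressed')
```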
Calibration Methods for Reliability-Based Design Codes
DEFF Research Database (Denmark)
Gayton, N.; Mohamed, A.; Sørensen, John Dalsgaard
2004-01-01
The calibration methods are applied to define the optimal code format according to some target safety levels. The calibration procedure can be seen as a specific optimization process where the control variables are the partial factors of the code. Different methods are available in the literature...
Preserving Envelope Efficiency in Performance Based Code Compliance
Energy Technology Data Exchange (ETDEWEB)
Thornton, Brian A. [Thornton Energy Consulting (United States); Sullivan, Greg P. [Efficiency Solutions (United States); Rosenberg, Michael I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Baechler, Michael C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-06-20
The City of Seattle 2012 Energy Code (Seattle 2014), one of the most progressive in the country, is under revision for its 2015 edition. Additionally, city personnel participate in the development of the next generation of the Washington State Energy Code and the International Energy Code. Seattle has pledged carbon neutrality by 2050, including buildings, transportation, and other sectors. The United States Department of Energy (DOE), through Pacific Northwest National Laboratory (PNNL), provided technical assistance to Seattle in order to understand the implications of one potential direction for its code development: limiting trade-offs in which long-lived building envelope components that are less stringent than the prescriptive envelope requirements are offset by better-than-code but shorter-lived lighting and heating, ventilation, and air-conditioning (HVAC) components through the total-building-performance modeled energy compliance path. Weaker building envelopes can permanently limit building energy performance even as lighting and HVAC components are upgraded over time, because retrofitting the envelope is less likely and more expensive. Weaker building envelopes may also increase the required size, cost, and complexity of HVAC systems, and may adversely affect occupant comfort. This report presents the results of this technical assistance. The use of modeled energy code compliance to trade off envelope components against shorter-lived building components is not unique to Seattle, and the lessons and possible solutions described in this report have implications for other jurisdictions and energy codes.
Directory of Open Access Journals (Sweden)
Bailly O.
2006-12-01
Full Text Available A CFD code, using a mixed finite volumes - finite elements method on tetrahedra, is now available for engine simulations. The code takes into account the displacement of moving walls, such as the piston and valves, in a fully automatic way: a single mesh is used for a whole computation and no intervention by the user is necessary. A fourth-order implicit spatial scheme and a first-order implicit temporal scheme are used. The work presented in this paper is part of a larger program for the validation of this new numerical tool for engine applications. Here, comparisons between computations and experiments for the intake and compression strokes of a four-valve engine were carried out. The experimental investigations were conducted on a single-cylinder four-valve optical research engine. The turbulence intensity, mean velocity components, and tumble and swirl ratios in the combustion chamber are deduced from LDV measurements. The comparisons between computations and experiments are made on the mean velocity flow field at different locations inside the chamber and for different crank angles. We also present some global comparisons (swirl and tumble ratios). The simulation shows excellent agreement between computations and experiments.
Efficacy of Code Optimization on Cache-based Processors
VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory are used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cache data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system. It can be argued that although some of the important
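The stride effect described above is visible even from Python: the snippet below times contiguous (row) versus strided (column) access on a C-ordered numpy array. It is a spatial-locality illustration only, not a port of the paper's benchmarks.

```python
import timeit
import numpy as np

a = np.zeros((4096, 4096))      # C order: each row is contiguous in memory

# Unit stride: one cache line serves several consecutive elements
row = timeit.timeit(lambda: a[123, :].sum(), number=2000)
# Large stride (4096 * 8 bytes): roughly one cache miss per element
col = timeit.timeit(lambda: a[:, 123].sum(), number=2000)

print(f'row access {row:.3f}s, column access {col:.3f}s')
```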
Adaptable recursive binary entropy coding technique
Kiely, Aaron B.; Klimesh, Matthew A.
2002-07-01
We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to variable length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.
Plato: A localised orbital based density functional theory code
Kenny, S. D.; Horsfield, A. P.
2009-12-01
The Plato package allows both orthogonal and non-orthogonal tight-binding as well as density functional theory (DFT) calculations to be performed within a single framework. The package also provides extensive tools for analysing the results of simulations as well as a number of tools for creating input files. The code is based upon the ideas first discussed in Sankey and Niklewski (1989) [1], with extensions to allow high-quality DFT calculations to be performed. DFT calculations can utilise either the local density approximation or the generalised gradient approximation. Basis sets from minimal basis through to ones containing multiple radial functions per angular momentum and polarisation functions can be used. Illustrations of how the package has been employed are given along with instructions for its utilisation.
Program summary
Program title: Plato
Catalogue identifier: AEFC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFC_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 219 974
No. of bytes in distributed program, including test data, etc.: 1 821 493
Distribution format: tar.gz
Programming language: C/MPI and PERL
Computer: Apple Macintosh, PC, Unix machines
Operating system: Unix, Linux and Mac OS X
Has the code been vectorised or parallelised?: Yes, up to 256 processors tested
RAM: Up to 2 Gbytes per processor
Classification: 7.3
External routines: LAPACK, BLAS and optionally ScaLAPACK, BLACS, PBLAS, FFTW
Nature of problem: Density functional theory study of electronic structure and total energies of molecules, crystals and surfaces.
Solution method: Localised orbital based density functional theory.
Restrictions: Tight-binding and density functional theory only, no exact exchange.
Unusual features: Both atom centred and uniform meshes available
Non-Binary Protograph-Based LDPC Codes: Analysis,Enumerators and Designs
Sun, Yizeng
2013-01-01
Non-binary LDPC codes can outperform binary LDPC codes under sum-product decoding, at the cost of higher computational complexity. Non-binary LDPC codes based on protographs have the advantage of a simple hardware architecture. In the first part of this thesis, we use EXIT chart analysis to compute the thresholds of different protographs over GF(q). Based on the threshold computation, some non-binary protograph-based LDPC codes are designed and their frame error rates are compared with those of binary LDPC codes. ...
A novel construction method of QC-LDPC codes based on CRT for optical communications
Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-05-01
A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at a bit error rate (BER) of 10^-7, the net coding gain (NCG) of the regular QC-LDPC(4851, 4546) code is respectively 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32640, 30592) code in ITU-T G.975.1, the QC-LDPC(3664, 3436) code constructed by the improved combining construction method based on CRT, and the irregular QC-LDPC(3843, 3603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all these five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4851, 4546) code constructed by the proposed construction method has excellent error-correction performance, and can be more suitable for optical transmission systems.
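Constructions of this kind share one mechanical step: expanding a small base matrix of cyclic-shift exponents into a binary parity-check matrix of circulant permutation blocks. The sketch below shows that expansion; the base matrix and lifting size are arbitrary illustrations, not the CRT-derived exponents of the paper.

```python
import numpy as np

def expand_qc(base, z):
    """Expand a base matrix of shift exponents (-1 = all-zero block)
    into a binary QC-LDPC parity-check matrix of z x z circulants."""
    I = np.eye(z, dtype=np.uint8)
    rows = []
    for row in base:
        blocks = [np.zeros((z, z), np.uint8) if s < 0
                  else np.roll(I, s, axis=1)          # circulant permutation
                  for s in row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# Illustrative 2 x 4 base matrix, lifting size z = 5 (assumed values)
base = [[0, 1, 2, -1],
        [3, -1, 0, 4]]
H = expand_qc(base, 5)
print(H.shape)        # (10, 20): a nominal rate-1/2 code
print(H.sum(axis=0))  # column weights of the expanded matrix
```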
Short-Block Protograph-Based LDPC Codes
Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher
2010-01-01
Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve a low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.
The Effect of Alkaline Activator Ratio on the Compressive Strength of Fly Ash-Based Geopolymer Paste
Lăzărescu, A. V.; Szilagyi, H.; Baeră, C.; Ioani, A.
2017-06-01
Alkaline activation of fly ash is a particular procedure in which ash resulting from a power plant, combined with a specific alkaline activator, creates a solid material when dried at a certain temperature. In order to obtain desirable compressive strengths, the mix design of fly ash based geopolymer pastes should be explored comprehensively. To determine the preliminary compressive strength of fly ash based geopolymer paste using Romanian material sources, various Na2SiO3 solution/NaOH solution ratios were produced, keeping the fly ash/alkaline activator ratio constant. All the mixes were then cured at 70 °C for 24 hours and tested at 2 and 7 days, respectively. The aim of this paper is to present the preliminary compressive strength results for producing fly ash based geopolymer paste using Romanian material sources, the effect of the alkaline activator ratio on the compressive strength, and directions for future research.
Supervised Learning Based on Temporal Coding in Spiking Neural Networks.
Mostafa, Hesham
2017-08-01
Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the spike generation hard nonlinearity and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme where information is encoded in spike times instead of spike rates, the network input-output relation is differentiable almost everywhere. Moreover, this relation is piecewise linear after a transformation of variables. Methods for training ANNs thus carry directly to the training of such spiking networks as we show when training on the permutation invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information.
Smoothing-Based Relative Navigation and Coded Aperture Imaging
Saenz-Otero, Alvar; Liebe, Carl Christian; Hunter, Roger C.; Baker, Christopher
2017-01-01
This project will develop efficient smoothing software for incremental estimation of the relative poses and velocities between multiple small spacecraft in a formation, and a small, long-range depth sensor based on coded aperture imaging that is capable of identifying other spacecraft in the formation. The smoothing algorithm will obtain the maximum a posteriori estimate of the relative poses between the spacecraft by using all available sensor information in the spacecraft formation. This algorithm will be portable between different satellite platforms that possess different sensor suites and computational capabilities, and will be adaptable in the case that one or more satellites in the formation become inoperable. It will obtain a solution that approaches an exact solution, as opposed to one with the linearization approximation that is typical of filtering algorithms. Thus, the algorithms developed and demonstrated as part of this program will enhance the applicability of small spacecraft to multi-platform operations, such as precisely aligned constellations and fractionated satellite systems.
Mining compressing sequential problems
Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.
2012-01-01
Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and
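A toy rendering of the MDL criterion behind compressing-pattern mining: a pattern is worth keeping if rewriting its occurrences as a single new symbol shortens the total description (rewritten sequence plus code table). The symbol-count proxy below is an assumption for illustration, not the paper's exact encoding.

```python
def mdl_gain(seq, pattern):
    """Symbols saved by replacing non-overlapping occurrences of
    `pattern` with one new symbol, minus the code-table cost."""
    occ, i = 0, 0
    while i <= len(seq) - len(pattern):
        if seq[i:i + len(pattern)] == pattern:
            occ += 1
            i += len(pattern)     # greedy, non-overlapping matches
        else:
            i += 1
    saved = occ * (len(pattern) - 1)   # each occurrence shrinks to 1 symbol
    cost = len(pattern)                # the pattern itself must be stored
    return saved - cost

db = list('abcabcabxabc')
for pat in ('abc', 'abx', 'bc'):
    print(pat, mdl_gain(db, list(pat)))   # 'abc' compresses best
```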
Supporting Situated Learning Based on QR Codes with Etiquetar App: A Pilot Study
Camacho, Miguel Olmedo; Pérez-Sanagustín, Mar; Alario-Hoyos, Carlos; Soldani, Xavier; Kloos, Carlos Delgado; Sayago, Sergio
2014-01-01
EtiquetAR is an authoring tool for supporting the design and enactment of situated learning experiences based on QR tags. Practitioners use etiquetAR for creating, managing and personalizing collections of QR codes with special properties: (1) codes can have more than one link pointing at different multimedia resources, (2) codes can be updated…
Directory of Open Access Journals (Sweden)
Manzini Giovanni
2007-07-01
Full Text Available Background: Similarity of sequences is a key mathematical notion for Classification and Phylogenetic studies in Biology. It is currently primarily handled using alignments. However, the alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined only to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov Complexity and universality is its most novel striking feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity), NCD (Normalized Compression Dissimilarity) and CD (Compression Dissimilarity). Their applicability and robustness is tested on various data sets yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM and mostly at a qualitative level, no comparison among UCD, NCD and CD is available and no comparison of USM with existing methods, both based on alignments and not, seems to be available. Results: We experimentally test the USM methodology by using 25 compressors, all three of its known approximations and six data sets of relevance to Molecular Biology. This offers the first systematic and quantitative experimental assessment of this methodology, that naturally complements the many theoretical and the preliminary experimental results available. Moreover, we compare the USM methodology both with methods based on alignments and not. We may group our experiments into two sets. The first one, performed via ROC
Ferragina, Paolo; Giancarlo, Raffaele; Greco, Valentina; Manzini, Giovanni; Valiente, Gabriel
2007-07-13
Similarity of sequences is a key mathematical notion for Classification and Phylogenetic studies in Biology. It is currently primarily handled using alignments. However, the alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined only to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov Complexity and universality is its most novel striking feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity), NCD (Normalized Compression Dissimilarity) and CD (Compression Dissimilarity). Their applicability and robustness is tested on various data sets yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM and mostly at a qualitative level, no comparison among UCD, NCD and CD is available and no comparison of USM with existing methods, both based on alignments and not, seems to be available. We experimentally test the USM methodology by using 25 compressors, all three of its known approximations and six data sets of relevance to Molecular Biology. This offers the first systematic and quantitative experimental assessment of this methodology, that naturally complements the many theoretical and the preliminary experimental results available. Moreover, we compare the USM methodology both with methods based on alignments and not. We may group our experiments into two sets. The first one, performed via ROC (Receiver Operating Curve) analysis, aims at
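Since USM can only be approximated via data compression, the most common approximation, NCD, fits in a few lines; here zlib stands in for the 25 compressors evaluated in the paper.

```python
import zlib

def C(data: bytes) -> int:
    """Compressed size in bytes, our stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: smaller for more similar strings."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b'ACGTACGTACGTACGT' * 20
b = b'ACGTACGAACGTACGT' * 20     # one point mutation per repeat
c = bytes(reversed(a))           # same composition, scrambled structure
print(ncd(a, b), ncd(a, c))      # the related pair typically scores lower
```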
A review on compressed pattern matching
Directory of Open Access Journals (Sweden)
Surya Prakash Mishra
2016-09-01
Full Text Available Compressed pattern matching (CPM) refers to the task of locating all the occurrences of a pattern (or set of patterns) inside the body of compressed text. In this type of matching, the pattern may or may not itself be compressed. CPM is very useful in handling large volumes of data, especially over the network. It has many applications in computational biology, where it is useful in finding similar trends in DNA sequences, as well as in intrusion detection over networks, big data analytics, etc. Various solutions have been provided by researchers in which the pattern is matched directly over uncompressed text. Such solutions require a lot of space and consume a lot of time when handling big data. Various researchers have proposed efficient solutions for compression, but very few exist for pattern matching over compressed text. Considering the future trend, where data size is increasing exponentially day by day, CPM has become a desirable task. This paper presents a critical review of recent techniques for compressed pattern matching. The covered techniques include word-based Huffman codes, word-based tagged codes, and wavelet-tree-based indexing. We present a comparative analysis of all the techniques mentioned above and highlight their advantages and disadvantages.
Bulk and microscale compressive behavior of a Zr-based metallic glass
International Nuclear Information System (INIS)
Lai, Y.H.; Lee, C.J.; Cheng, Y.T.; Chou, H.S.; Chen, H.M.; Du, X.H.; Chang, C.I.; Huang, J.C.; Jian, S.R.; Jang, J.S.C.; Nieh, T.G.
2008-01-01
Micropillars with diameters of 3.8, 1 and 0.7 μm were fabricated from a two-phase Zr-based metallic glass using a focused ion beam (FIB), and then tested in compression at strain rates from 1 × 10^-4 to 1 × 10^-2 s^-1. The apparent yield strength of the micropillars ranges from 1992 to 2972 MPa, a 25-86% increase over that of the bulk specimens. This strength increase can be rationalized by Weibull statistics for brittle materials.
Directory of Open Access Journals (Sweden)
Ling Yongfa
2016-01-01
Full Text Available The paper proposes a data collection method for a mobile control sink node in a wireless sensor network based on compressive sensing. This method, following a regular track, selects the optimal data collection points in the monitoring area via the disc method, calculates the shortest path by using a quantum genetic algorithm, and hence determines the data collection route. Simulation results show that this method achieves higher network throughput and better energy efficiency, and is capable of collecting a huge amount of data with balanced energy consumption in the network.
pyPcazip: A PCA-based toolkit for compression and analysis of molecular simulation data
Directory of Open Access Journals (Sweden)
Ardita Shkurti
2016-01-01
Full Text Available The biomolecular simulation community is currently in need of novel and optimised software tools that can analyse and process, in reasonable timescales, the large amounts of molecular simulation data being generated. In light of this, we have developed and present here pyPcazip: a suite of software tools for compression and analysis of molecular dynamics (MD) simulation data. The software is compatible with trajectory file formats generated by most contemporary MD engines, such as AMBER, CHARMM, GROMACS and NAMD, and is MPI-parallelised to permit the efficient processing of very large datasets. pyPcazip is a Unix-based open-source software (BSD licensed) written in Python.
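A minimal numpy/scikit-learn analogue of the PCA-compression idea behind pyPcazip (not its actual implementation or file format): project coordinate frames onto a few principal components and check the reconstruction error and nominal compression ratio.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_frames, n_atoms = 500, 100
# Synthetic MD-like trajectory: a few collective motions plus noise
t = np.linspace(0, 8 * np.pi, n_frames)[:, None]
modes = rng.normal(size=(2, n_atoms * 3))
X = (np.sin(t) @ modes[:1] + np.cos(0.5 * t) @ modes[1:]
     + 0.05 * rng.normal(size=(n_frames, n_atoms * 3)))

pca = PCA(n_components=5).fit(X)   # keep 5 collective coordinates
Z = pca.transform(X)               # the compressed trajectory
X_hat = pca.inverse_transform(Z)   # lossy reconstruction

ratio = X.size / (Z.size + pca.components_.size + pca.mean_.size)
rmsd = np.sqrt(((X - X_hat) ** 2).mean())
print(f'compression ~{ratio:.1f}x, reconstruction RMSD {rmsd:.3f}')
```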
Reconstruction algorithm in compressed sensing based on maximum a posteriori estimation
International Nuclear Information System (INIS)
Takeda, Koujin; Kabashima, Yoshiyuki
2013-01-01
We propose a systematic method for constructing a sparse data reconstruction algorithm in compressed sensing at a relatively low computational cost for a general observation matrix. It is known that the cost of ℓ1-norm minimization using a standard linear programming algorithm is O(N^3). We show that this cost can be reduced to O(N^2) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, which is evaluated from theoretical argument. We also discuss the relation between the belief propagation-based reconstruction algorithm introduced in preceding works and our approach.
DESIGN AND IMPLEMENTATION OF A VHDL PROCESSOR FOR DCT BASED IMAGE COMPRESSION
Directory of Open Access Journals (Sweden)
Md. Shabiul Islam
2017-11-01
Full Text Available This paper describes the design and implementation of a VHDL processor meant for performing the 2D Discrete Cosine Transform (DCT) for use in image compression applications. The design flow starts from the system specification and proceeds to implementation on silicon, and the entire process is carried out using an advanced workstation-based design environment for digital signal processing. The software allows bit-true analysis to ensure that the designed VLSI processor satisfies the required specifications. The bit-true analysis is performed on all levels of abstraction (behavior, VHDL, etc.). The motivation behind the work is smaller chip area, faster processing, and reduced cost of the chip.
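As a software reference for what such a processor computes in hardware, the sketch below performs a 2D-DCT on a block, keeps only the largest coefficients, and inverts; the block size and the number of retained coefficients are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep=10):
    """2D-DCT a block, keep the `keep` largest coefficients, invert."""
    c = dctn(block, norm='ortho')
    thresh = np.sort(np.abs(c), axis=None)[-keep]
    c[np.abs(c) < thresh] = 0.0          # discard small coefficients
    return idctn(c, norm='ortho'), np.count_nonzero(c)

rng = np.random.default_rng(4)
img = np.cumsum(np.cumsum(rng.normal(size=(8, 8)), 0), 1)  # smooth 8x8 block
rec, kept = compress_block(img)
print(f'{kept}/64 coefficients kept, '
      f'max abs error {np.abs(img - rec).max():.3f}')
```

Energy compaction is the point: for smooth blocks, a handful of low-frequency DCT coefficients reconstruct the block with small error.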
OpenCL-based vicinity computation for 3D multiresolution mesh compression
Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri
2017-03-01
3D multiresolution mesh compression systems are still widely addressed in many domains. These systems increasingly require volumetric data to be processed in real time, so performance is constrained by material resource usage and the need for an overall reduction in computational time. In this paper, our contribution lies entirely in computing, in real time, the triangle neighborhoods of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of this algorithm is that it computes the WT with minimum memory usage by processing data as they are acquired. However, with large data, this technique is considered poor in terms of computational complexity. For that reason, this work exploits the GPU to accelerate the computation, using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method can improve the performance by a speedup factor of 5 compared to the sequential CPU implementation.
Fault Diagnosis for Hydraulic Servo System Using Compressed Random Subspace Based ReliefF
Directory of Open Access Journals (Sweden)
Yu Ding
2018-01-01
Full Text Available Playing an important role in electromechanical systems, the hydraulic servo system is crucial to mechanical systems such as engineering machinery, metallurgical machinery, ships, and other equipment. Fault diagnosis based on monitoring and sensory signals plays an important role in avoiding catastrophic accidents and enormous economic losses. This study presents a fault diagnosis scheme for the hydraulic servo system using the compressed random subspace based ReliefF (CRSR) method. From the point of view of feature selection, the scheme utilizes the CRSR method to determine the most stable feature combination that simultaneously contains the most adequate information. Based on the feature selection structure of ReliefF, CRSR employs feature integration rules in the compressed domain. Meanwhile, CRSR substitutes information entropy and fuzzy membership for the traditional distance measurement index. The proposed CRSR method is able to enhance the robustness of the feature information against interference while selecting the feature combination with balanced information expressing ability. To demonstrate the effectiveness of the proposed CRSR method, a hydraulic servo system joint simulation model is constructed with HyPneu and Simulink, and three fault modes are injected to generate the validation data.
Energy Technology Data Exchange (ETDEWEB)
AlAfeef, Ala, E-mail: a.al-afeef.1@research.gla.ac.uk [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); School of Computing Science, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Bobynko, Joanna [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Cockshott, W. Paul. [School of Computing Science, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Craven, Alan J. [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Zuazo, Ian; Barges, Patrick [ArcelorMittal Maizières Research, Maizières-lès-Metz 57283 (France); MacLaren, Ian, E-mail: ian.maclaren@glasgow.ac.uk [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom)
2016-11-15
We have investigated the use of DualEELS in elementally sensitive tilt series tomography in the scanning transmission electron microscope. A procedure is implemented using deconvolution to remove the effects of multiple scattering, followed by normalisation by the zero loss peak intensity. This is performed to produce a signal that is linearly dependent on the projected density of the element in each pixel. This method is compared with one that does not include deconvolution (although normalisation by the zero loss peak intensity is still performed). Additionally, we compare the 3D reconstruction using a new compressed sensing algorithm, DLET, with the well-established SIRT algorithm. VC precipitates, which are extracted from a steel on a carbon replica, are used in this study. It is found that the use of this linear signal results in a very even density throughout the precipitates. However, when deconvolution is omitted, a slight density reduction is observed in the cores of the precipitates (a so-called cupping artefact). Additionally, it is clearly demonstrated that the 3D morphology is much better reproduced using the DLET algorithm, with very little elongation in the missing wedge direction. It is therefore concluded that reliable elementally sensitive tilt tomography using EELS requires the appropriate use of DualEELS together with a suitable reconstruction algorithm, such as the compressed sensing based reconstruction algorithm used here, to make the best use of the limited data volume and signal to noise inherent in core-loss EELS. - Highlights: • DualEELS is essential for chemically sensitive electron tomography using EELS. • A new compressed sensing based algorithm (DLET) gives high fidelity reconstruction. • This combination of DualEELS and DLET will give reliable results from few projections.
Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators
Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.
2018-03-01
We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
Medical Image Compression Based on Region of Interest, With Application to Colon CT Images
National Research Council Canada - National Science Library
Gokturk, Salih
2001-01-01
...., in diagnostically important regions. This paper discusses a hybrid model of lossless compression in the region of interest, with high-rate, motion-compensated, lossy compression in other regions...
Convolutional coding techniques for data protection
Massey, J. L.
1975-01-01
Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
Compressive Strength of EN AC-44200 Based Composite Materials Strengthened with α-Al2O3 Particles
Directory of Open Access Journals (Sweden)
Kurzawa A.
2017-06-01
Full Text Available The paper presents results of compressive strength investigations of EN AC-44200 based aluminum alloy composite materials reinforced with aluminum oxide particles, at ambient temperature and at temperatures of 100, 200 and 250°C. They were manufactured by squeeze casting of porous preforms made of α-Al2O3 particles with the liquid aluminum alloy EN AC-44200. The composite materials were reinforced with preforms characterized by porosities of 90, 80, 70 and 60 vol.%; thus the alumina content in the composite materials was 10, 20, 30 and 40 vol.%. The results of the compressive strength measurements of the manufactured materials are presented and, based on microscopic observations, the effect of the volume content of strengthening alumina particles on the cracking mechanisms during compression at the indicated temperatures is shown and discussed. The highest compressive strength at ambient temperature, 470 MPa, was shown by the composite materials strengthened with 40 vol.% of α-Al2O3 particles.
CoGI: Towards Compressing Genomes as an Image.
Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong
2015-01-01
Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transfer. It is desirable to compress data to reduce storage and transfer costs, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms/tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences into a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors, GReEn and RLZ-opt, in both compression ratio and compression efficiency. It also achieves a comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM, a state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip, a general-purpose and widely used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
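A sketch of the central transformation described above: mapping a nucleotide string to a two-dimensional binary image via a fixed 2-bit code per base. The particular bit assignment and bitmap width are illustrative assumptions, not necessarily the ones CoGI uses.

```python
import numpy as np

BITS = {'A': (0, 0), 'C': (0, 1), 'G': (1, 0), 'T': (1, 1)}  # assumed code

def genome_to_bitmap(seq, width=8):
    """Turn a nucleotide string into a 2D binary image (bitmap)."""
    flat = [bit for base in seq for bit in BITS[base]]
    flat += [0] * (-len(flat) % width)        # zero-pad to full rows
    return np.array(flat, dtype=np.uint8).reshape(-1, width)

bmp = genome_to_bitmap('ACGTACGTGGGCCCAT')
print(bmp)
# A rectangular-partition coder would now encode maximal 0/1 blocks of bmp.
```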
Holland, Katharina; Sechopoulos, Ioannis; Mann, Ritse M.; Den Heeten, Gerard J.; van Gils, Carla H.; Karssemeijer, Nico
2017-01-01
Background: In mammography, breast compression is applied to reduce the thickness of the breast. While it is widely accepted that firm breast compression is needed to ensure acceptable image quality, guidelines remain vague about how much compression should be applied during mammogram acquisition. A
Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.
Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting
2012-09-01
In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploiting the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence fits the H.264/AVC entropy coders better. A fast implementation algorithm is also designed. With little increase in computation, experimental results confirm that the proposed fast algorithm achieves about 7.2% bit saving compared with the current H.264/AVC fidelity range extensions high profile.
Directory of Open Access Journals (Sweden)
Aihua Liu
2017-01-01
Full Text Available A method of direction-of-arrival (DOA) estimation using array interpolation is proposed in this paper to increase the number of resolvable sources and improve the DOA estimation performance for coprime array configurations with holes in their virtual array. The virtual symmetric nonuniform linear array (VSNLA) of the coprime array signal model is introduced; with the conventional MUSIC with spatial smoothing algorithm (SS-MUSIC) applied to only the continuous lags in the VSNLA, the degrees of freedom (DoFs) for DOA estimation are clearly not fully exploited. To effectively utilize the extent of DoFs offered by the coarray configuration, a compressive sensing based array interpolation algorithm is proposed. The compressive sensing technique is used to obtain a coarse initial DOA estimate, and a modified iterative initial-DOA-estimation-based interpolation algorithm (IMCA-AI) is then utilized to obtain the final DOA estimate; it maps the sample covariance matrix of the VSNLA to the covariance matrix of a filled virtual symmetric uniform linear array (VSULA) with the same aperture size. The proposed method can efficiently improve the DOA estimation performance. Numerical simulations are provided to demonstrate its effectiveness.
Xu, Li; Shan, Lin; Adachi, Fumiyuki
2014-01-01
In orthogonal frequency division multiplexing (OFDM) communication systems, channel state information (CSI) is required at the receiver because the frequency-selective fading channel leads to severe intersymbol interference (ISI) over data transmission. A broadband channel model is often described by very few dominant channel taps, and these can be probed by compressive sensing based sparse channel estimation (SCE) methods, for example the orthogonal matching pursuit algorithm, which can effectively exploit the sparse structure of the channel as prior information. However, these methods are vulnerable to both noise interference and column coherence of the training signal matrix. In other words, the primary objective of these conventional methods is to catch the dominant channel taps without reporting the posterior channel uncertainty. To improve the estimation performance, we propose a compressive sensing based Bayesian sparse channel estimation (BSCE) method which can not only exploit the channel sparsity but also mitigate the unexpected channel uncertainty without sacrificing any computational complexity. The proposed method can reveal potential ambiguity among multiple channel estimators that are ambiguous due to observation noise or correlation interference among columns of the training matrix. Computer simulations show that the proposed method improves the estimation performance compared with conventional SCE methods. PMID:24983012
Distributed Similarity based Clustering and Compressed Forwarding for wireless sensor networks.
Arunraja, Muruganantham; Malathi, Veluchamy; Sakthivel, Erulappan
2015-11-01
Wireless sensor networks are engaged in various data gathering applications. The major bottleneck in wireless data gathering systems is the finite energy of the sensor nodes. By conserving the on-board energy, the life span of a wireless sensor network can be considerably extended. As data communication is the dominant energy-consuming activity of a wireless sensor network, data reduction serves well in conserving nodal energy. Spatial and temporal correlation among the sensor data is exploited to reduce the data communications. Forming clusters of nodes with similar data is an effective way to exploit spatial correlation among neighboring sensors, while sending only a subset of the data and estimating the rest from this subset is the contemporary way of exploiting temporal correlation. In Distributed Similarity based Clustering and Compressed Forwarding for wireless sensor networks, we construct data-similar iso-clusters with minimal communication overhead. The intra-cluster communication is reduced using an adaptive normalized least-mean-squares based dual prediction framework. The cluster head reduces the inter-cluster data payload using a lossless compressive forwarding technique. The proposed work achieves significant data reduction in both the intra-cluster and the inter-cluster communications, with optimal accuracy of the collected data.
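A compact sketch of the dual-prediction idea: sensor and cluster head run identical NLMS predictors, and a sample is transmitted only when the sensor's own prediction error exceeds a tolerance. Filter order, step size, and tolerance below are illustrative, not the paper's settings.

```python
import numpy as np

def nlms_dual_prediction(x, order=4, mu=0.5, eps=1e-6, tol=0.05):
    """Return the fraction of samples that must actually be transmitted."""
    w = np.zeros(order)        # identical filter state at sensor and sink
    buf = np.zeros(order)      # identical input history at both ends
    sent = 0
    for sample in x:
        pred = w @ buf
        if abs(sample - pred) > tol:   # prediction failed: transmit sample
            sent += 1
            err = sample - pred        # both ends update from the real value
            w += mu * err * buf / (eps + buf @ buf)
            buf = np.r_[sample, buf[:-1]]
        else:                          # sink silently uses its own prediction
            buf = np.r_[pred, buf[:-1]]
    return sent / len(x)

t = np.linspace(0, 20, 2000)
signal = np.sin(t) + 0.01 * np.random.default_rng(5).normal(size=t.size)
print(f'transmitted: {100 * nlms_dual_prediction(signal):.1f}% of samples')
```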
Symmetrical compression distance for arrhythmia discrimination in cloud-based big-data services.
Lillo-Castellano, J M; Mora-Jiménez, I; Santiago-Mozos, R; Chavarría-Asso, F; Cano-González, A; García-Alberola, A; Rojo-Álvarez, J L
2015-07-01
The current development of cloud computing is completely changing the paradigm of data knowledge extraction in huge databases. An example of this technology in the cardiac arrhythmia field is the SCOOP platform, a national-level scientific cloud-based big data service for implantable cardioverter defibrillators. In this scenario, we propose a new methodology for automatic classification of intracardiac electrograms (EGMs) in a cloud computing system, designed for minimal signal preprocessing. A new compression-based similarity measure (CSM) is created for low computational burden, the so-called weighted fast compression distance, which provides better performance when compared with other CSMs in the literature. Using simple machine learning techniques, a set of 6848 EGMs extracted from the SCOOP platform were classified into seven cardiac arrhythmia classes and one noise class, reaching nearly 90% accuracy when previous patient arrhythmia information was available and 63% otherwise, in all cases exceeding the accuracy provided by majority-class classification. Results show that this methodology can be used as a high-quality cloud computing service, providing support to physicians for improving knowledge of patient diagnosis.
Secure biometric image sensor and authentication scheme based on compressed sensing.
Suzuki, Hiroyuki; Suzuki, Masamichi; Urabe, Takuya; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki
2013-11-20
It is important to ensure the security of biometric authentication information, because its leakage causes serious risks, such as replay attacks using the stolen biometric data, and also because it is almost impossible to replace raw biometric information. In this paper, we propose a secure biometric authentication scheme that protects such information by employing an optical data ciphering technique based on compressed sensing. The proposed scheme is based on two-factor authentication, the biometric information being supplemented by secret information that is used as a random seed for a cipher key. In this scheme, a biometric image is optically encrypted at the time of image capture, and a pair of restored biometric images for enrollment and verification are verified in the authentication server. If any of the biometric information is exposed to risk, it can be reenrolled by changing the secret information. Through numerical experiments, we confirm that finger vein images can be restored from the compressed sensing measurement data. We also present results that verify the accuracy of the scheme.
A Novel Object Tracking Algorithm Based on Compressed Sensing and Entropy of Information
Directory of Open Access Journals (Sweden)
Ding Ma
2015-01-01
Full Text Available Object tracking has always been a hot research topic in the field of computer vision; its purpose is to track objects with specific characteristics or representations and to estimate information about those objects, such as their locations, sizes, and rotation angles, in the current frame. Object tracking in complex scenes usually encounters various challenges, such as location change, dimension change, illumination change, perception change, and occlusion. This paper proposes a novel object tracking algorithm based on compressed sensing and information entropy to address these challenges. First, objects are characterized by Haar-like and ORB features. Second, the dimensions of the computation space of the Haar-like and ORB features are effectively reduced through compressed sensing. Then the above-mentioned features are fused based on information entropy. Finally, in the particle filter framework, an object location is obtained by selecting candidate object locations in the current frame from the local context neighboring the optimal locations in the last frame. Our extensive experimental results demonstrate that this method is able to effectively address the challenges of perception change, illumination change, and large-area occlusion, which allows it to achieve better performance than existing approaches such as MIL and CT.
Performance of FSO-OFDM based on BCH code
Directory of Open Access Journals (Sweden)
Jiao Xiao-lu
2016-01-01
Full Text Available In contrast with the traditional OOK (on-off keying) system, an FSO-OFDM system can resist atmospheric scattering and improve the spectrum utilization rate effectively. Due to the instability of the atmospheric channel, the system is affected by various factors, resulting in a high BER. BCH codes have a good error-correcting ability, particularly at short and medium code lengths, where their performance is close to the theoretical value. They can not only detect burst errors but also correct random errors. Therefore, a BCH code is applied to the system to reduce the system BER. Finally, a semi-physical simulation was conducted with MATLAB. The simulation results show that at a BER of 10^-2, OFDM outperforms OOK by 4 dB. Under different weather conditions (extension rain, advection fog, dust days), at a BER of 10^-5, BCH(255, 191) channel coding outperforms the uncoded system by 4-5 dB. All in all, OFDM technology and BCH codes can reduce the system BER.
International Nuclear Information System (INIS)
Xu, Yun-Chao; Chen, Qun
2013-01-01
Vapor-compression refrigeration systems have been one of the essential energy conversion systems for humankind, and they consume huge amounts of energy nowadays. Surrounding the promotion of the energy efficiency of these systems, there are many effective optimization methods, but they rely mainly on engineering experience and computer simulations rather than theoretical analysis, due to the complex and vague physical essence of the systems. We attempt to propose a theoretical global optimization method based on in-depth physical analysis of the involved physical processes, i.e. heat transfer analysis for the condenser and evaporator, through introducing the entransy theory, and thermodynamic analysis for the compressor and expansion valve. The integration of heat transfer and thermodynamic analyses forms the overall physical optimization model of the systems, describing the relation between all the unknown parameters and the known conditions, which makes theoretical global optimization possible. With the aid of mathematical conditional extremum solutions, an optimization equation group and the optimal configuration of all the unknown parameters are analytically obtained. Eventually, via the optimization of a typical vapor-compression refrigeration system under various working conditions to minimize the total heat transfer area of the heat exchangers, the validity and superiority of the newly proposed optimization method are proved. - Highlights: • A global optimization method for vapor-compression systems is proposed. • Integrating heat transfer and thermodynamic analyses forms the optimization model. • A mathematical relation between design parameters and requirements is derived. • Entransy dissipation is introduced into heat transfer analysis. • The validity of the method is proved via optimization of practical cases
Construction of Quasi-Cyclic LDPC Codes Based on Fundamental Theorem of Arithmetic
Directory of Open Access Journals (Sweden)
Hai Zhu
2018-01-01
Full Text Available Quasi-cyclic (QC LDPC codes play an important role in 5G communications and have been chosen as the standard codes for 5G enhanced mobile broadband (eMBB data channel. In this paper, we study the construction of QC LDPC codes based on an arbitrary given expansion factor (or lifting degree. First, we analyze the cycle structure of QC LDPC codes and give the necessary and sufficient condition for the existence of short cycles. Based on the fundamental theorem of arithmetic in number theory, we divide the integer factorization into three cases and present three classes of QC LDPC codes accordingly. Furthermore, a general construction method of QC LDPC codes with girth of at least 6 is proposed. Numerical results show that the constructed QC LDPC codes perform well over the AWGN channel when decoded with the iterative algorithms.
Fast lossless compression via cascading Bloom filters.
Rozov, Roye; Shamir, Ron; Halperin, Eran
2014-01-01
Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space
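An illustrative reduction of the BARCODE idea to a few lines: reads are hashed into a Bloom filter, and decoding slides a read-length window along the reference, querying the filter, with no alignment step. The cascade of filters that handles false positives is omitted, and all parameters are toy values.

```python
import hashlib

class Bloom:
    def __init__(self, m=1 << 20, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m // 8 + 1)
    def _positions(self, item):
        for i in range(self.k):            # k independent hash positions
            h = hashlib.blake2b(item, salt=bytes([i])).digest()
            yield int.from_bytes(h[:8], 'big') % self.m
    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)
    def __contains__(self, item):
        return all(self.bits[p // 8] >> (p % 8) & 1
                   for p in self._positions(item))

reference = b'ACGTTGCAACGTAGCTAGGCTTACGATCG' * 10
reads = [reference[i:i + 12] for i in (3, 40, 77, 150)]   # error-free reads

bf = Bloom()
for r in reads:
    bf.add(r)           # "encoding": reads are stored only in the filter

# "decoding": scan the reference with a read-length window, query the filter
decoded = {reference[i:i + 12] for i in range(len(reference) - 11)
           if reference[i:i + 12] in bf}
print(decoded >= set(reads))   # True: every read recovered (plus possible FPs)
```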
Upgrade of the SLAC SLED II Pulse Compression System Based on Recent High Power Tests
International Nuclear Information System (INIS)
Vlieks, A.E.; Fowkes, W.R.; Loewen, R.J.; Tantawi, S.G.
2011-01-01
In the Next Linear Collider (NLC) it is expected that the high power rf components be able to handle peak power levels in excess of 400 MW. We present recent results of high power tests designed to investigate the RF breakdown limits of the X-band pulse compression system used at SLAC (SLED-II). Results of these tests show that both the TE01-TE10 mode converter and the 4-port hybrid have a maximum useful power limit of 220-250 MW. Based on these tests, modifications of these components have been undertaken to improve their peak field handling capability. Results of these modifications will be presented. As part of an international effort to develop a new 0.5-1.5 TeV electron-positron linear collider for the 21st century, SLAC has been working towards a design, referred to as 'The Next Linear Collider' (NLC), which will operate at 11.424 GHz and utilize 50-75 MW klystrons as rf power sources. One of the major challenges in this design, or any other design, is how to generate and efficiently transport extremely high rf power from a source to an accelerator structure. SLAC has been investigating various methods of 'pulse compressing' a relatively wide rf pulse (≥ 1 μs) from a klystron into a narrower, but more intense, pulse. Currently a SLED-II pulse compression scheme is being used at SLAC in the NLC Test Accelerator (NLCTA) and in the Accelerator Structures Test Area (ASTA) to provide high rf power for accelerator and component testing. In ASTA, a 1.05 μs pulse from a 50 MW klystron was successfully pulse compressed to 205 MW with a pulse width of 150 ns. Since operation in NLC will require generating and transporting rf power in excess of 400 MW it was decided to test the breakdown limits of the SLED-II rf components in ASTA with rf power up to the maximum available of 400 MW. This required the combining of power from two 50 MW klystrons and feeding the summed power into the SLED-II pulse compressor. Results from this experiment demonstrated that two of
Nieh, Ta-Chun; Yang, Chao-Chin; Huang, Jen-Fa
2011-08-01
A complete complementary/prime/shifted prime (CPS) code family for the optical code-division multiple-access (OCDMA) system is proposed. Based on the ability of the complete complementary (CC) code, the multiple-access interference (MAI) can be suppressed and eliminated via a spectral amplitude coding (SAC) OCDMA system under asynchronous/synchronous transmission. By utilizing the shifted prime (SP) code in the SAC scheme, the hardware implementation of the encoder/decoder can be simplified with a reduced number of optical components, such as arrayed waveguide gratings (AWG) and fiber Bragg gratings (FBG). This system has superior performance compared to previous bipolar-bipolar coding OCDMA systems.
Experimental data bases useful for quantification of model uncertainties in best estimate codes
International Nuclear Information System (INIS)
Wilson, G.E.; Katsma, K.R.; Jacobson, J.L.; Boodry, K.S.
1988-01-01
A data base is necessary for assessment of thermal hydraulic codes within the context of the new NRC ECCS Rule. Separate effect tests examine particular phenomena that may be used to develop and/or verify models and constitutive relationships in the code. Integral tests are used to demonstrate the capability of codes to model global characteristics and sequence of events for real or hypothetical transients. The nuclear industry has developed a large experimental data base of fundamental nuclear, thermal-hydraulic phenomena for code validation. Given a particular scenario, and recognizing the scenario's important phenomena, selected information from this data base may be used to demonstrate applicability of a particular code to simulate the scenario and to determine code model uncertainties. LBLOCA experimental data bases useful to this objective are identified in this paper.
Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2018-03-01
A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, which is based on a row scanning compressive ghost imaging scheme. In the encryption process, the scrambling operation is implemented for the sparse images transformed by LWT, then the XOR operation is performed on the scrambled images, and the resulting XOR images are compressed in the row scanning compressive ghost imaging, through which the ciphertext images can be detected by bucket detector arrays. During decryption, the participant who possesses his/her correct key-group, can successfully reconstruct the corresponding plaintext image by measurement key regeneration, compression algorithm reconstruction, XOR operation, sparse images recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.
Development of Monte Carlo-based pebble bed reactor fuel management code
International Nuclear Information System (INIS)
Setiadipura, Topan; Obara, Toru
2014-01-01
Highlights: • A new Monte Carlo-based fuel management code for OTTO cycle pebble bed reactors was developed. • The double heterogeneity was modeled using a statistical method in the MVP-BURN code. • The code can perform analysis of the equilibrium and non-equilibrium phases. • Code-to-code comparisons for the Once-Through-Then-Out case were investigated. • The ability of the code to accommodate the void cavity was confirmed. - Abstract: A fuel management code for pebble bed reactors (PBRs) based on the Monte Carlo method has been developed in this study. The code, named Monte Carlo burnup analysis code for PBR (MCPBR), enables a simulation of the Once-Through-Then-Out (OTTO) cycle of a PBR from the running-in phase to the equilibrium condition. In MCPBR, a burnup calculation based on a continuous-energy Monte Carlo code, MVP-BURN, is coupled with an additional utility code to simulate the OTTO cycle of a PBR. MCPBR has several advantages in modeling PBRs, namely its Monte Carlo neutron transport modeling, its capability of explicitly modeling the double heterogeneity of the PBR core, and its ability to model different axial fuel speeds in the PBR core. Analysis at the equilibrium condition of a simplified PBR was used as the validation test of MCPBR. The calculation results of the code were compared with the results of diffusion-based PBR fuel management codes, namely the VSOP and PEBBED codes. Using the JENDL-4.0 nuclide library, MCPBR gave 4.15% and 3.32% lower k_eff values compared to VSOP and PEBBED, respectively, while using JENDL-3.3, MCPBR gave 2.22% and 3.11% higher k_eff values compared to VSOP and PEBBED, respectively. The ability of MCPBR to analyze neutron transport in the top void of the PBR core and its effects was also confirmed.
Content layer progressive coding of digital maps
DEFF Research Database (Denmark)
Forchhammer, Søren; Jensen, Ole Riis
2000-01-01
A new lossless context based method is presented for content progressive coding of limited bits/pixel images, such as maps, company logos, etc., common on the WWW. Progressive encoding is achieved by separating the image into content layers based on color level or other predefined information. Information from already coded layers is used when coding subsequent layers. This approach is combined with efficient template based context bi-level coding, context collapsing methods for multi-level images, and arithmetic coding. Relative pixel patterns are used to collapse contexts, and the number of contexts is analyzed. The new methods outperform existing schemes for coding digital maps and in addition provide progressive coding. Compared to the state-of-the-art PWC coder, the compressed size is reduced to 60-70% on our layered test images.
A New Algorithm of Shape Boundaries Based on Chain Coding
Directory of Open Access Journals (Sweden)
Zhao Xin
2017-01-01
Full Text Available A new method to obtain connected components in binary images is presented. The method uses a deterministic finite automaton (DFA) to obtain the chain code and label the component boundary. It is theoretically shown that the algorithm improves image encoding efficiency, approaching the lowest achievable time consumption.
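Chain coding itself is compact to illustrate. The sketch below emits the classical 8-direction Freeman chain code for an already-ordered boundary path; the DFA-driven boundary labeling of the paper is not reproduced, and the pixel path here is a made-up example.

```python
# 8-connected Freeman directions: index = code, value = (dx, dy)
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]
CODE = {d: i for i, d in enumerate(DIRS)}

def chain_code(path):
    """Freeman chain code of an ordered, 8-connected boundary pixel path."""
    return [CODE[(x2 - x1, y2 - y1)] for (x1, y1), (x2, y2) in zip(path, path[1:])]

# toy closed boundary of a small square (clockwise; image coordinates, y grows down)
boundary = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1), (0, 0)]
print(chain_code(boundary))   # -> [0, 0, 6, 6, 4, 4, 2, 2]
```

Because each step is stored as a 3-bit direction instead of a full coordinate pair, the chain code is itself a compact boundary representation.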
Computer code for double beta decay QRPA based calculations
Energy Technology Data Exchange (ETDEWEB)
Barbero, C. A.; Mariano, A. [Departamento de Física, Facultad de Ciencias Exactas, Universidad Nacional de La Plata, La Plata, Argentina and Instituto de Física La Plata, CONICET, La Plata (Argentina); Krmpotić, F. [Instituto de Física La Plata, CONICET, La Plata, Argentina and Instituto de Física Teórica, Universidade Estadual Paulista, São Paulo (Brazil); Samana, A. R.; Ferreira, V. dos Santos [Departamento de Ciências Exatas e Tecnológicas, Universidade Estadual de Santa Cruz, BA (Brazil); Bertulani, C. A. [Department of Physics, Texas A and M University-Commerce, Commerce, TX (United States)
2014-11-11
The computer code developed by our group some years ago for the evaluation of nuclear matrix elements, within the QRPA and PQRPA nuclear structure models, involved in neutrino-nucleus reactions, muon capture and β± processes, is extended to also include nuclear double beta decay.
Interpretation and code generation based on intermediate languages
DEFF Research Database (Denmark)
Kornerup, Peter; Kristensen, Bent Bruun; Madsen, Ole Lehrmann
1980-01-01
The possibility of supporting high level languages through intermediate languages to be used for direct interpretation and as intermediate forms in compilers is investigated. An accomplished project in the construction of an interpreter and a code generator using one common intermediate form...
Directory of Open Access Journals (Sweden)
Sheng Bi
2016-03-01
Full Text Available Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, which are also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block matching algorithm is used for motion estimation. Experiments demonstrate that using the block matching algorithm can reduce motion estimation time by 30%.
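Block-matching motion estimation, the building block the paper uses to cut estimation time, can be sketched directly. This is a generic full-search SAD matcher over a small search window, not the authors' multi-frame algorithm; block size, search range, and the test frames are illustrative.

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Full-search block matching: for each block of `cur`, find the motion
    vector into `ref` minimizing the sum of absolute differences (SAD)."""
    H, W = cur.shape
    mvs = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            tgt = cur[by:by + block, bx:bx + block].astype(np.int32)
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        sad = np.abs(ref[y:y + block, x:x + block].astype(np.int32) - tgt).sum()
                        if best is None or sad < best:
                            best, best_mv = sad, (dy, dx)
            mvs[by // block, bx // block] = best_mv
    return mvs

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
cur = np.roll(ref, (2, 1), axis=(0, 1))        # globally shift the frame by (2, 1)
print(block_match(ref, cur)[1, 1])             # interior blocks report (-2, -1)
```

A fast matcher replaces the exhaustive double loop over (dy, dx) with an early-exit or hierarchical search, which is where the reported 30% time saving comes from.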
Directory of Open Access Journals (Sweden)
Shailendra Singh Chauhan
2016-09-01
Full Text Available A dew point evaporative-vapour compression based combined air conditioning system for providing good human comfort conditions at low cost is proposed in this paper. The proposed system has been parametrically analysed for a wide range of ambient temperatures and specific humidities under some reasonable assumptions. The proposed system has also been compared with a conventional vapour compression air conditioner on the basis of the cooling load on the cooling coil, under a 100% fresh-air assumption. The saving in cooling load on the coil was found to be greatest, at 60.93%, at 46 °C and 6 g/kg specific humidity, while it was negative for very humid ambient air, which indicates that the proposed system is suitable for dry and moderately humid conditions but not for very humid conditions. The system works well, with an average net monthly power saving of 192.31 kW h for hot and dry conditions and 124.38 kW h for hot and moderately humid conditions. It could therefore be a better alternative for dry and moderately humid climates, with a payback period of 7.2 years.
Vishnukumar, S.; Wilscy, M.
2017-12-01
In this paper, we propose a single image Super-Resolution (SR) method based on Compressive Sensing (CS) and Improved Total Variation (TV) Minimization Sparse Recovery. In the CS framework, the low-resolution (LR) image is treated as the compressed version of the high-resolution (HR) image. Dictionary training and sparse recovery are the two phases of the method. The K-Singular Value Decomposition (K-SVD) method is used for dictionary training, and the dictionary represents HR image patches in a sparse manner. Here, only the interpolated version of the LR image is used for training, thereby exploiting the structural self-similarity inherent in the LR image. In the sparse recovery phase, the sparse representation coefficients of the LR image patches with respect to the trained dictionary are derived using the Improved TV Minimization method. The HR image can then be reconstructed by the linear combination of the dictionary and the sparse coefficients. The experimental results show that the proposed method gives better results quantitatively as well as qualitatively on both natural and remote sensing images. The reconstructed images have better visual quality, since edges and other sharp details are preserved.
Optical scanning holography based on compressive sensing using a digital micro-mirror device
A-qian, Sun; Ding-fu, Zhou; Sheng, Yuan; You-jun, Hu; Peng, Zhang; Jian-ming, Yue; Xin, Zhou
2017-02-01
Optical scanning holography (OSH) is a distinct digital holography technique, which uses a single two-dimensional (2D) scanning process to record the hologram of a three-dimensional (3D) object. Usually, this 2D scanning takes the form of mechanical scanning, and the quality of the recorded hologram may suffer from the limited accuracy of mechanical scanning and the unavoidable vibration of the stepper motor's start-stop cycle. In this paper, we propose a new framework that replaces the 2D mechanical scanning mirrors with a Digital Micro-mirror Device (DMD) to modulate the scanning light field; we call it OSH based on Compressive Sensing (CS) using a digital micro-mirror device (CS-OSH). CS-OSH reconstructs the hologram of an object through the use of compressive sensing theory, and then restores the image of the object itself. Numerical simulation results confirm that this new type of OSH can produce a reconstructed image with favorable visual quality even at a low sampling rate.
Segmentation-driven compound document coding based on H.264/AVC-INTRA.
Zaghetto, Alexandre; de Queiroz, Ricardo L
2007-07-01
In this paper, we explore H.264/AVC operating in intraframe mode to compress a mixed image, i.e., one composed of text, graphics, and pictures. Even though mixed-content (compound) documents usually require the use of multiple compressors, we apply a single compressor for both text and pictures. For that, distortion is treated differently in text and picture regions. Our approach is to use a segmentation-driven adaptation strategy to change the H.264/AVC quantization parameter on a macroblock-by-macroblock basis, i.e., we divert bits from pictorial regions to text in order to keep text edges sharp. We show results of this segmentation-driven quantizer adaptation method applied to compressing documents. Our reconstructed images have better text sharpness compared to straight unadapted coding, at negligible visual losses in pictorial regions. Our results also highlight the fact that H.264/AVC-INTRA outperforms coders such as JPEG-2000 as a single coder for compound images.
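The adaptation strategy reduces to a per-macroblock QP map driven by a text/picture segmentation mask. The sketch below illustrates that idea with assumed QP offsets; the values, block size, and mask source are illustrative, not the paper's.

```python
import numpy as np

def qp_map(text_mask, base_qp=32, text_qp=24, mb=16):
    """Per-macroblock QP: lower QP (more bits, sharper edges) wherever the
    macroblock contains text pixels, higher QP for pictorial regions."""
    H, W = text_mask.shape
    qps = np.full((H // mb, W // mb), base_qp, dtype=int)
    for my in range(H // mb):
        for mx in range(W // mb):
            if text_mask[my * mb:(my + 1) * mb, mx * mb:(mx + 1) * mb].any():
                qps[my, mx] = text_qp
    return qps

mask = np.zeros((64, 64), dtype=bool)
mask[0:16, 0:32] = True                  # pretend the top-left strip is text
print(qp_map(mask))                      # QP 24 over text macroblocks, 32 elsewhere
```

The net effect is the bit diversion described above: the rate-control budget is spent preferentially on macroblocks whose distortion is most visible, i.e., those containing text edges.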
Novel prediction- and subblock-based algorithm for fractal image compression
International Nuclear Information System (INIS)
Chung, K.-L.; Hsu, C.-H.
2006-01-01
Fractal encoding is the most time-consuming part of fractal image compression. In this paper, a novel two-phase prediction- and subblock-based fractal encoding algorithm is presented. Initially the original gray image is partitioned into a set of variable-size blocks according to the S-tree- and interpolation-based decomposition principle. In the first phase, each variable-size range block tries to find the best-matched domain block using the proposed prediction-based search strategy, which utilizes the relevant neighboring variable-size domain blocks. The first phase leads to a significant computation-saving effect. If the domain block found within the predicted search space is unacceptable, in the second phase, a subblock strategy is employed to partition the current variable-size range block into smaller blocks to improve the image quality. Experimental results show that our proposed prediction- and subblock-based fractal encoding algorithm outperforms the conventional full search algorithm and the recently published spatial-correlation-based algorithm by Truong et al. in terms of encoding time and image quality. In addition, the performance comparison among our proposed algorithm and the other two algorithms, the no-search-based algorithm and the quadtree-based algorithm, is also investigated.
A Low-Jitter Wireless Transmission Based on Buffer Management in Coding-Aware Routing
Directory of Open Access Journals (Sweden)
Cunbo Lu
2015-08-01
Full Text Available It is significant to reduce packet jitter for real-time applications in a wireless network. Existing coding-aware routing algorithms use the opportunistic network coding (ONC) scheme in the packet coding algorithm. The ONC scheme never delays packets to wait for the arrival of a future coding opportunity, so the loss of some potential coding opportunities may limit the contribution of network coding to jitter performance. In addition, most existing coding-aware routing algorithms assume that all flows participating in the network have equal rates, which is unrealistic, since multi-rate environments are common. To overcome these problems and extend coding-aware routing to multi-rate scenarios, from the viewpoint of data transmission, we present a low-jitter wireless transmission algorithm based on buffer management (BLJCAR), which decides when packets are coded at a coding node according to a queue-length-based threshold policy instead of the regular ONC policy used in existing coding-aware routing algorithms. BLJCAR is a unified framework that merges the single-rate and multi-rate cases. Simulation results show that the BLJCAR algorithm embedded in coding-aware routing outperforms the traditional ONC policy in terms of jitter, packet delivery delay, packet loss ratio and network throughput under network congestion at any traffic rate.
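The queue-length threshold policy can be captured in a few lines: hold a packet for a future coding opportunity only while the queue is short. The sketch below is a schematic decision function with invented names (`queue`, `threshold`), not the BLJCAR protocol itself.

```python
def coding_decision(queue, has_coding_partner, threshold):
    """Queue-length-based threshold policy: delay a packet to wait for a
    coding opportunity only while the backlog is below `threshold`;
    otherwise send immediately (coded if a partner packet is available)."""
    if has_coding_partner:
        return "send_coded"             # XOR with the partner and transmit
    if len(queue) < threshold:
        return "hold"                   # wait: a coding opportunity may arrive
    return "send_plain"                 # backlog too large; do not add jitter

print(coding_decision(queue=[1, 2], has_coding_partner=False, threshold=5))  # hold
```

Compared with pure ONC, which always falls through to "send_plain" when no partner is queued, the "hold" branch is what harvests the extra coding opportunities while the threshold bounds the added delay.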
Efficient predictive algorithms for image compression
Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla
2017-01-01
This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well-known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. The trellis structure of block codes, by contrast, went largely unexplored for many years. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and
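A minimal hard-decision Viterbi decoder over an explicit trellis shows the MLD idea the chapters build on. The tiny trellis below is for the rate-1/2 convolutional code with generators (7, 5) in octal, a textbook example rather than one taken from the book; the helper names are invented.

```python
def viterbi(received, n_states, step):
    """Hard-decision Viterbi: `step(state, bit)` -> (next_state, output_bits).
    Returns the information bits of the minimum-Hamming-distance trellis path."""
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)          # start in the zero state
    paths = [[] for _ in range(n_states)]
    for r in received:                               # r: tuple of received bits
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                ns, out = step(s, bit)
                d = metric[s] + sum(a != b for a, b in zip(out, r))
                if d < new_metric[ns]:               # keep the survivor path
                    new_metric[ns] = d
                    new_paths[ns] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

def step_75(state, bit):                             # (7,5) code, 2-bit state
    s1, s0 = state >> 1, state & 1
    out = (bit ^ s1 ^ s0, bit ^ s0)                  # g0 = 111, g1 = 101
    return (bit << 1) | s1, out

# encode 1 0 1 1 with the same trellis, then decode the (noise-free) output
msg, st, coded = [1, 0, 1, 1], 0, []
for b in msg:
    st, out = step_75(st, b)
    coded.append(out)
print(viterbi(coded, n_states=4, step=step_75))      # -> [1, 0, 1, 1]
```

The same survivor-path recursion applies to a block-code trellis; the point of the book is that such trellises exist and can be made small enough for this to be practical.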
Siddeq, M. M.; Rodrigues, M. A.
2015-09-01
Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC- and AC-Matrix, or low- and high-frequency matrix, respectively; (2) apply a second-level DCT on the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search algorithm (FMS), is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
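The first transform step, a two-level DWT whose approximation band is then DCT-coded, can be sketched with PyWavelets and SciPy. This reproduces only the transform skeleton of step (1); the Minimize-Matrix-Size and FMS stages are not shown, and the wavelet choice and image are assumptions.

```python
import numpy as np
import pywt                      # PyWavelets
from scipy.fft import dctn, idctn

img = np.random.rand(64, 64)     # stand-in for a high-resolution image

# step (1): two-level 2D DWT -> low-frequency approximation + detail bands
coeffs = pywt.wavedec2(img, wavelet="db2", level=2)
approx, details = coeffs[0], coeffs[1:]

# DCT on the approximation band gives the "DC-Matrix" analogue
dc_matrix = dctn(approx, norm="ortho")

# round-trip check of the transform skeleton (lossless up to float error)
approx_rec = idctn(dc_matrix, norm="ortho")
img_rec = pywt.waverec2([approx_rec] + list(details), wavelet="db2")
print(np.allclose(img, img_rec[:64, :64], atol=1e-8))
```

Everything lossy in the actual method happens after this point, in how the DC-Matrix, the AC-Matrix, and the remaining high-frequency bands are quantized, size-minimized, and entropy coded.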
Evaluation of a new image compression technique
International Nuclear Information System (INIS)
Algra, P.R.; Kroon, H.M.; Noordveld, R.B.; DeValk, J.P.J.; Seeley, G.W.; Westerink, P.H.
1988-01-01
The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in the diagnostic contents between originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish compressed versions from original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen
Four year-olds use norm-based coding for face identity.
Jeffery, Linda; Read, Ainsley; Rhodes, Gillian
2013-05-01
Norm-based coding, in which faces are coded as deviations from an average face, is an efficient way of coding visual patterns that share a common structure and must be distinguished by subtle variations that define individuals. Adults and school-aged children use norm-based coding for face identity but it is not yet known if pre-school aged children also use norm-based coding. We reasoned that the transition to school could be critical in developing a norm-based system because school places new demands on children's face identification skills and substantially increases experience with faces. Consistent with this view, face identification performance improves steeply between ages 4 and 7. We used face identity aftereffects to test whether norm-based coding emerges between these ages. We found that 4 year-old children, like adults, showed larger face identity aftereffects for adaptors far from the average than for adaptors closer to the average, consistent with use of norm-based coding. We conclude that experience prior to age 4 is sufficient to develop a norm-based face-space and that failure to use norm-based coding cannot explain 4 year-old children's poor face identification skills.
Security Concerns and Countermeasures in Network Coding Based Communications Systems
DEFF Research Database (Denmark)
Talooki, Vahid; Bassoli, Riccardo; Roetter, Daniel Enrique Lucani
2015-01-01
This survey paper shows the state of the art in security mechanisms, where a deep review of the current research and the status of this topic is carried out. We start by introducing network coding and its variety of applications in enhancing current traditional networks. In particular, we analyze two key protocol types, namely, state-aware and stateless protocols, specifying the benefits and disadvantages of each one of them. We also present the key security assumptions of network coding (NC) systems as well as a detailed analysis of the security goals and threats, both passive and active. This paper also presents a detailed taxonomy and a timeline of the different NC security mechanisms and schemes reported in the literature. Current proposed security mechanisms and schemes for NC in the literature are then classified. Finally, a timeline of these mechanisms and schemes is presented.
Opportunistic quantum network coding based on quantum teleportation
Shang, Tao; Du, Gang; Liu, Jian-wei
2016-04-01
It seems impossible to endow a quantum network with opportunistic characteristics, given that a quantum channel cannot be overheard without disturbance. In this paper, we propose an opportunistic quantum network coding scheme by taking full advantage of the channel characteristics of quantum teleportation. Concretely, it uses the quantum channel for secure transmission of quantum states and can detect eavesdroppers by means of quantum channel verification. Moreover, it uses the classical channel both for opportunistic listening to neighbor states and for opportunistic coding by broadcasting measurement outcomes. Analysis results show that our scheme can reduce the number of transmissions over classical channels for relay nodes and can effectively defend against classical passive attacks and quantum active attacks.
A Biologically Based Approach to the Mutation of Code
1999-09-01
...instructions called a genome. This genome contains the master blueprint for all cellular structures and functions within the organism for the duration of... ...structures known as chromosomes, which are found in the nucleus of all non-somatic cells. Many prokaryotic organisms have single-stranded DNA. An... ...coding sequence of a gene, or by an aberrant cellular recombination process. One way to reduce the chances of a harmful mutation occurring is to...
Vector and Raster Data Storage Based on Morton Code
Zhou, G.; Pan, Q.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Liu, X.
2018-05-01
Even though geomatics is highly developed nowadays, the integration of spatial data in vector and raster formats is still a very tricky problem in the geographic information system environment, and there is still no fully satisfactory way to solve it. This article proposes a method to integrate vector and raster data. In this paper, we saved the image data and building vector data of Guilin University of Technology to an Oracle database. We then used the ADO interface to connect the database to Visual C++ and, in the Visual C++ environment, converted the row and column numbers of the raster data and the X, Y coordinates of the vector data to Morton codes. This method stores vector and raster data in the Oracle database and uses the Morton code, instead of row/column numbers and X, Y coordinates, to mark the position information of vector and raster data. Using the Morton code to mark geographic information lets data storage make full use of the available space, makes simultaneous analysis of vector and raster data more efficient, and makes visualization of vector and raster data more intuitive. This method is very helpful in situations that require analysing or displaying vector data and raster data at the same time.
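Morton (Z-order) encoding interleaves the bits of the two coordinates, so raster row/column pairs and quantized vector X/Y coordinates share one linear key. The sketch below is a generic 2D implementation; the bit width and any coordinate quantization are assumptions.

```python
def morton_encode(x, y, bits=16):
    """Interleave the bits of (x, y) into a single Morton (Z-order) code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return code

def morton_decode(code, bits=16):
    """Recover (x, y) from a Morton code by de-interleaving its bits."""
    x = y = 0
    for i in range(bits):
        x |= ((code >> (2 * i)) & 1) << i
        y |= ((code >> (2 * i + 1)) & 1) << i
    return x, y

key = morton_encode(5, 9)        # e.g. a raster cell at (column=5, row=9)
print(key, morton_decode(key))   # round-trip check -> 147 (5, 9)
```

Because nearby cells get numerically close Morton keys, a single indexed key column in the database serves both data types and preserves spatial locality for range queries.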
QR code based noise-free optical encryption and decryption of a gray scale image
Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-03-01
In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.
Directory of Open Access Journals (Sweden)
Isaac Caicedo-Castro
2014-01-01
Full Text Available This paper presents CodeRAnts, a new recommendation method based on a collaborative searching technique and inspired by the ant colony metaphor. This method aims to fill a gap in the current state of the art in recommender systems for software reuse, where prior works present two problems. The first is that recommender systems based on these works cannot learn from the collaboration of programmers; the second is that assessments of these systems report low precision and recall, and in some of these systems these metrics have not been evaluated at all. The work presented in this paper contributes a recommendation method that solves these problems.
Directory of Open Access Journals (Sweden)
Ricky Akbar
2017-09-01
Full Text Available Perusahaan Listrik Negara (PLN) has implemented a smart electricity system, namely prepaid electricity. Customers pay for an electricity voucher before using the electricity. The token contained in the voucher purchased by the customer is entered into the prepaid meter (Meter Prabayar, MPB) installed at the customer's location. When a customer purchases a voucher, he or she receives a receipt that contains the customer's identity and the 20-digit voucher code (token) to be entered into the MPB as a substitute for electrical energy credit. The receipt obtained by the customer is vulnerable to loss or misuse by unauthorized parties. In this study, the authors designed and developed an Android-based application that uses QR code technology as a replacement for the prepaid electricity receipt, encoding the customer identity and the 20-digit voucher code. The application was developed following the waterfall methodology: (1) analysis of the functional requirements of the system through a preliminary study and data collection based on field studies and literature; (2) system design using UML diagrams, Business Process Model and Notation (BPMN) and entity-relationship diagrams (ERD); (3) implementation using object-oriented programming (OOP) techniques, with the web application built on the Laravel PHP framework with a MySQL database and the mobile application built with B4A; (4) testing of the developed system using the black-box method. The final result of this research is a web and mobile application for the sale of electricity vouchers using QR code technology.
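Packing the receipt fields into a QR code is a one-liner with the third-party qrcode package; the field names and payload layout below are invented for illustration and are not the application's actual format.

```python
import qrcode  # third-party package: pip install qrcode[pil]

# hypothetical receipt payload: customer identity plus the 20-digit token
payload = "customer_id=12345678;token=01234567890123456789"

img = qrcode.make(payload)       # build the QR code image
img.save("receipt_qr.png")       # any QR reader recovers the payload from the image
```

The mobile app then only needs a camera-based QR scanner to restore the token, which is what removes the dependence on the fragile paper receipt.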
Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery
Directory of Open Access Journals (Sweden)
Lingjun Liu
2017-01-01
Full Text Available This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. As the iterations proceed, IST tends to over-smooth the solution and converge prematurely. To add back more details, the BAIST method backtracks to the previous noisy image using L2-norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous ones. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. BAIST also uses a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.
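The IST iteration that BAIST modifies is compact. Below is a plain ISTA sketch for the synthesis model min_x 0.5·||y − Ax||² + λ·||x||₁; the step size, λ, and problem sizes are arbitrary, and the backtracking and nonlocal-regularization parts of BAIST are deliberately not reproduced.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.01, n_iter=200):
    """Iterative shrinkage-thresholding for min_x 0.5*||y - Ax||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 200)) / np.sqrt(60)    # CS measurement matrix
x_true = np.zeros(200); x_true[[5, 50, 120]] = [1.0, -0.8, 0.6]
y = A @ x_true
x_hat = ista(A, y)
print(np.flatnonzero(np.abs(x_hat) > 0.1))          # should recover support {5, 50, 120}
```

BAIST's contribution sits on top of this loop: instead of letting the shrinkage smooth away detail, it periodically steps back toward the previous (noisier) iterate, trading a little noise for recovered texture.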
A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations
Energy Technology Data Exchange (ETDEWEB)
Felix, Simon; Bolzern, Roman; Battaglia, Marina, E-mail: simon.felix@fhnw.ch, E-mail: roman.bolzern@fhnw.ch, E-mail: marina.battaglia@fhnw.ch [University of Applied Sciences and Arts Northwestern Switzerland FHNW, 5210 Windisch (Switzerland)
2017-11-01
One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS-CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS-CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.
Optical image encryption scheme with multiple light paths based on compressive ghost imaging
Zhu, Jinan; Yang, Xiulun; Meng, Xiangfeng; Wang, Yurong; Yin, Yongkai; Sun, Xiaowen; Dong, Guoyan
2018-02-01
An optical image encryption method with multiple light paths is proposed based on compressive ghost imaging. In the encryption process, M random phase-only masks (POMs) are generated by means of the logistic map algorithm, and these masks are then uploaded to the spatial light modulator (SLM). The collimated laser light is divided into several beams by beam splitters as it passes through the SLM, and the light beams illuminate the secret images, which are converted into sparse images by discrete wavelet transform beforehand. Thus, the secret images are simultaneously encrypted into intensity vectors by ghost imaging. The distances between the SLM and the secret images vary and, together with the original POMs and the logistic map coefficient, serve as the main keys in the decryption process. In the proposed method, the storage space can be significantly decreased and the security of the system can be improved. The feasibility, security and robustness of the method are further analysed through computer simulations.
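Generating key-dependent random masks from the logistic map is straightforward to sketch. The map parameter, seed, burn-in length, and the conversion of the chaotic sequence to phase values below are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def logistic_sequence(x0, r, n, burn_in=100):
    """Iterate the logistic map x <- r*x*(1-x); discard a transient."""
    x = x0
    seq = np.empty(n)
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def phase_mask(shape, x0=0.3141, r=3.99):
    """Map the chaotic sequence onto phases in [0, 2*pi) to form a POM."""
    vals = logistic_sequence(x0, r, shape[0] * shape[1])
    return np.exp(1j * 2 * np.pi * vals.reshape(shape))

mask = phase_mask((64, 64))
print(mask.shape, np.allclose(np.abs(mask), 1.0))   # unit-modulus phase-only mask
```

Because the whole mask set is regenerated from a seed and a map coefficient, only those few scalars (plus the path distances) need to be shared as keys, which is where the storage saving comes from.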
Micro-Doppler Ambiguity Resolution Based on Short-Time Compressed Sensing
Directory of Open Access Journals (Sweden)
Jing-bo Zhuang
2015-01-01
Full Text Available When using a long range radar (LRR) to track a target with micromotion, the micro-Doppler signature embodied in the radar echoes may suffer from ambiguity. In this paper, we propose a novel method based on compressed sensing (CS) to resolve micro-Doppler ambiguity. According to the RIP requirement, a sparse probing pulse train with random transmission times is designed. After matched filtering, the slow-time echo signals of the micromotion target can be viewed as a randomly sparse sampling of the Doppler spectrum. Several successive pulses are selected to form a short-time window, and the CS sensing matrix is built according to the time stamps of these pulses. Then, by performing Orthogonal Matching Pursuit (OMP), the unambiguous micro-Doppler spectrum can be obtained. The proposed algorithm is verified using echo signals generated according to the theoretical model and signals with micro-Doppler signatures produced using the commercial electromagnetic simulation software FEKO.
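OMP itself is short enough to show in full. This is a generic NumPy implementation of the greedy pursuit step; the dictionary, sparsity level, and data are placeholders, not the radar sensing matrix of the paper.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of A that best
    explain y, re-fitting the coefficients by least squares at every step."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((40, 128))
A /= np.linalg.norm(A, axis=0)                       # unit-norm dictionary atoms
x_true = np.zeros(128); x_true[[3, 77]] = [2.0, -1.5]
x_hat = omp(A, A @ x_true, k=2)
print(np.flatnonzero(x_hat))                         # -> [3 77]
```

In the radar setting the columns of A are Doppler atoms evaluated at the random pulse time stamps, and the recovered support is the unambiguous micro-Doppler spectrum.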
A computationally efficient OMP-based compressed sensing reconstruction for dynamic MRI
International Nuclear Information System (INIS)
Usman, M; Prieto, C; Schaeffter, T; Batchelor, P G; Odille, F; Atkinson, D
2011-01-01
Compressed sensing (CS) methods in MRI are computationally intensive. Thus, designing novel CS algorithms that can perform faster reconstructions is crucial for everyday applications. We propose a computationally efficient orthogonal matching pursuit (OMP)-based reconstruction, specifically suited to cardiac MR data. According to the energy distribution of a y-f space obtained from a sliding window reconstruction, we label the y-f space as static or dynamic. For static y-f space images, a computationally efficient masked OMP reconstruction is performed, whereas for dynamic y-f space images, standard OMP reconstruction is used. The proposed method was tested on a dynamic numerical phantom and two cardiac MR datasets. Depending on the field of view composition of the imaging data, compared to the standard OMP method, reconstruction speedup factors ranging from 1.5 to 2.5 are achieved.
Methods of compression of digital holograms, based on 1-level wavelet transform
International Nuclear Information System (INIS)
Kurbatova, E A; Cheremkhin, P A; Evtikhiev, N N
2016-01-01
To reduce the memory required for storing information about 3D scenes and to lower hologram transmission requirements, digital hologram compression can be used. Compression of digital holograms by wavelet transforms is among the most powerful methods. In this paper the most popular wavelet transforms are considered and applied to digital hologram compression. The obtained reconstruction quality values and hologram diffraction efficiencies are compared.
Double-compression method for biomedical images
Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana
2017-08-01
This paper describes a double compression method (DCM) for biomedical images. A comparison of the image compression factors achieved by JPEG, PNG and the developed DCM was carried out. The main purpose of the DCM is the compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.
A low-cost hybrid drivetrain concept based on compressed air energy storage
International Nuclear Information System (INIS)
Brown, T.L.; Atluri, V.P.; Schmiedeler, J.P.
2014-01-01
Highlights: • A new pneumatic hybrid concept is introduced. • A proof-of-concept prototype system is built and tested. • The experimental system has a round-trip efficiency of just under 10%. • A thermodynamics model is used to predict the performance of modified designs. • An efficiency of nearly 50% is possible with reasonable design changes. - Abstract: This paper introduces a new low-cost hybrid drivetrain concept based on compressed air energy storage. In contrast to most contemporary approaches to pneumatic hybridization, which require modification to the primary power plant, this concept is based on a stand-alone pneumatic system that could be readily integrated with existing vehicles. The pneumatic system consists of an air tank and a compressor–expander that is coupled to the rest of the drivetrain via an infinitely variable transmission. Rather than incorporating more expensive technologies such as variable valve timing or a variable compression ratio compressor, a fixed valve system consisting of a rotary valve and passive check valves is optimized to operate efficiently over a range of tank pressures. The feasibility of this approach is established by thermodynamic modeling and the construction of a proof-of-concept prototype, which is also used to fine tune model parameters. While the proof-of-concept system shows a round trip efficiency of just under 10%, modeling shows that a round trip efficiency of 26% is possible with a revised design. If waste heat from the engine is used to maintain an elevated tank temperature, efficiencies of nearly 50% may be possible, indicating that the concept could be effective for practical hybridization of passenger vehicles
Effective Low-Power Wearable Wireless Surface EMG Sensor Design Based on Analog-Compressed Sensing
Directory of Open Access Journals (Sweden)
Mohammadreza Balouchestani
2014-12-01
Full Text Available Surface Electromyography (sEMG) is a non-invasive measurement process that does not involve tools or instruments that break the skin or physically enter the body to investigate and evaluate the muscular activity produced by skeletal muscles. The main drawbacks of existing sEMG systems are: (1) they are not able to provide real-time monitoring; (2) they suffer from long processing times and low speed; (3) they are not effective for wireless healthcare systems because of their high power consumption. In this work, we present an analog-based Compressed Sensing (CS) architecture, which consists of three novel algorithms for the design and implementation of a wearable wireless sEMG bio-sensor. At the transmitter side, two new algorithms apply the analog-CS theory before the analog-to-digital converter (ADC). At the receiver side, a robust reconstruction algorithm based on a combination of ℓ1-ℓ1 optimization and the Block Sparse Bayesian Learning (BSBL) framework reconstructs the original bio-signals from the compressed bio-signals. The proposed architecture allows reducing the sampling rate to 25% of the Nyquist rate (NR). In addition, the proposed architecture reduces the power consumption to 40%, the Percentage Residual Difference (PRD) to 24%, the Root Mean Squared Error (RMSE) to 2%, and the computation time from 22 s to 9.01 s, which provides a good basis for establishing wearable wireless healthcare systems. The proposed architecture achieves robust performance at low Signal-to-Noise Ratio (SNR) in the reconstruction process.
A Mechanism to Avoid Collusion Attacks Based on Code Passing in Mobile Agent Systems
Jaimez, Marc; Esparza, Oscar; Muñoz, Jose L.; Alins-Delgado, Juan J.; Mata-Díaz, Jorge
Mobile agents are software entities consisting of code, data, state and itinerary that can migrate autonomously from host to host executing their code. Despite its benefits, security issues strongly restrict the use of code mobility. The protection of mobile agents against the attacks of malicious hosts is considered the most difficult security problem to solve in mobile agent systems. In particular, collusion attacks have been barely studied in the literature. This paper presents a mechanism that avoids collusion attacks based on code passing. Our proposal is based on a Multi-Code agent, which contains a different variant of the code for each host. A Trusted Third Party is responsible for providing the information to extract its own variant to the hosts, and for taking trusted timestamps that will be used to verify time coherence.
Resistor-logic demultiplexers for nanoelectronics based on constant-weight codes.
Kuekes, Philip J; Robinett, Warren; Roth, Ron M; Seroussi, Gadiel; Snider, Gregory S; Stanley Williams, R
2006-02-28
The voltage margin of a resistor-logic demultiplexer can be improved significantly by basing its connection pattern on a constant-weight code. Each distinct code determines a unique demultiplexer, and therefore a large family of circuits is defined. We consider using these demultiplexers for building nanoscale crossbar memories, and determine the voltage margin of the memory system based on a particular code. We determine a purely code-theoretic criterion for selecting codes that will yield memories with large voltage margins, which is to minimize the ratio of the maximum to the minimum Hamming distance between distinct codewords. For the specific example of a 64 × 64 crossbar, we discuss what codes provide optimal performance for a memory.
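The code-theoretic criterion is easy to evaluate exhaustively for small parameters. The sketch below enumerates all weight-w binary words of length n and computes the max/min Hamming-distance ratio; brute force over the full constant-weight code, with purely illustrative parameters rather than the paper's 64 × 64 crossbar design.

```python
from itertools import combinations

def constant_weight_words(n, w):
    """All binary words of length n and Hamming weight w, as integer bitmasks."""
    return [sum(1 << i for i in pos) for pos in combinations(range(n), w)]

def distance_ratio(words):
    """Ratio of max to min Hamming distance over distinct codeword pairs;
    the selection criterion is to make this ratio as small as possible."""
    dists = [bin(a ^ b).count("1") for a, b in combinations(words, 2)]
    return max(dists) / min(dists)

words = constant_weight_words(n=6, w=3)       # the full (6, 3) constant-weight code
print(len(words), distance_ratio(words))      # 20 codewords, ratio 6/2 = 3.0
```

In practice one would search subsets of the full constant-weight code for a subcode whose ratio is smaller, since by the stated criterion that subcode yields the larger memory voltage margin.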
Web- and system-code based, interactive, nuclear power plant simulators
International Nuclear Information System (INIS)
Kim, K. D.; Jain, P.; Rizwan, U.
2006-01-01
Using two different approaches, on-line, web- and system-code based graphical user interfaces have been developed for reactor system analysis. Both are LabVIEW (graphical programming language developed by National Instruments) based systems that allow local users as well as those at remote sites to run, interact and view the results of the system code in a web browser. In the first approach, only the data written by the system code in a tab separated ASCII output file is accessed and displayed graphically. In the second approach, LabVIEW virtual instruments are coupled with the system code as dynamic link libraries (DLL). RELAP5 is used as the system code to demonstrate the capabilities of these approaches. From collaborative projects between teams in geographically remote locations to providing system code experience to distance education students, these tools can be very beneficial in many areas of teaching and R and D. (authors)
On Rational Interpolation-Based List-Decoding and List-Decoding Binary Goppa Codes
DEFF Research Database (Denmark)
Beelen, Peter; Høholdt, Tom; Nielsen, Johan Sebastian Rosenkilde
2013-01-01
We derive the Wu list-decoding algorithm for generalized Reed–Solomon (GRS) codes by using Gröbner bases over modules and the Euclidean algorithm as the initial algorithm instead of the Berlekamp–Massey algorithm. We present a novel method for constructing the interpolation polynomial fast. We gi... ...and a duality in the choice of parameters needed for decoding, both in the case of GRS codes and in the case of Goppa codes.
Energy-Efficient Cluster Based Routing Protocol in Mobile Ad Hoc Networks Using Network Coding
Srinivas Kanakala; Venugopal Reddy Ananthula; Prashanthi Vempaty
2014-01-01
In mobile ad hoc networks, all nodes are energy constrained. In such situations, it is important to reduce energy consumption. In this paper, we consider the issues of energy efficient communication in MANETs using network coding. Network coding is an effective method to improve the performance of wireless networks. COPE protocol implements network coding concept to reduce number of transmissions by mixing the packets at intermediate nodes. We incorporate COPE into cluster based routing proto...
Simonaitis, Linas; McDonald, Clement J
2009-10-01
The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the inpatient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.
Optimization of compressive strength in admixture-reinforced cement-based grouts
Directory of Open Access Journals (Sweden)
Sahin Zaimoglu, A.
2007-12-01
Full Text Available The Taguchi method was used in this study to optimize the unconfined (7-, 14- and 28-day) compressive strength of cement-based grouts with bentonite, fly ash and silica fume admixtures. The experiments were designed using an L16 orthogonal array in which the three factors considered were bentonite (0%, 0.5%, 1.0% and 3%), fly ash (10%, 20%, 30% and 40%) and silica fume (0%, 5%, 10% and 20%) content. The experimental results, which were analyzed by ANOVA and the Taguchi method, showed that fly ash and silica fume content play a significant role in unconfined compressive strength. The optimum conditions were found to be: 0% bentonite, 10% fly ash, 20% silica fume and 28 days of curing time. The maximum unconfined compressive strength reached under the above optimum conditions was 17.1 MPa.
Quantum secret sharing based on quantum error-correcting codes
International Nuclear Information System (INIS)
Zhang Zu-Rong; Liu Wei-Tao; Li Cheng-Zu
2011-01-01
Quantum secret sharing (QSS) is a procedure for sharing classical information or quantum information by using quantum states. This paper presents how to use a [2k − 1, 1, k] quantum error-correcting code (QECC) to implement a quantum (k, 2k − 1) threshold scheme. It also takes advantage of classical enhancement of the [2k − 1, 1, k] QECC to establish a QSS scheme which can share classical information and quantum information simultaneously. Because information is encoded into the QECC, these schemes can prevent intercept-resend attacks and be implemented over some noisy channels.
Constrained motion estimation-based error resilient coding for HEVC
Guo, Weihan; Zhang, Yongfei; Li, Bo
2018-04-01
Unreliable communication channels might lead to packet losses and bit errors in the videos transmitted through them, which causes severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove inter-frame dependency and thus improve coding efficiency. Once a Motion Vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate error propagation to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions that are predicted by temporal motion vectors. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors in motion vectors and can improve the robustness of the stream over bit-error channels. When the bit error probability is 10^-5, an increase of the decoded video quality (PSNR) by up to 1.310 dB, and on average 0.762 dB, can be achieved compared to the reference HEVC.
Probabilistic hypergraph based hash codes for social image search
Institute of Scientific and Technical Information of China (English)
Yi XIE; Hui-min YU; Roland HU
2014-01-01
With the rapid development of the Internet, recent years have seen the explosive growth of social media. This brings great challenges in performing efficient and accurate image retrieval on a large scale. Recent work shows that using hashing methods to embed high-dimensional image features and tag information into Hamming space provides a powerful way to index large collections of social images. By learning hash codes through a spectral graph partitioning algorithm, spectral hashing (SH) has shown promising performance among various hashing approaches. However, it is incomplete to model the relations among images only by pairwise simple graphs which ignore the relationship in a higher order. In this paper, we utilize a probabilistic hypergraph model to learn hash codes for social image retrieval. A probabilistic hypergraph model offers a higher order representation among social images by connecting more than two images in one hyperedge. Unlike a normal hypergraph model, a probabilistic hypergraph model considers not only the grouping information, but also the similarities between vertices in hyperedges. Experiments on Flickr image datasets verify the performance of our proposed approach.
International Nuclear Information System (INIS)
Tang Jie; Nett, Brian E; Chen Guanghong
2009-01-01
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
Protograph based LDPC codes with minimum distance linearly growing with block size
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have linearly increasing minimum distance in block size, outperform those of regular LDPC codes. Furthermore, a family of low to high rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
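The protograph construction can be sketched as follows (a hypothetical lifting with random circulant shifts; real designs choose the shifts deliberately, e.g., to maximize girth): each edge of the small base graph is replaced by an N x N circulant permutation, so the lifted parity-check matrix inherits the protograph's degree structure while remaining friendly to parallel decoder hardware.

```python
import numpy as np

def lift_protograph(B, N, rng=np.random.default_rng(0)):
    """Lift a protograph base matrix B (entries = number of parallel edges)
    into a binary parity-check matrix by replacing each edge with a random
    N x N circulant permutation. Illustrative sketch only."""
    m, n = B.shape
    H = np.zeros((m * N, n * N), dtype=np.uint8)
    I = np.eye(N, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            for _ in range(B[i, j]):          # one circulant per parallel edge
                s = rng.integers(N)
                H[i*N:(i+1)*N, j*N:(j+1)*N] ^= np.roll(I, s, axis=1)
    return H

# Example: a tiny base matrix with degree-2 and degree-3 variable nodes
B = np.array([[1, 2, 1],
              [1, 1, 2]])
H = lift_protograph(B, N=8)                    # 16 x 24 parity-check matrix
```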
A novel QC-LDPC code based on the finite field multiplicative group for optical communications
Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen
2013-09-01
A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group; it offers simpler construction, more flexible code-length/code-rate adjustment, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves better error correction performance over the additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At a bit error rate (BER) of 10^-6, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB higher than that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. It is therefore more suitable for optical communication systems.
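The flavor of such algebraic constructions can be sketched like this (a generic multiplicative-group exponent rule, not necessarily the paper's exact construction): two elements a, b of the multiplicative group of GF(q) define an exponent matrix, and each entry is then expanded into a circulant permutation, exactly as in the lifting sketch above.

```python
import numpy as np

def qc_ldpc_exponents(q, a, b, rows, cols):
    """Exponent matrix over the multiplicative group of GF(q), q prime:
    E[i, j] = a^i * b^j mod q. Each entry gives the shift of a circulant
    permutation block. Generic sketch; the paper's rule may differ."""
    return np.array([[pow(a, i, q) * pow(b, j, q) % q
                      for j in range(cols)] for i in range(rows)])

E = qc_ldpc_exponents(q=31, a=2, b=5, rows=3, cols=5)  # 3x5 exponent matrix
```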
Content Layer progressive Coding of Digital Maps
DEFF Research Database (Denmark)
Forchhammer, Søren; Jensen, Ole Riis
2002-01-01
A new lossless context-based method is presented for content progressive coding of limited bits/pixel images, such as maps, company logos, etc., common on the World Wide Web. Progressive encoding is achieved by encoding the image in content layers based on color level or other predefined information. Information from already coded layers is used when coding subsequent layers. This approach is combined with efficient template-based context bilevel coding, context collapsing methods for multilevel images, and arithmetic coding. Relative pixel patterns are used to collapse contexts. Expressions for calculating the resulting number of contexts are given. The new methods outperform existing schemes for coding digital maps and in addition provide progressive coding. Compared to the state-of-the-art PWC coder, the compressed size is reduced to 50-70% on our layered map test images.
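The template-based context modelling can be sketched as follows (a deliberately small 4-pixel causal template; the coder described above uses larger templates plus the context-collapsing rules): each already-coded neighbour contributes one bit to a context index, and the arithmetic coder keeps a separate adaptive probability estimate per context.

```python
def context_index(img, x, y):
    """Context for bilevel coding from a 4-pixel causal template
    (W, NW, N, NE). Pixels outside the image count as 0.
    Illustrative sketch with a deliberately small template."""
    def px(i, j):
        return img[j][i] if 0 <= i < len(img[0]) and 0 <= j < len(img) else 0
    neighbours = [px(x-1, y), px(x-1, y-1), px(x, y-1), px(x+1, y-1)]
    ctx = 0
    for bit in neighbours:        # pack the template into one integer index
        ctx = (ctx << 1) | bit
    return ctx                    # 0..15: one adaptive model per context
```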
On the Vocabulary of Grammar-Based Codes and the Logical Consistency of Texts
L. Debowski (Lukasz Jerzy)
2008-01-01
The article presents a new interpretation for Zipf's law in natural language which relies on two areas of information theory. We reformulate the problem of grammar-based compression and investigate properties of strongly nonergodic stationary processes. The motivation for the joint…
Proposing a Web-Based Tutorial System to Teach Malay Language Braille Code to the Sighted
Wah, Lee Lay; Keong, Foo Kok
2010-01-01
The "e-KodBrailleBM Tutorial System" is a web-based tutorial system which is specially designed to teach, facilitate and support the learning of Malay Language Braille Code to individuals who are sighted. The targeted group includes special education teachers, pre-service teachers, and parents. Learning Braille code involves memorisation…
Scout-view assisted interior digital tomosynthesis (iDTS) based on compressed-sensing theory
Park, S. Y.; Kim, G. A.; Cho, H. S.; Seo, C. W.; Je, U. K.; Park, C. K.; Lim, H. W.; Kim, K. S.; Lee, D. Y.; Lee, H. W.; Kang, S. Y.; Park, J. E.; Woo, T. H.; Lee, M. S.
2017-12-01
Conventional digital tomosynthesis (DTS) based on filtered-backprojection (FBP) reconstruction requires a full field-of-view scan and relatively dense projections, which still results in a high dose for medical imaging purposes. In this work, to overcome these difficulties, we propose a new type of DTS examination, the so-called scout-view assisted interior DTS (iDTS), in which the x-ray beam covers only a small region-of-interest (ROI) containing the diagnostic target, while a few scout views supply information about the exterior that is otherwise absent in conventional interior reconstruction methods. We considered an effective iterative algorithm based on compressed-sensing theory, rather than an FBP-based algorithm, for more accurate iDTS reconstruction. We implemented the proposed algorithm, performed a systematic simulation and experiment, and investigated the image characteristics. Using the proposed method, we successfully reconstructed iDTS images of substantially high accuracy and no truncation artifacts, preserving superior image homogeneity, edge sharpness, and in-plane spatial resolution.
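Schematically, the reconstruction can be posed as a total-variation-regularized problem constrained by both the interior projections and the scout views (a generic formulation; the paper's actual algorithm may differ in the fidelity terms and the solver):

```latex
\hat{x} = \arg\min_{x}\; \| x \|_{\mathrm{TV}}
\quad \text{s.t.} \quad
A_{\mathrm{ROI}}\, x = y_{\mathrm{ROI}},
\qquad
A_{\mathrm{scout}}\, x = y_{\mathrm{scout}}
```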
Directory of Open Access Journals (Sweden)
Wei Ke
2017-01-01
In order to enhance the accuracy of sound source localization in noisy and reverberant environments, this paper proposes an adaptive sound source localization method based on distributed microphone arrays. Since sound sources lie at only a few points in the discrete spatial domain, our method exploits this inherent sparsity to convert the localization problem into a sparse recovery problem based on compressive sensing (CS) theory. In this method, a two-step discrete cosine transform (DCT)-based feature extraction approach is utilized to cover both short-time and long-time properties of acoustic signals and to reduce the dimensions of the sparse model. In addition, an online dictionary learning (DL) method is used to adjust the dictionary to match changes in the audio signals, so that the sparse solution better represents the location estimates. Moreover, we propose an improved block-sparse reconstruction algorithm using approximate l0-norm minimization to enhance reconstruction performance for sparse signals under low signal-to-noise ratio (SNR) conditions. The effectiveness of the proposed scheme is demonstrated by simulation and experimental results, which show substantial improvement in localization performance under noisy and reverberant conditions.
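The approximate l0 minimization can be illustrated with a smoothed-l0 (SL0) style iteration (a generic sketch, not the authors' improved block-sparse algorithm): a Gaussian surrogate of the l0 norm is optimized by gradient steps, each followed by projection back onto the measurement constraint, while the smoothing width is annealed.

```python
import numpy as np

def smoothed_l0(A, y, sigma_decay=0.7, iters=30, inner=3):
    """SL0-style approximate l0 recovery: maximise sum(exp(-x^2/2s^2)),
    a smooth surrogate of -||x||_0, projecting onto {x : Ax = y} after
    each step while annealing s. Generic sketch of the SL0 idea."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                                  # minimum-l2 starting point
    s = 2 * np.abs(x).max()
    for _ in range(iters):
        for _ in range(inner):
            x = x - 0.5 * x * np.exp(-x**2 / (2 * s**2))  # shrink small entries
            x = x - A_pinv @ (A @ x - y)                  # project onto Ax = y
        s *= sigma_decay                                  # anneal the smoothing
    return x
```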
A Secure Network Coding-based Data Gathering Model and Its Protocol in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Qian Xiao
2012-09-01
To provide security for data gathering based on network coding in wireless sensor networks (WSNs), a secure network coding-based data gathering model is proposed, and a data-privacy preserving and pollution preventing (DPP&PP) protocol using network coding is designed. DPP&PP makes use of a newly proposed pollution symbol selection and pollution (PSSP) scheme, based on a new obfuscation idea, to pollute existing symbols. Analyses of DPP&PP show that it not only incurs low computation and communication overhead, but also provides strong resistance to brute-force attacks.
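The underlying network-coding primitive can be sketched as follows (plain random linear network coding over GF(2); the privacy and pollution-prevention layers of DPP&PP are not modelled here): nodes forward random linear combinations of the native packets, and the sink decodes by Gaussian elimination once it has enough independent combinations.

```python
import numpy as np

rng = np.random.default_rng(1)

def rlnc_encode(packets, n_out):
    """Random linear network coding over GF(2): each coded packet is a
    random XOR-combination of the native packets, shipped together with
    its coefficient vector. Plain RLNC sketch, no security layer."""
    P = np.array(packets, dtype=np.uint8)          # k x L native packets
    C = rng.integers(0, 2, (n_out, P.shape[0]), dtype=np.uint8)
    return C, C @ P % 2                            # coefficients, coded payloads

def rlnc_decode(C, Y):
    """Gaussian elimination over GF(2) to recover the native packets."""
    M = np.concatenate([C, Y], axis=1) % 2
    k = C.shape[1]
    row = 0
    for col in range(k):
        piv = next((r for r in range(row, M.shape[0]) if M[r, col]), None)
        if piv is None:
            raise ValueError("not enough independent combinations")
        M[[row, piv]] = M[[piv, row]]              # move pivot row up
        for r in range(M.shape[0]):
            if r != row and M[r, col]:
                M[r] ^= M[row]                     # eliminate in GF(2)
        row += 1
    return M[:k, k:]
```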
3D Video Compression and Transmission
DEFF Research Database (Denmark)
Zamarin, Marco; Forchhammer, Søren
In this short paper we provide a brief introduction to 3D and multi-view video technologies - like three-dimensional television and free-viewpoint video - focusing on the aspects related to data compression and transmission. Geometric information represented by depth maps is introduced, and a novel coding scheme for multi-view data able to exploit geometric information to improve compression performance is briefly described and compared against the classical solution based on multi-view motion estimation. Future research directions close the paper.
Directory of Open Access Journals (Sweden)
H. Prashantha Kumar
2011-09-01
Low density parity check (LDPC) codes are capacity-approaching codes: practical constructions exist that allow the noise threshold to be set very close to the theoretical Shannon limit for a memoryless channel. LDPC codes are finding increasing use in applications like LTE networks, digital television, high density data storage systems, deep space communication systems, etc. Several algebraic and combinatorial methods are available for constructing LDPC codes. In this paper we discuss a novel low complexity algebraic method for constructing regular LDPC-like codes derived from full rank codes. We demonstrate that by employing these codes over AWGN channels, coding gains in excess of 2 dB over uncoded systems can be realized when soft iterative decoding using a parity check tree is employed.
Compression for radiological images
Wilson, Dennis L.
1992-07-01
The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
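The DCT-based scheme can be illustrated with a standard 8x8 block-transform sketch (generic JPEG-style uniform quantization; the paper's variation that suppresses blocking in low-contrast areas is not reproduced here):

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(b): return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def compress_block(block, q=20):
    """Transform an 8x8 pixel block, quantise the coefficients (the lossy
    step), and reconstruct. A coarser q discards more of the local
    brightness-change detail. Generic JPEG-style sketch."""
    coeffs = np.round(dct2(block.astype(float)) / q)
    return idct2(coeffs * q)
```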
Leung, Chung Ming; Or, Siu Wing; Ho, S L
2013-12-01
A force sensing device capable of sensing dc (or static) compressive forces is developed based on a NAS106N stainless steel compressive spring, a sintered NdFeB permanent magnet, and a coil-wound Tb0.3Dy0.7Fe1.92/Pb(Zr,Ti)O3 magnetostrictive/piezoelectric laminate. The dc compressive force sensing in the device is evaluated theoretically and experimentally and is found to originate from a unique force-induced, position-dependent, current-driven dc magnetoelectric effect. The sensitivity of the device can be increased by increasing the spring constant of the compressive spring, the size of the permanent magnet, and/or the driving current for the coil-wound laminate. Devices of low-force (20 N) and high-force (200 N) types, showing high output voltages of 262 and 128 mV peak, respectively, are demonstrated at a low driving current of 100 mA peak by using different combinations of compressive spring and permanent magnet.
ECF2: A pulsed power generator based on magnetic flux compression for K-shell radiation production
International Nuclear Information System (INIS)
L'Eplattenier, P.; Lassalle, F.; Mangeant, C.; Hamann, F.; Bavay, M.; Bayol, F.; Huet, D.; Morell, A.; Monjaux, P.; Avrillaud, G.; Lalle, B.
2002-01-01
The ECF2 generator, which stores 3 MJ of energy, is being developed at Centre d'Etudes de Gramat, France, for K-shell radiation production. The generator is based on microsecond LTD stages as primary generators, and on a magnetic flux compression scheme for power amplification from the microsecond to the 100 ns regime. This paper presents a general overview of the ECF2 generator. The flux compression stage, a key component, is studied in detail, and its advantages and drawbacks are presented. We then present the first experimental and numerical results, which show the improvements that have already been made to this scheme.
Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei
2009-03-01
Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence is generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.
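The sum-rate saving such schemes pursue is bounded by the classical Slepian-Wolf region for lossless coding of correlated sources; for the left source $X$ and right source $Y$, the achievable rates satisfy:

```latex
R_X \ge H(X \mid Y), \qquad
R_Y \ge H(Y \mid X), \qquad
R_X + R_Y \ge H(X, Y)
```

Joint decoding can thus approach the joint entropy $H(X,Y)$ instead of the $H(X) + H(Y)$ required by fully independent coding of the two sequences.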
High efficiency video coding coding tools and specification
Wien, Mathias
2015-01-01
The video coding standard High Efficiency Video Coding (HEVC) targets improved compression performance for video resolutions of HD and beyond, providing Ultra HD video at bit rates similar to those of HD video encoded with the well-established video coding standard H.264 | AVC. Based on known concepts, new coding structures and improved coding tools have been developed and specified in HEVC. The standard is expected to be taken up easily by established industry as well as new endeavors, answering the needs of today's connected and ever-evolving online world. This book presents the High Efficiency Video Coding standard and explains it in clear and coherent language, providing a comprehensive and consistently written description. The book targets both newcomers to video coding and experts in the field: it provides introductory sections for the beginner and serves as a well-arranged reference book for the expert. The book provides a comprehensive reference for th...
Low-latency video transmission over high-speed WPANs based on low-power video compression
DEFF Research Database (Denmark)
Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Ann
2010-01-01
This paper presents latency-constrained video transmission over high-speed wireless personal area networks (WPANs). Low-power video compression is proposed as an alternative to uncompressed video transmission. A video source rate control based on MINMAX quality criteria is introduced. Practical...
Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics
Directory of Open Access Journals (Sweden)
Daniel Laney
2014-01-01
This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
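The evaluation loop can be sketched as follows (a toy uniform-quantisation "compressor" and a hypothetical conserved-energy metric stand in for the real codecs and the code-specific physics metrics used in the paper):

```python
import numpy as np

def lossy_compress(field, bits=10):
    """Toy stand-in for a lossy compressor: uniform quantisation of a
    float field to the given bit depth (roughly a 32/bits ratio)."""
    lo, hi = field.min(), field.max()
    q = np.round((field - lo) / (hi - lo) * (2**bits - 1))
    return q / (2**bits - 1) * (hi - lo) + lo

def physics_metric_drift(energy_before, energy_after):
    """Physics-based check: relative change in a conserved quantity
    (e.g., total energy) rather than a pointwise signal error."""
    return abs(energy_after - energy_before) / abs(energy_before)

# Hypothetical per-time-step use inside a simulation loop:
#   field = lossy_compress(field)
#   drift = physics_metric_drift(E0, total_energy(field))
```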