WorldWideScience

Sample records for previously jpeg compressed

  1. Lossless Compression of JPEG Coded Photo Collections.

    Science.gov (United States)

    Wu, Hao; Sun, Xiaoyan; Yang, Jingyu; Zeng, Wenjun; Wu, Feng

    2016-04-06

    The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared to the JPEG coded image collections, our method achieves average bit savings of more than 31%.

  2. Comparison of JPEG and wavelet compression on intraoral digital radiographic images

    International Nuclear Information System (INIS)

    Kim, Eun Kyung

    2004-01-01

    This study aimed to determine the proper image compression method and ratio for intraoral digital radiographic images without degrading image quality, by comparing the discrete cosine transform (DCT)-based JPEG algorithm with the wavelet-based JPEG 2000 algorithm. Thirty extracted sound teeth and thirty extracted teeth with occlusal caries were used for this study. Twenty plaster blocks were made with three teeth each. They were radiographically exposed using CDR sensors (Schick Inc., Long Island, USA). The digital images were compressed to JPEG format using Adobe Photoshop v. 7.0 and to JPEG 2000 format using the Jasper program, at compression ratios of 5:1, 9:1, 14:1, and 28:1. To evaluate lesion detectability, receiver operating characteristic (ROC) analysis was performed by three oral and maxillofacial radiologists. To evaluate image quality, all compressed images were assessed subjectively on a 5-grade scale in comparison with the original uncompressed images. Compressed images up to a ratio of 14:1 in JPEG and 28:1 in JPEG 2000 showed nearly the same lesion detectability as the original images. In the subjective assessment of image quality, images up to a ratio of 9:1 in JPEG and 14:1 in JPEG 2000 showed only minute mean paired differences from the original images. The results indicate that the clinically acceptable compression ratios are up to 9:1 for JPEG and 14:1 for JPEG 2000, and that wavelet-based JPEG 2000 is a better compression method than DCT-based JPEG for intraoral digital radiographic images.

  3. JPEG XS-based frame buffer compression inside HEVC for power-aware video compression

    Science.gov (United States)

    Willème, Alexandre; Descampe, Antonin; Rouvroy, Gaël.; Pellegrin, Pascal; Macq, Benoit

    2017-09-01

    With the emergence of Ultra-High Definition video, reference frame buffers (FBs) inside HEVC-like encoders and decoders have to sustain huge bandwidth. The power consumed by these external memory accesses accounts for a significant share of the codec's total consumption. This paper describes a solution to significantly decrease the FB's bandwidth, making the HEVC encoder more suitable for use in power-aware applications. The proposed prototype consists of integrating an embedded lightweight, low-latency and visually lossless codec at the FB interface inside HEVC in order to store each reference frame as several compressed bitstreams. As opposed to previous works, our solution compresses large picture areas (ranging from a CTU to a frame stripe) independently in order to better exploit the spatial redundancy found in the reference frame. This work investigates two data reuse schemes, namely Level-C and Level-D. Our approach is made possible thanks to simplified motion estimation mechanisms that further reduce the FB's bandwidth and induce very low quality degradation. In this work, we integrated JPEG XS, the upcoming standard for lightweight low-latency video compression, inside HEVC. In practice, the proposed implementation is based on HM 16.8 and on XSM 1.1.2 (JPEG XS Test Model). This paper describes the architecture of our HEVC with JPEG XS-based frame buffer compression and compares its performance to the HM encoder. Compared to previous works, our prototype provides significant external memory bandwidth reduction. Depending on the reuse scheme, one can expect bandwidth and FB size reductions ranging from 50% to 83.3% without significant quality degradation.

  4. Low-complexity Compression of High Dynamic Range Infrared Images with JPEG compatibility

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-01-01

    We propose a low-complexity High Dynamic Range (HDR) infrared image (IR) coding algorithm assuming the typical case of IR images with an active range of more than 8 bit depth, but less than 16 bit depth. First, we separate an input image into base and residual images with maximum 8 bit depth each. Then we compress each image by a JPEG baseline encoder and include the residual image bit stream into the application part of the JPEG header of the base image. As a result, the base image can be reconstructed by a JPEG baseline decoder. If the JPEG bit stream size of the residual image is higher than the raw data size, then we include the raw residual image instead. If the residual image contains only zero values or the quality factor for it is 0, then we do not include the residual image into the header. Experimental results show that, compared with JPEG-XT Part 6 with 'global Reinhard' tone-mapping, the proposed approach has lower complexity and similar rate-distortion performance on IR test images.
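
    A minimal sketch of the base/residual idea summarized above, under assumptions the abstract does not spell out: the 16-bit IR frame is split into an 8-bit base image (most significant bits) and an 8-bit residual (least significant bits), and Pillow stands in for the JPEG baseline encoder. The authors' actual split and the embedding of the residual stream into an APPn segment of the base image's header may differ.

```python
# Illustrative only: MSB/LSB split of a 16-bit IR frame into two 8-bit planes,
# each coded with a baseline JPEG encoder (Pillow as a stand-in). The residual
# bit stream would then be placed in an APPn segment of the base image header.
import io
import numpy as np
from PIL import Image

def encode_ir(frame16, base_quality=90, residual_quality=50):
    base = (frame16 >> 8).astype(np.uint8)        # 8 most significant bits
    residual = (frame16 & 0xFF).astype(np.uint8)  # 8 least significant bits

    def jpeg_bytes(plane, quality):
        buf = io.BytesIO()
        Image.fromarray(plane).save(buf, format="JPEG", quality=quality)
        return buf.getvalue()

    base_jpeg = jpeg_bytes(base, base_quality)
    residual_jpeg = jpeg_bytes(residual, residual_quality)

    # Per the abstract: fall back to the raw residual when its JPEG stream is
    # larger, and omit the residual entirely when it is all zeros.
    if not residual.any():
        payload = b""
    elif len(residual_jpeg) > residual.nbytes:
        payload = residual.tobytes()
    else:
        payload = residual_jpeg
    return base_jpeg, payload
```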

  5. A JPEG backward-compatible HDR image compression

    Science.gov (United States)

    Korshunov, Pavel; Ebrahimi, Touradj

    2012-10-01

    High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is however hindered by the lack of standards in evaluation of quality, file formats, and compression, as well as the large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms were developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of the perceived quality of tone-mapped LDR images on environmental parameters and image content. Based on the results of the subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward-compatible manner to also handle HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework compared to state-of-the-art HDR image compression.

  6. A block-based JPEG-LS compression technique with lossless region of interest

    Science.gov (United States)

    Deng, Lihua; Huang, Zhenghua; Yao, Shoukui

    2018-03-01

    The JPEG-LS lossless compression algorithm is used in many specialized applications that emphasize high fidelity, because of its lower complexity and better compression ratios than the lossless JPEG standard. However, it cannot prevent error diffusion because of the context dependence of the algorithm, and its compression rate is low compared with lossy compression. In this paper, we first divide the image into two parts: region-of-interest (ROI) regions and non-ROI regions. We then adopt a block-based image compression technique to limit the range of error diffusion. We apply JPEG-LS lossless compression to the image blocks that include the whole or part of the ROI, and JPEG-LS near-lossless compression to the image blocks that lie entirely in the non-ROI (unimportant) regions. Finally, a set of experiments is designed to assess the effectiveness of the proposed compression method.
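
    A rough sketch of the block-based decision described above. A real implementation would call a JPEG-LS codec (for example through a CharLS binding); here `jpegls_encode` is only a placeholder that mimics near-lossless quantization followed by generic entropy coding, so the snippet illustrates the ROI/non-ROI choice rather than actual JPEG-LS behavior.

```python
# Sketch only: choose lossless (NEAR=0) or near-lossless (NEAR>0) coding per
# block depending on whether the block overlaps the ROI mask. jpegls_encode is
# a placeholder, not a real JPEG-LS implementation.
import zlib
import numpy as np

def jpegls_encode(tile, near=0):
    if near:                                   # crude stand-in for near-lossless mode
        tile = (tile.astype(np.int32) // (2 * near + 1)).astype(np.uint8)
    return zlib.compress(np.ascontiguousarray(tile).tobytes())

def compress_blocks(image, roi_mask, block=64, near=2):
    """Return (y, x, near_value, bitstream) tuples, one per block."""
    out = []
    for y in range(0, image.shape[0], block):
        for x in range(0, image.shape[1], block):
            tile = image[y:y + block, x:x + block]
            n = 0 if roi_mask[y:y + block, x:x + block].any() else near
            out.append((y, x, n, jpegls_encode(tile, near=n)))
    return out
```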

  7. The effect of JPEG compression on automated detection of microaneurysms in retinal images

    Science.gov (United States)

    Cree, M. J.; Jelinek, H. F.

    2008-02-01

    As JPEG compression at source is ubiquitous in retinal imaging, and the block artefacts it introduces are known to be of similar size to microaneurysms (an important indicator of diabetic retinopathy), it is prudent to evaluate the effect of JPEG compression on automated detection of retinal pathology. Retinal images were acquired at high quality and then compressed to various lower qualities. An automated microaneurysm detector was run on the retinal images at the various levels of JPEG compression, and the ability to predict the presence of diabetic retinopathy from the detected microaneurysms was evaluated with receiver operating characteristic (ROC) methodology. A negative effect of JPEG compression on automated detection was observed even at levels of compression sometimes used in retinal eye-screening programmes, which may have important clinical implications for deciding on acceptable levels of compression for a fully automated eye-screening programme.

  8. JPEG2000 vs. full frame wavelet packet compression for smart card medical records.

    Science.gov (United States)

    Leehan, Joaquín Azpirox; Lerallut, Jean-Francois

    2006-01-01

    This paper describes a comparison among different compression methods to be used in the context of electronic health records in the newer version of "smart cards". The JPEG2000 standard is compared to a full-frame wavelet packet compression method at high (33:1 and 50:1) compression rates. Results show that the full-frame method outperforms the JPEG2K standard qualitatively and quantitatively.

  9. Passive forgery detection using discrete cosine transform coefficient analysis in JPEG compressed images

    Science.gov (United States)

    Lin, Cheng-Shian; Tsay, Jyh-Jong

    2016-05-01

    Passive forgery detection aims to detect traces of image tampering without the need for prior information. With the increasing demand for image content protection, passive detection methods able to identify tampered image areas are increasingly needed. However, most current passive approaches either work only for image-level JPEG compression detection and cannot localize region-level forgery, or suffer from high false-detection rates when localizing altered regions. This paper proposes an effective approach based on discrete cosine transform coefficient analysis for the detection and localization of altered regions in JPEG compressed images. The approach also works with altered JPEG images resaved in JPEG format with different quality factors. Experiments with various tampering methods, such as copy-and-paste, image completion, and composite tampering, show that the proposed approach is able to effectively detect and localize altered areas and is not sensitive to image content such as edges and textures.

  10. A lossless compression method for medical image sequences using JPEG-LS and interframe coding.

    Science.gov (United States)

    Miaou, Shaou-Gang; Ke, Fu-Sheng; Chen, Shu-Ching

    2009-09-01

    Hospitals and medical centers produce an enormous amount of digital medical images every day, especially in the form of image sequences, which require considerable storage space. One solution is the application of lossless compression. Among available methods, JPEG-LS has excellent coding performance; however, it only compresses a single picture with intracoding and does not utilize the interframe correlation among pictures. Therefore, this paper proposes a method that combines JPEG-LS with interframe coding based on motion vectors to enhance the compression performance over using JPEG-LS alone. Since the interframe correlation between two adjacent images in a medical image sequence is usually not as high as that in a general video sequence, the interframe coding is activated only when the interframe correlation is high enough. With six capsule endoscope image sequences under test, the proposed method achieves average compression gains of 13.3% and 26.3% over using JPEG-LS and JPEG2000 alone, respectively. Similarly, for an MRI image sequence, coding gains of 77.5% and 86.5% are obtained.

  11. Privacy protection in surveillance systems based on JPEG DCT baseline compression and spectral domain watermarking

    Science.gov (United States)

    Sablik, Thomas; Velten, Jörg; Kummert, Anton

    2015-03-01

    A novel system for automatic privacy protection in digital media, based on spectral-domain watermarking and JPEG compression, is described in the present paper. In a first step, private areas are detected: the implemented detection method uses Haar cascades to detect faces, integral images are used to speed up the calculations, and multiple detections of one face are combined. Succeeding steps comprise embedding the data into the image as part of JPEG compression using spectral-domain methods, and protecting the area of privacy. The embedding process is integrated into and adapted to JPEG compression. A spread-spectrum watermarking method is used to embed the size and position of the private areas into the cover image. Different embedding methods are compared with regard to their robustness. Moreover, the performance of the method on tampered images is presented.
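
    The detection stage described above can be illustrated with OpenCV's stock Haar cascade, which internally relies on integral images; overlapping detections of the same face are merged through the `minNeighbors` grouping parameter. This is only a generic sketch of face detection, not the authors' pipeline or their watermark embedding.

```python
# Generic Haar-cascade face detection with OpenCV; overlapping detections of
# the same face are merged via the minNeighbors grouping parameter.
import cv2

def detect_private_areas(image_bgr):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     minSize=(30, 30))
    return [(x, y, w, h) for (x, y, w, h) in faces]  # sizes/positions to embed
```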

  12. Noisy images-JPEG compressed: subjective and objective image quality evaluation

    Science.gov (United States)

    Corchs, Silvia; Gasparini, Francesca; Schettini, Raimondo

    2014-01-01

    The aim of this work is to study the image quality of both singly and multiply distorted images. We address images corrupted by Gaussian noise or JPEG compression as the single-distortion cases, and images corrupted by Gaussian noise and then JPEG compressed as the multiple-distortion case. Subjective studies were conducted in two parts to obtain human judgments on the singly and multiply distorted images. We study how these subjective data correlate with state-of-the-art no-reference quality metrics. We also investigate how to properly combine no-reference metrics to achieve better performance. Results are analyzed and compared in terms of correlation coefficients.

  13. Comparing subjective and objective quality assessment of HDR images compressed with JPEG-XT

    DEFF Research Database (Denmark)

    Mantel, Claire; Ferchiu, Stefan Catalin; Forchhammer, Søren

    2014-01-01

    In this paper a subjective test in which participants evaluate the quality of JPEG-XT compressed HDR images is presented. Results show that for the selected test images and display, the subjective quality reached its saturation point starting around 3 bpp. Objective evaluations are obtained...

  14. Use of a JPEG-2000 Wavelet Compression Scheme for Content-Based Ophtalmologic Retinal Images Retrieval.

    Science.gov (United States)

    Lamard, Mathieu; Daccache, Wissam; Cazuguel, Guy; Roux, Christian; Cochener, Beatrice

    2005-01-01

    In this paper we propose a content-based image retrieval method for diagnosis aid in diabetic retinopathy. We characterize images without extracting significant features, and use histograms obtained from images compressed with the JPEG-2000 wavelet scheme to build signatures. Retrieval is carried out by calculating signature distances between the query and database images, using a weighted distance between histograms. Retrieval efficiency is given for different standard types of JPEG-2000 wavelets and for different values of the histogram weights. A classified diabetic retinopathy image database was built to allow algorithm testing. On this image database, results are promising: the retrieval efficiency is higher than 70% for some lesion types.
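
    As a hedged illustration of the signature idea, the sketch below builds per-subband histograms from a wavelet decomposition computed with PyWavelets (rather than from an actual JPEG-2000 codestream) and compares two signatures with a weighted L1 distance. The wavelet, decomposition level, bin count, and weights are placeholders, not the values used in the paper.

```python
# Histogram signatures over wavelet subbands and a weighted distance between
# signatures. PyWavelets (pywt) is assumed to be installed.
import numpy as np
import pywt

def signature(img, wavelet="db2", level=2, bins=32):
    coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=level)
    subbands = [coeffs[0]] + [b for lvl in coeffs[1:] for b in lvl]
    hists = []
    for sb in subbands:
        h, _ = np.histogram(sb, bins=bins)
        hists.append(h / max(h.sum(), 1))        # normalized per-subband histogram
    return hists

def weighted_distance(sig_query, sig_db, weights=None):
    weights = weights or [1.0] * len(sig_query)
    return sum(w * np.abs(hq - hd).sum()
               for w, hq, hd in zip(weights, sig_query, sig_db))
```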

  15. Low-complexity Compression of High Dynamic Range Infrared Images with JPEG compatibility

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-01-01

    We propose a low-complexity High Dynamic Range (HDR) infrared image (IR) coding algorithm assuming the typical case of IR images with an active range of more than 8 bit depth, but less than 16 bit depth. First, we separate an input image into base and residual images with maximum 8 bit depth each. Then we compress each image by a JPEG baseline encoder and include the residual image bit stream into the application part of the JPEG header of the base image. As a result, the base image can be reconstructed by a JPEG baseline decoder. If the JPEG bit stream size of the residual image is higher than the raw data size, then we include the raw residual image instead. If the residual image contains only zero values or the quality factor for it is 0, then we do not include the residual image into the header. Experimental results show that, compared with JPEG-XT Part 6 with 'global Reinhard' tone-mapping, the proposed approach has lower complexity and similar rate-distortion performance on IR test images.

  17. Direct digital lateral cephalometry: the effects of JPEG compression on image quality.

    Science.gov (United States)

    Wenger, N A; Tewson, D H T K; McDonald, F

    2006-07-01

    This study used an aluminium test object to assess the effect of the Joint Photographic Experts Group (JPEG) compression algorithm on direct digital cephalometric image quality. The aluminium block of 15 steps, with 20 holes in each step, was radiographed in a Planmeca Proline 2002 digital cephalometric machine with Dimaxis2 software. Six different JPEG compression settings were used to capture the cephalometric images: 60%, 70%, 80%, 90%, top-quality JPEG (TQJPEG, 98%) and TIFF (uncompressed). The images were taken at 68 kV and 12 mA with a 7 s exposure. Six experienced observers viewed the monitor-displayed images, which were presented randomly; this was repeated one month later. The number of holes detected by each observer was plotted against each compression setting. Intra-observer and inter-observer reproducibility was assessed using the Mann-Whitney U-test, and differences between the compression settings were assessed using a Kruskal-Wallis one-way analysis of variance. For intra-observer reproducibility, only four of 36 comparisons showed a statistically significant difference (Observer 1: 60% (P=0.004), TQJPEG (P=0.019); Observer 2: TIFF (P=0.005); Observer 3: 90% (P=0.007)). There was no statistically significant difference in inter-observer reproducibility, and no statistically significant difference between the image quality obtained at each compression setting. The results showed that JPEG compression does not have any effect on the perceptibility of landmarks in the aluminium test object used in this study.
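
    For readers unfamiliar with the two tests named above, a toy example of how such comparisons could be run in SciPy is shown below; the hole counts are invented for illustration and are not the study's data.

```python
# Illustrative use of the Mann-Whitney U-test (intra-observer reproducibility)
# and the Kruskal-Wallis test (differences across compression settings).
from scipy.stats import mannwhitneyu, kruskal

first_read  = [18, 19, 17, 20, 18, 19]   # holes seen by one observer, session 1
second_read = [19, 18, 18, 20, 17, 19]   # same observer, one month later
u_stat, p_intra = mannwhitneyu(first_read, second_read)

tiff = [20, 19, 20, 19, 20, 19]
q90  = [20, 19, 19, 19, 20, 18]
q60  = [19, 18, 19, 18, 19, 18]
h_stat, p_settings = kruskal(tiff, q90, q60)

print(p_intra, p_settings)
```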

  18. Detection of copy-move image modification using JPEG compression model.

    Science.gov (United States)

    Novozámský, Adam; Šorel, Michal

    2018-02-01

    The so-called copy-move forgery, based on copying an object and pasting in another location of the same image, is a common way to manipulate image content. In this paper, we address the problem of copy-move forgery detection in JPEG images. The main problem with JPEG compression is that the same pixels, after moving to a different position and storing in the JPEG format, have different values. The majority of existing algorithms is based on matching pairs of similar patches, which generates many false matches. In many cases they cannot be eliminated by postprocessing, causing the failure of detection. To overcome this problem, we derive a JPEG-based constraint that any pair of patches must satisfy to be considered a valid candidate and propose an efficient algorithm to verify the constraint. The constraint can be integrated into most existing methods. Experiments show significant improvement of detection, especially for difficult cases, such as small objects, objects covered by textureless areas and repeated patterns. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. JPEG 2000-based compression of fringe patterns for digital holographic microscopy

    Science.gov (United States)

    Blinder, David; Bruylants, Tim; Ottevaere, Heidi; Munteanu, Adrian; Schelkens, Peter

    2014-12-01

    With the advent of modern computing and imaging technologies, digital holography is becoming widespread in various scientific disciplines such as microscopy, interferometry, surface shape measurement, vibration analysis, data encoding, and certification. Therefore, designing an efficient data representation technology is of particular importance. Off-axis holograms have very different signal properties from regular imagery, because they represent a recorded interference pattern with its energy biased toward the high-frequency bands. This causes traditional image coders, which assume an underlying 1/f² power spectral density, to perform suboptimally for this type of imagery. We propose a JPEG 2000-based codec framework that provides a generic architecture suitable for the compression of many types of off-axis holograms. This framework has a JPEG 2000 codec at its core, extended with (1) fully arbitrary wavelet decomposition styles and (2) directional wavelet transforms. Using this codec, we report significant improvements in coding performance for off-axis holography relative to the conventional JPEG 2000 standard, with Bjøntegaard delta-peak signal-to-noise ratio improvements ranging from 1.3 to 11.6 dB for lossy compression in the 0.125 to 2.00 bpp range, and bit-rate reductions of up to 1.6 bpp for lossless compression.

  20. A new anti-forensic scheme--hiding the single JPEG compression trace for digital image.

    Science.gov (United States)

    Cao, Yanjun; Gao, Tiegang; Sheng, Guorui; Fan, Li; Gao, Lin

    2015-01-01

    To prevent image forgeries, a number of forensic techniques for digital images have been developed that can detect an image's origin, trace its processing history, and locate the position of tampering. In particular, the statistical footprint left by the JPEG compression operation can be a valuable source of information for the forensic analyst, and several image forensic algorithms based on image statistics in the DCT domain have been proposed. Recently, it has been shown that these footprints can be removed by adding a suitable anti-forensic dithering signal to the image in the DCT domain, which invalidates some image forensic algorithms. In this paper, a novel anti-forensic algorithm is proposed that is capable of concealing the quantization artifacts left in a singly JPEG-compressed image. In the scheme, a chaos-based dither is added to the image's DCT coefficients to remove such artifacts. The effectiveness of the scheme and the resulting loss of image quality are evaluated through experiments. The simulation results show that the proposed anti-forensic scheme can be used to verify the reliability of JPEG forensic tools. © 2014 American Academy of Forensic Sciences.
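
    A conceptual sketch of the dithering idea: after ordinary dequantization, DCT coefficients sit exactly on multiples of the quantization steps, and adding a chaotic dither spreads them back out so the comb-like quantization footprint is hidden. The logistic map, its parameters, and the scaling below are illustrative assumptions, not the dither design from the paper.

```python
# Conceptual only: add a logistic-map dither to dequantized DCT coefficients.
# q_table must have the same shape as (or broadcast against) the coefficients,
# e.g. an 8x8 quantization table applied to one 8x8 block.
import numpy as np

def logistic_dither(n, x0=0.37, r=3.99):
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)      # logistic map, values in (0, 1)
        out[i] = x - 0.5           # roughly centered on 0
    return out

def dequantize_with_dither(quantized_coeffs, q_table):
    coeffs = quantized_coeffs.astype(float) * q_table   # ordinary dequantization
    dither = logistic_dither(coeffs.size).reshape(coeffs.shape) * q_table
    return coeffs + dither         # fills the gaps between quantization bins
```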

  1. Evaluation of compression ratio using JPEG 2000 on diagnostic images in dentistry

    International Nuclear Information System (INIS)

    Jung, Gi Hun; Han, Won Jeong; Yoo, Dong Soo; Kim, Eun Kyung; Choi, Soon Chul

    2005-01-01

    To find proper compression ratios that do not degrade image quality or affect lesion detectability for diagnostic images used in dentistry compressed with the JPEG 2000 algorithm. Sixty Digora periapical images, sixty panoramic computed radiographic (CR) images, sixty computed tomography (CT) images, and sixty magnetic resonance (MR) images were compressed with JPEG 2000 at ten ratios from 5:1 to 50:1. To evaluate lesion detectability, the images were graded on 5 levels (1: definitely absent; 2: probably absent; 3: equivocal; 4: probably present; 5: definitely present), and receiver operating characteristic analysis was performed using the original image as the gold standard. To evaluate image quality subjectively, the images were graded on 5 levels (1: definitely unacceptable; 2: probably unacceptable; 3: equivocal; 4: probably acceptable; 5: definitely acceptable), and a paired t-test was performed. In Digora, CR panoramic, and CT images, compressed images up to a ratio of 15:1 showed nearly the same lesion detectability as the original images; in MR images, this held up to a ratio of 25:1. In Digora and CR panoramic images, compressed images up to a ratio of 5:1 showed little difference from the original images in the subjective assessment of image quality; in CT images this held up to 10:1, and in MR images up to 15:1. We consider compression ratios up to 5:1 for Digora and CR panoramic images, up to 10:1 for CT images, and up to 15:1 for MR images to be clinically applicable.

  2. Context-dependent JPEG backward-compatible high-dynamic range image compression

    Science.gov (United States)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high-frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards in evaluation of quality, file formats, and compression, as well as the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. We, via a series of subjective evaluations, demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, this paper proposes to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.

  3. INCREASE OF STABILITY AT JPEG COMPRESSION OF THE DIGITAL WATERMARKS EMBEDDED IN STILL IMAGES

    Directory of Open Access Journals (Sweden)

    V. A. Batura

    2015-07-01

    Subject of Research. The paper deals with the creation and study of a method for increasing the robustness to JPEG compression of digital watermarks embedded in still images. Method. A new digital watermarking algorithm for still images is presented, which embeds the watermark by modifying the frequency coefficients of the discrete Hadamard transform. The frequency coefficients used for embedding are chosen based on the presence of a sharp change in their values after modification at maximum JPEG compression. The blocks of pixels used for embedding are chosen based on their entropy. The new algorithm was analyzed for resistance to image compression, noise, filtering, resizing, color change, and histogram equalization. The Elham algorithm, which has good resistance to JPEG compression, was chosen for comparative analysis. Nine gray-scale images were selected as objects for protection. The imperceptibility of the embedded distortions was assessed by the peak signal-to-noise ratio, which should be no lower than 43 dB. The robustness of the embedded watermark was measured by the Pearson correlation coefficient, whose value should not fall below 0.5 for the minimum allowed robustness. The computing experiment comprised: embedding the watermark into each test image with the new algorithm and the Elham algorithm; introducing distortions into the protected object; and extracting the embedded information and comparing it with the original. The parameters of the algorithms were chosen so as to introduce approximately the same level of distortion into the images. Main Results. The method of preliminary processing of the digital watermark presented in the paper makes it possible to significantly reduce the volume of information embedded in the still image. The results of the numerical experiment have shown that the...

  4. Image Quality Assessment of JPEG Compressed Mars Science Laboratory Mastcam Images using Convolutional Neural Networks

    Science.gov (United States)

    Kerner, H. R.; Bell, J. F., III; Ben Amor, H.

    2017-12-01

    The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
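
    A small sketch of the automatic-labeling quantity mentioned above: the joint entropy of co-occurring pixel values in the uncompressed and compressed versions of an image. The bin count, and how a resulting entropy value is mapped to an "acceptable quality" label, are assumptions not specified here.

```python
# Joint entropy of the 2-D co-occurrence histogram of reference vs. compressed
# pixel values, usable as an automatic training label for a quality predictor.
import numpy as np

def joint_entropy(reference, compressed, bins=64):
    hist2d, _, _ = np.histogram2d(reference.ravel(), compressed.ravel(), bins=bins)
    p = hist2d / hist2d.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())   # in bits
```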

  5. JPEG XS, a new standard for visually lossless low-latency lightweight image compression

    Science.gov (United States)

    Descampe, Antonin; Keinert, Joachim; Richter, Thomas; Fößel, Siegfried; Rouvroy, Gaël.

    2017-09-01

    JPEG XS is an upcoming standard from the JPEG Committee (formally known as ISO/IEC SC29 WG1). It aims to provide an interoperable visually lossless low-latency lightweight codec for a wide range of applications including mezzanine compression in broadcast and Pro-AV markets. This requires optimal support of a wide range of implementation technologies such as FPGAs, CPUs and GPUs. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. In addition to the evaluation of the visual transparency of the selected technologies, a detailed analysis of the hardware and software complexity as well as the latency has been done to make sure that the new codec meets the requirements of the above-mentioned use cases. In particular, the end-to-end latency has been constrained to a maximum of 32 lines. Concerning the hardware complexity, neither encoder nor decoder should require more than 50% of an FPGA similar to Xilinx Artix 7 or 25% of an FPGA similar to Altera Cyclone 5. This process resulted in a coding scheme made of an optional color transform, a wavelet transform, the entropy coding of the highest magnitude level of groups of coefficients, and the raw inclusion of the truncated wavelet coefficients. This paper presents the details and status of the standardization process, a technical description of the future standard, and the latest performance evaluation results.

  6. A threshold-based fixed predictor for JPEG-LS image compression

    Science.gov (United States)

    Deng, Lihua; Huang, Zhenghua; Yao, Shoukui

    2018-03-01

    In JPEG-LS, the fixed predictor based on the median edge detector (MED) detects only horizontal and vertical edges, and thus produces large prediction errors in the vicinity of diagonal edges. In this paper, we propose a threshold-based edge detection scheme for the fixed predictor. The proposed scheme detects not only horizontal and vertical edges, but also diagonal edges. For certain thresholds, the proposed scheme simplifies to other existing schemes, so it can also be regarded as an integration of these schemes. For a suitable threshold, the accuracy of horizontal and vertical edge detection is higher than that of the existing median edge detector in JPEG-LS. Thus, the proposed fixed predictor outperforms the existing JPEG-LS predictors for all images tested, while the complexity of the overall algorithm remains at a similar level.
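
    For reference, the standard JPEG-LS fixed predictor (MED) that the threshold-based scheme extends is shown below; the paper's diagonal-edge extension itself is not reproduced here.

```python
# The standard JPEG-LS median edge detector (MED) fixed predictor.
def med_predict(a, b, c):
    """a = left, b = above, c = above-left neighbor of the current sample."""
    if c >= max(a, b):
        return min(a, b)      # edge detected: predict toward the smaller neighbor
    if c <= min(a, b):
        return max(a, b)      # edge detected: predict toward the larger neighbor
    return a + b - c          # smooth region: planar prediction
```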

  7. JPEG 2000 standards in digital preservation

    International Nuclear Information System (INIS)

    Clark, Richard

    2010-01-01

    Although JPEG 2000 is now a preferred format for archival image preservation, its use still requires care and consideration. This paper addresses how some of these issues can be addressed from a pragmatic perspective, and also looks at how the technology and file formats are developing to provide a full and consistent architecture for handling and searching large visual archives. New methods of addressing perceptual quality are discussed, showing that JPEG 2000 offers the best quality compression, although the metrics indicate that lower compression rates are recommended for preservation formats than might be assumed from previous simple metrics such as PSNR.

  8. Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, Christopher M. [Los Alamos National Laboratory

    2012-08-13

    How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that is not defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor-product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.
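
    The record gives only a high-level description, so the snippet below is a loose illustration of the general idea under stated assumptions (PyWavelets, an arbitrary damping factor and iteration count): masked samples are repeatedly replaced by a wavelet-smoothed reconstruction while known samples are re-imposed each iteration. It is not the report's filter-bank realization and does not carry its complexity guarantees.

```python
# Loose illustration: iterative wavelet-smoothed fill-in of masked samples
# before JPEG 2000 encoding. PyWavelets (pywt) is assumed to be installed.
import numpy as np
import pywt

def wavelet_fill(data, mask, wavelet="db2", level=3, iters=20, damp=0.5):
    """mask == True marks undefined ('masked') samples; known samples stay fixed."""
    data = np.asarray(data, dtype=float)
    filled = np.where(mask, data[~mask].mean(), data)   # start from the global mean
    for _ in range(iters):
        coeffs = pywt.wavedec2(filled, wavelet, level=level)
        coeffs = [coeffs[0]] + [tuple(damp * d for d in lvl) for lvl in coeffs[1:]]
        smooth = pywt.waverec2(coeffs, wavelet)[:data.shape[0], :data.shape[1]]
        filled = np.where(mask, smooth, data)            # re-impose known samples
    return filled
```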

  9. On the integer coding profile of JPEG XT

    Science.gov (United States)

    Richter, Thomas

    2014-09-01

    JPEG XT (ISO/IEC 18477), the latest standardization initiative of the JPEG committee, defines an image compression standard backward compatible with the well-known JPEG standard (ISO/IEC 10918-1). JPEG XT extends JPEG with features such as coding of images of higher bit depth, coding of floating-point image formats, and lossless compression, all of which remain backward compatible with the legacy JPEG standard. In this work, the author presents profiles of JPEG XT that are especially suited for hardware implementations because they require only integer logic. All functional blocks of a JPEG XT codec are here implemented with integer or fixed-point logic. A performance analysis and comparison with other profiles of JPEG XT concludes the work.

  10. American College of Cardiology/European Society of Cardiology international study of angiographic data compression phase II. The effects of varying JPEG data compression levels on the quantitative assessment of the degree of stenosis in digital coronary angiography.

    Science.gov (United States)

    Tuinenburg, J C; Koning, G; Hekking, E; Zwinderman, A H; Becker, T; Simon, R; Reiber, J H

    2000-04-01

    This report describes whether lossy Joint Photographic Experts Group (JPEG) image compression/decompression has an effect on the quantitative assessment of vessel sizes by state-of-the-art quantitative coronary arteriography (QCA). The Digital Imaging and Communications in Medicine (DICOM) digital exchange standard for angiocardiography prescribes that images must be stored loss free, thereby limiting JPEG compression to a maximum ratio of 2:1. For practical purposes it would be desirable to increase the compression ratio (CR), which would lead to lossy image compression. A series of 48 obstructed coronary segments were compressed/decompressed at CR 1:1 (uncompressed), 6:1, 10:1 and 16:1 and analyzed blindly and in random order using the QCA-CMS analytical software. Similar catheter and vessel start- and end-points were used within each image quartet, respectively. All measurements were repeated after several weeks using newly selected start- and end-points. Three different sub-analyses were carried out: the intra-observer, fixed inter-compression and variable inter-compression analyses, with increasing potential error sources, respectively. The intra-observer analysis showed significant systematic and random errors in the calibration factor at JPEG CR 10:1. The fixed inter-compression analysis demonstrated systematic errors in the calibration factor and recalculated vessel parameter results at CR 16:1 and for the random errors at CR 10:1 and 16:1. The variable inter-compression analysis presented systematic and random errors in the calibration factor and recalculated parameter results at CR 10:1 and 16:1. Any negative effect at CR 6:1 was found only for the calibration factor of the variable inter-compression analysis, which did not show up in the final vessel measurements. Compression ratios of 10:1 and 16:1 affected the QCA results negatively and therefore should not be used in clinical research studies. Copyright 2000 The European Society of Cardiology.

  11. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components

    Science.gov (United States)

    2015-12-24

    For an exact circuit, the energy consumed over a clock period is E(t) = T·P(t), where T is the clock period and P(t) the power, and for the inexact circuit the energy is Ẽ(t) = T·P̃(t). The contribution of this research is to advance the state of the art of inexact computing by optimizing the JPEG compression algorithm using probabilistic Boolean logic applied to CMOS components.

  12. Dynamic code block size for JPEG 2000

    Science.gov (United States)

    Tsai, Ping-Sing; LeCornec, Yann

    2008-02-01

    Since its standardization, JPEG 2000 has found its way into many different applications such as DICOM (Digital Imaging and Communications in Medicine), satellite photography, military surveillance, the digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high-quality real-time compression possible even in video mode, i.e. Motion JPEG 2000. In this paper, we present a study of the compression impact of using dynamic code block sizes instead of the fixed code block size specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also point out the advantages of using dynamic code block sizes.

  13. Compression of Infrared images

    DEFF Research Database (Denmark)

    Mantel, Claire; Forchhammer, Søren

    2017-01-01

    This paper investigates the compression of infrared images with three codecs: JPEG2000, JPEG-XT and HEVC. Results are evaluated in terms of SNR, Mean Relative Squared Error (MRSE) and the HDR-VDP2 quality metric. JPEG2000 and HEVC perform fairly similarly and better than JPEG-XT. JPEG2000 performs...

  14. An FPGA-based JPEG 2000 Demonstration Board

    OpenAIRE

    Woolston, Tom; Holt, Niel; Bingham, Gail; Wada, Glen

    2005-01-01

    The Space Dynamics Laboratory has developed a hardware-based JPEG 2000 image compression solution and packaged it in a demonstration board. The board implements both Tier1 and Tier2 JPEG 2000 encoding in two Xilinx Virtex II FPGAs. The FPGA design was built as a first step toward developing JPEG 2000 image compression hardware that could be used for remote sensing on the ground, in the air, or in Earth orbit. This board has been used to demonstrate the power and flexibility of the JPEG 2000 s...

  15. New quantization matrices for JPEG steganography

    Science.gov (United States)

    Yildiz, Yesna O.; Panetta, Karen; Agaian, Sos

    2007-04-01

    Modern steganography is the secure communication of information by embedding a secret message within a "cover" digital multimedia file without any perceptual distortion of the cover, so that the presence of the hidden message is indiscernible. Recently, the Joint Photographic Experts Group (JPEG) format has attracted the attention of researchers as the main steganographic format for the following reasons: it is the most common format for storing images, JPEG images are very abundant on Internet bulletin boards and public Internet sites, and they are almost solely used for storing natural images. Well-known JPEG steganographic algorithms such as F5 and model-based steganography provide high message capacity with reasonable security. In this paper, we present a method to increase security when using JPEG images as the cover medium. The key element of the method is a new parametric, key-dependent quantization matrix. This new quantization table has practically the same performance as the standard JPEG table in terms of compression ratio and image statistics, so the resulting image is indiscernible from an image created with the standard JPEG compression algorithm. The paper presents the key-dependent quantization table algorithm and then analyzes the new table's performance.
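
    To convey the idea of a key-dependent quantization table, the toy sketch below derives a small deterministic perturbation of a given baseline table from a secret key. The paper's actual parametric construction is not reproduced; the ±1 perturbation is only an assumption chosen so that compression behavior stays close to the baseline table.

```python
# Toy illustration: perturb a baseline JPEG quantization table by at most +/-1
# using a PRNG seeded from a secret key, yielding a key-dependent table.
import hashlib
import numpy as np

def key_dependent_qtable(base_table, key: str):
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    delta = rng.integers(-1, 2, size=base_table.shape)      # values in {-1, 0, 1}
    return np.clip(base_table + delta, 1, 255).astype(np.uint8)
```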

  16. Lossy three-dimensional JPEG2000 compression of abdominal CT images: assessment of the visually lossless threshold and effect of compression ratio on image quality

    NARCIS (Netherlands)

    Ringl, Helmut; Schernthaner, Ruediger E.; Kulinna-Cosentini, Christiane; Weber, Michael; Schaefer-Prokop, Cornelia; Herold, Christian J.; Schima, Wolfgang

    2007-01-01

    PURPOSE: To retrospectively determine the maximum compression ratio at which compressed images are indistinguishable from the original by using a three-dimensional (3D) wavelet algorithm. MATERIALS AND METHODS: The protocol of this study was approved by the local Institutional Review Board and

  17. Extending JPEG-LS for low-complexity scalable video coding

    DEFF Research Database (Denmark)

    Ukhanova, Anna; Sergeev, Anton; Forchhammer, Søren

    2011-01-01

    JPEG-LS, the well-known international standard for lossless and near-lossless image compression, was originally designed for non-scalable applications. In this paper we propose a scalable modification of JPEG-LS and compare it with the leading image and video coding standards JPEG2000 and H.264/SVC...

  18. Visualization of JPEG Metadata

    Science.gov (United States)

    Malik Mohamad, Kamaruddin; Deris, Mustafa Mat

    There is a lot of information embedded in a JPEG image beyond just the graphics. Visualization of this metadata would benefit digital forensic investigators, allowing them to view embedded data, including in corrupted images where no graphics can be displayed, in order to assist in evidence collection for cases such as child pornography or steganography. Tools such as metadata readers, editors, and extraction tools are already available, but they mostly focus on visualizing the attribute information of the JPEG Exif segment. None, however, visualize metadata by consolidating the marker summary, header structure, Huffman tables, and quantization tables in a single program. In this paper, metadata visualization is performed by developing a program able to summarize all existing markers, the header structure, the Huffman tables, and the quantization tables in a JPEG file. The result shows that visualization of metadata makes it easier to view the hidden information within a JPEG file.
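
    A compact sketch of the marker-summary part of such a visualization: it scans a JPEG file, lists the markers it encounters together with their segment lengths, and stops at the start-of-scan marker. Decoding of the DQT and DHT payloads (the quantization and Huffman tables themselves) is left out.

```python
# Walk a JPEG file and summarize its markers and segment lengths.
import struct

NAMES = {0xD8: "SOI", 0xE0: "APP0", 0xE1: "APP1 (Exif)", 0xDB: "DQT",
         0xC0: "SOF0", 0xC4: "DHT", 0xDA: "SOS", 0xD9: "EOI"}

def list_markers(path):
    with open(path, "rb") as f:
        data = f.read()
    i, out = 0, []
    while i + 1 < len(data):
        if data[i] != 0xFF or data[i + 1] in (0x00, 0xFF):
            i += 1                           # skip non-marker, stuffed or fill bytes
            continue
        marker = data[i + 1]
        name = NAMES.get(marker, f"0x{marker:02X}")
        if marker in (0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
            out.append((name, 0))            # standalone markers carry no segment
            i += 2
        elif marker == 0xDA:
            out.append((name, None))         # entropy-coded scan data follows SOS
            break
        else:
            (length,) = struct.unpack(">H", data[i + 2:i + 4])
            out.append((name, length))       # length includes the 2 length bytes
            i += 2 + length
    return out
```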

  20. Image coding design considerations for cascaded encoding-decoding cycles and image editing: analysis of JPEG 1, JPEG 2000, and JPEG XR / HD Photo

    Science.gov (United States)

    Sullivan, Gary J.; Sun, Shijun; Regunathan, Shankar; Schonberg, Daniel; Tu, Chengjie; Srinivasan, Sridhar

    2008-08-01

    This paper discusses cascaded multiple encoding/decoding cycles and their effect on image quality for lossy image coding designs. Cascaded multiple encoding/decoding is an important operating scenario in professional editing industries. In such scenarios, it is common for a single image to be edited by several people while the image is compressed between editors for transit and archival. In these cases, it is important that decoding followed by re-encoding introduce minimal (or no) distortion across generations. A significant number of potential sources of distortion introduction exist in a cascade of decoding and re-encoding, especially if such processes as conversion between RGB and YUV color representations, 4:2:0 resampling, etc., are considered (and operations like spatial shifting, resizing, and changes of the quantization process or coding format). This paper highlights various aspects of distortion introduced by decoding and re-encoding, and remarks on the impact of these issues in the context of three still-image coding designs: JPEG, JPEG 2000, and JPEG XR. JPEG XR is a draft standard under development in the JPEG committee based on Microsoft technology known as HD Photo. The paper focuses particularly on the JPEG XR technology, and suggests that the design of the draft JPEG XR standard has several quite good characteristics in regard to re-encoding robustness.

  1. FBCOT: a fast block coding option for JPEG 2000

    Science.gov (United States)

    Taubman, David; Naman, Aous; Mathew, Reji

    2017-09-01

    Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).

  2. An evaluation of the effect of JPEG, JPEG2000, and H.264/AVC on CQR codes decoding process

    Science.gov (United States)

    Vizcarra Melgar, Max E.; Farias, Mylène C. Q.; Zaghetto, Alexandre

    2015-02-01

    This paper presents a binary matrix code based on the QR Code (Quick Response Code), denoted the CQR Code (Colored Quick Response Code), and evaluates the effect of JPEG, JPEG2000 and H.264/AVC compression on the decoding process. The proposed CQR Code has three additional colors (red, green and blue), which enables twice as much storage capacity when compared to the traditional black and white QR Code. Using the Reed-Solomon error-correcting code, the CQR Code model has a theoretical correction capability of 38.41%. The goal of this paper is to evaluate the effect that degradations inserted by common image compression algorithms have on the decoding process. Results show that a successful decoding process can be achieved for compression rates up to 0.3877 bits/pixel, 0.1093 bits/pixel and 0.3808 bits/pixel for the JPEG, JPEG2000 and H.264/AVC formats, respectively. The algorithm that presents the best performance is H.264/AVC, followed by JPEG2000 and JPEG.

  3. Low-complexity JPEG-based progressive video codec for wireless video transmission

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Forchhammer, Søren

    2010-01-01

    This paper discusses the question of video codec enhancement for wireless transmission of high-definition video data, taking into account constraints on memory and complexity. Starting from parameter adjustment for the JPEG2000 compression algorithm used for wireless transmission, and achieving the best possible results by tuning its settings, this work proceeds to develop a low-complexity progressive codec based on JPEG, which is compared to the tuned JPEG2000. A comparison to H.264/SVC for this codec is also given. As the results show, our simple solution at low rates can compete with JPEG2000 and H.264/SVC.

  4. Overview of the JPEG XS objective evaluation procedures

    Science.gov (United States)

    Willème, Alexandre; Richter, Thomas; Rosewarne, Chris; Macq, Benoit

    2017-09-01

    JPEG XS is a standardization activity conducted by the Joint Photographic Experts Group (JPEG), formally known as the ISO/IEC SC29 WG1 group, that aims at standardizing a low-latency, lightweight and visually lossless video compression scheme. This codec is intended to be used in applications where image sequences would otherwise be transmitted or stored in uncompressed form, such as in live production (through SDI or IP transport), display links, or frame buffers. Support for compression ratios ranging from 2:1 to 6:1 allows significant bandwidth and power reduction for signal propagation. This paper describes the objective quality assessment procedures conducted as part of the JPEG XS standardization activity. First, it discusses the objective part of the experiments that led to the technology selection during the 73rd WG1 meeting in late 2016. This assessment consists of PSNR measurements after single and multiple compression-decompression cycles at various compression ratios. After this assessment phase, two proposals among the six responses to the CfP were selected and merged to form the first JPEG XS test model (XSM). The paper then describes the core experiments (CEs) conducted so far on the XSM. These experiments are intended to evaluate its performance in more challenging scenarios, such as insertion of picture overlays and robustness to frame editing, to assess the impact of the different algorithmic choices, and to measure the XSM performance using the HDR-VDP metric.
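
    The single- and multi-generation PSNR measurement can be sketched as below. The JPEG XS test model itself is not used here; Pillow's baseline JPEG merely stands in as the codec, since the point is only the measurement loop.

```python
# Sketch of PSNR over repeated compression/decompression generations, with
# Pillow's baseline JPEG standing in for the codec under test.
import io
import numpy as np
from PIL import Image

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def psnr_over_generations(image_u8, generations=10, quality=85):
    current, curve = image_u8, []
    for _ in range(generations):
        buf = io.BytesIO()
        Image.fromarray(current).save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        current = np.asarray(Image.open(buf))
        curve.append(psnr(image_u8, current))   # each generation vs. the original
    return curve
```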

  5. Evaluation of JPEG 2000 encoder options: human and model observer detection of variable signals in X-ray coronary angiograms.

    Science.gov (United States)

    Zhang, Yani; Pham, Binh; Eckstein, Miguel P

    2004-05-01

    Previous studies have evaluated the effect of the new still-image compression standard JPEG 2000 using non-task-based image quality metrics, i.e., peak signal-to-noise ratio (PSNR), for nonmedical images. In this paper, the effect of JPEG 2000 encoder options was investigated using the performance of human and model observers (nonprewhitening matched filter with an eye filter, square-window Hotelling, Laguerre-Gauss Hotelling and channelized Hotelling model observers) for clinically relevant visual tasks. Two tasks were investigated: the signal known exactly but variable task (SKEV) and the signal known statistically task (SKS). Test images consisted of real X-ray coronary angiograms with simulated filling defects (signals) inserted in one of four simulated arteries. The signals varied in size and shape. Experimental results indicated that the dependence of task performance on the JPEG 2000 encoder options was similar for all model and human observers. Model observer performance in the more tractable and computationally economic SKEV task can be used to reliably estimate performance in the complex but clinically more realistic SKS task. JPEG 2000 encoder settings different from the default ones resulted in greatly improved model and human observer performance in the studied clinically relevant visual tasks using real angiography backgrounds.

  6. JPEG 2000-enabled client-server architecture for delivery and processing of MSI/HSI data

    Science.gov (United States)

    Kasner, James H.; Rajan, Sreekanth D.

    2004-08-01

    As the number of MSI/HSI data producers increases and the exploitation of this imagery matures, more users will request MSI/HSI data and products derived from it. This paper presents client-server architecture concepts for the storage, processing, and delivery of MSI/HSI data and derived products. A key component of this concept is the JPEG 2000 compression standard. JPEG 2000 is the first compression standard capable of preserving radiometric accuracy when compressing MSI/HSI data. JPEG 2000 enables client-server delivery of large data sets in which a client may select spatial and spectral regions of interest at a desired resolution and quality to facilitate rapid viewing of data. Using these attributes of JPEG 2000, we present concepts that facilitate thin-client server-side processing as well as traditional thick-client processing of MSI/HSI data.

  7. JPEG2000 Compatible Lossless Coding of Floating-Point Data

    Directory of Open Access Journals (Sweden)

    Usevitch Bryan E.

    2007-01-01

    Full Text Available Many scientific applications require that image data be stored in floating-point format due to the large dynamic range of the data. These applications pose a problem if the data needs to be compressed since modern image compression standards, such as JPEG2000, are only defined to operate on fixed-point or integer data. This paper proposes straightforward extensions to the JPEG2000 image compression standard which allow for the efficient coding of floating-point data. These extensions maintain desirable properties of JPEG2000, such as lossless and rate distortion optimal lossy decompression from the same coded bit stream, scalable embedded bit streams, error resilience, and implementation on low-memory hardware. Although the proposed methods can be used for both lossy and lossless compression, the discussion in this paper focuses on, and the test results are limited to, the lossless case. Test results on real image data show that the proposed lossless methods have raw compression performance that is competitive with, and sometime exceeds, current state-of-the-art methods.

  9. Lossless Compression of Broadcast Video

    DEFF Research Database (Denmark)

    Martins, Bo; Eriksen, N.; Faber, E.

    1998-01-01

    We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one ... cannot be expected to code losslessly at a rate of 125 Mbit/s. We investigate the rate and quality effects of quantization using standard JPEG-LS quantization and two new techniques: visual quantization and trellis quantization. Visual quantization is not part of baseline JPEG-LS, but is applicable ... in the framework of JPEG-LS. Visual tests show that this quantization technique gives much better quality than standard JPEG-LS quantization. Trellis quantization is a process by which the original image is altered in such a way as to make lossless JPEG-LS encoding more effective. For JPEG-LS and visual...

  10. Bit Plane Coding based Steganography Technique for JPEG2000 Images and Videos

    Directory of Open Access Journals (Sweden)

    Geeta Kasana

    2016-02-01

    Full Text Available In this paper, a Bit Plane Coding (BPC) based steganography technique for JPEG2000 images and Motion JPEG2000 video is proposed. Embedding in this technique is performed in the lowest significant bit planes of the wavelet coefficients of a cover image. In the JPEG2000 standard, the number of bit planes of wavelet coefficients to be used in encoding depends on the compression rate and is used in the Tier-2 process of JPEG2000. In the proposed technique, the Tier-1 and Tier-2 processes of JPEG2000 and Motion JPEG2000 are executed twice on the encoder side to collect information about the lowest bit planes of all code blocks of a cover image, which is utilized in embedding and transmitted to the decoder. After embedding the secret data, an Optimal Pixel Adjustment Process (OPAP) is applied to the stego images to enhance their visual quality. Experimental results show that the proposed technique provides larger embedding capacity and better visual quality of stego images than existing steganography techniques for JPEG2000 compressed images and videos. The extracted secret image is similar to the original secret image.
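
    Stripped of the JPEG2000 Tier-1/Tier-2 machinery, the embedding idea amounts to overwriting the least significant bit plane of integer wavelet coefficients. The sketch below illustrates only that step on a plain array of quantized coefficients; code-block handling, OPAP, and the double encoding pass described in the abstract are not reproduced, and all names are illustrative.

        import numpy as np

        def embed_lowest_bitplane(coeffs, message_bits):
            # Replace the least significant bit of the first coefficients with
            # message bits; the sign is preserved by operating on magnitudes.
            flat = coeffs.astype(np.int64).ravel().copy()
            if len(message_bits) > flat.size:
                raise ValueError('message longer than available coefficients')
            for i, bit in enumerate(message_bits):
                mag = abs(flat[i])
                mag = (mag & ~1) | int(bit)          # overwrite LSB of the magnitude
                flat[i] = mag if flat[i] >= 0 else -mag
            return flat.reshape(coeffs.shape)

        def extract_lowest_bitplane(coeffs, n_bits):
            flat = np.abs(coeffs.astype(np.int64).ravel())
            return [int(flat[i] & 1) for i in range(n_bits)]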

  11. JPEG 2000 for efficient imaging in a client/server environment

    Science.gov (United States)

    Boliek, Martin P.; Wu, Gene K.; Gormish, Michael J.

    2001-12-01

    The JPEG 2000 image compression system offers significant opportunity to improve imaging over the Internet. The JPEG 2000 standard is ideally suited to the client/server architecture of the web. With only one compressed version stored, a server can transmit an image with the resolution, quality, size, and region custom specified by an individual client. It can also serve an interactive zoom and pan client application. All of these can be achieved without server-side decoding while using only minimal server computation, storage, and bandwidth. This paper discusses some of the system issues involved in Internet imaging with JPEG 2000. The choices available to the client, the passing of control information, and the methods a server could use to serve the client requests are presented. These issues include use of the JPEG 2000 encoding and decoding options in the standard. Also covered are some proposed techniques that fall outside the existing standards.

  12. Effect of the rate of chest compression familiarised in previous training on the depth of chest compression during metronome-guided cardiopulmonary resuscitation: a randomised crossover trial.

    Science.gov (United States)

    Bae, Jinkun; Chung, Tae Nyoung; Je, Sang Mo

    2016-02-12

    To assess how the quality of metronome-guided cardiopulmonary resuscitation (CPR) was affected by the chest compression rate familiarised by training before the performance and to determine a possible mechanism for any effect shown. Prospective crossover trial of a simulated, one-person, chest-compression-only CPR. Participants were recruited from a medical school and two paramedic schools of South Korea. 42 senior students of a medical school and two paramedic schools were enrolled but five dropped out due to physical restraints. Senior medical and paramedic students performed 1 min of metronome-guided CPR with chest compressions only at a speed of 120 compressions/min after training for chest compression with three different rates (100, 120 and 140 compressions/min). Friedman's test was used to compare average compression depths based on the different rates used during training. Average compression depths were significantly different according to the rate used in training. The depth of chest compression during metronome-guided CPR is affected by the relative difference between the rate of metronome guidance and the chest compression rate practised in previous training. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  13. A document image model and estimation algorithm for optimized JPEG decompression.

    Science.gov (United States)

    Wong, Tak-Shing; Bouman, Charles A; Pollak, Ilya; Fan, Zhigang

    2009-11-01

    The JPEG standard is one of the most prevalent image compression schemes in use today. While JPEG was designed for use with natural images, it is also widely used for the encoding of raster documents. Unfortunately, JPEG's characteristic blocking and ringing artifacts can severely degrade the quality of text and graphics in complex documents. We propose a JPEG decompression algorithm which is designed to produce substantially higher quality images from the same standard JPEG encodings. The method works by incorporating into the decoding process a document image model which accounts for the wide variety of content in modern complex color documents. It first segments the JPEG encoded document into regions corresponding to background, text, and picture content. The regions corresponding to text and background are then decoded using maximum a posteriori (MAP) estimation. Most importantly, the MAP reconstruction of the text regions uses a model which accounts for the spatial characteristics of text and graphics. Our experimental comparisons to the baseline JPEG decoding as well as to three other decoding schemes demonstrate that our method substantially improves the quality of decoded images, both visually and as measured by PSNR.

  14. FPGA-based implementation for steganalysis: a JPEG-compatibility algorithm

    Science.gov (United States)

    Gutierrez-Fernandez, E.; Portela-García, M.; Lopez-Ongil, C.; Garcia-Valderas, M.

    2013-05-01

    Steganalysis is a process to detect hidden data in cover documents, such as digital images, videos, and audio files. It is the inverse process of steganography, which is the method used to hide secret messages. The widespread use of computers and network technologies makes digital files a very convenient means for storing secret data or transmitting secret messages through the Internet. Depending on the cover medium used to embed the data, there are different steganalysis methods. In the case of images, many steganalysis and steganographic methods focus on the JPEG image format, since JPEG is one of the most common formats. One of the main handicaps of steganalysis methods is processing speed, since it is usually necessary to process huge amounts of data or to process ongoing Internet traffic in real time. In this paper, a JPEG steganalysis system is implemented in an FPGA in order to speed up the detection process with respect to software-based implementations and to increase throughput. In particular, the implemented method is the JPEG-compatibility detection algorithm, which is based on the fact that when a JPEG image is modified, the resulting image is incompatible with the JPEG compression process.
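
    The JPEG-compatibility principle can be pictured in a few lines: the DCT coefficients of an 8x8 block that genuinely came out of a JPEG decoder sit close to integer multiples of the quantization steps, and spatial modifications destroy that alignment. The Python sketch below (using SciPy's orthonormal DCT, which matches the JPEG 8x8 transform scaling) is only a conceptual illustration; the FPGA pipeline of the paper and its actual decision rule are not reproduced, and the threshold is an arbitrary assumption.

        import numpy as np
        from scipy.fft import dctn

        def jpeg_compatibility_residual(block, quant_table):
            # For an untouched JPEG-decoded block, coeffs / quant_table is close
            # to integers; the rounding residual grows when pixels are modified.
            coeffs = dctn(block.astype(np.float64) - 128.0, type=2, norm='ortho')
            ratio = coeffs / quant_table
            return float(np.sum(np.abs(ratio - np.round(ratio))))

        def looks_modified(block, quant_table, threshold=2.0):
            # 'threshold' is an illustrative value, not taken from the paper.
            return jpeg_compatibility_residual(block, quant_table) > threshold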

  15. Motion-JPEG2000 codec compensated for interlaced scanning videos.

    Science.gov (United States)

    Ishida, Takuma; Muramatsu, Shogo; Kikuchi, Hisakazu

    2005-12-01

    This paper presents an implementation scheme of Motion-JPEG2000 (MJP2) integrated with invertible deinterlacing. In previous work, we developed an invertible deinterlacing technique that suppresses the comb-tooth artifacts which are caused by field interleaving for interlaced scanning videos, and which affect the quality of scalable frame-based codecs, such as MJP2. Our technique has two features: sampling density is preserved and image quality is recovered by an inverse process. When no codec is placed between the deinterlacer and the inverse process, the original video is perfectly reconstructed. Otherwise, it is almost completely recovered. We suggest an application scenario of this invertible deinterlacer for enhancing the signal-to-noise ratio scalability in frame-based MJP2 coding. The proposed system suppresses the comb-tooth artifacts at low bitrates, while enabling quality recovery through its inverse process at high bitrates within the standard bitstream format. The main purpose of this paper is to present a system that yields high quality recovery for an MJP2 codec. We demonstrate that our invertible deinterlacer can be embedded into the discrete wavelet transform employed in MJP2. As a result, the energy gain factor used to control rate-distortion characteristics can be compensated for optimal compression. Simulation results show that the recovery of quality is improved by, for example, more than 2.0 dB in peak signal-to-noise ratio by applying our proposed gain compensation when decoding the 8-bit grayscale Football sequence at 2.0 bpp.

  16. Parallel efficient rate control methods for JPEG 2000

    Science.gov (United States)

    Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko

    2017-09-01

    Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image split in code blocks, and subsequently, optimally truncate the set of generated bit streams according to the maximum target bit rate constraint. The literature proposes various strategies on how to estimate ahead of time where a block will get truncated in order to stop the execution prematurely and save time. However, none of them has been defined with a parallel implementation in mind. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed in GPUs. In order to do that, the design of our GPU-based codec is extended, allowing stopping the process at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to a speedup of up to 40% with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out, and used to select the best candidate to be deployed in a GPU encoder, which gave an extra 40% speedup in those situations where it was actually employed.
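
    For readers unfamiliar with PCRD-Opt, the sequential version of the truncation step can be sketched as follows: each code block exposes a list of cumulative (rate, distortion) truncation candidates, and a global distortion-rate slope threshold is lowered until the rate budget is met. This is a simplified illustration under the assumption that each block's candidates already lie on their convex hull; it says nothing about the GPU scheduling discussed in the paper, and the data layout is an assumption.

        def pcrd_select_truncations(blocks, rate_budget):
            # blocks[b] is a list of cumulative (rate, distortion) candidates for
            # code block b, starting at (0.0, D0) and sorted by increasing rate;
            # slopes are assumed to be decreasing (convex-hull candidates only).
            def slopes(points):
                return [(d0 - d1) / max(r1 - r0, 1e-12)
                        for (r0, d0), (r1, d1) in zip(points, points[1:])]

            all_slopes = sorted({s for pts in blocks for s in slopes(pts)}, reverse=True)

            def selection(threshold):
                chosen, total_rate = [], 0.0
                for pts in blocks:
                    k = 0
                    for i, s in enumerate(slopes(pts), start=1):
                        if s >= threshold:
                            k = i              # last candidate still worth its rate
                    chosen.append(k)
                    total_rate += pts[k][0]
                return chosen, total_rate

            best = [0] * len(blocks)
            for lam in all_slopes:             # lower the threshold until the budget is hit
                chosen, total = selection(lam)
                if total <= rate_budget:
                    best = chosen
                else:
                    break
            return best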

  17. DEVELOPING AN IMAGE PROCESSING APPLICATION THAT SUPPORTS NEW FEATURES OF JPEG2000 STANDARD

    Directory of Open Access Journals (Sweden)

    Evgin GÖÇERİ

    2007-03-01

    Full Text Available In recent years, developing technologies in multimedia have raised the importance of image processing and compression. Images that are reduced in size using lossless and lossy compression techniques, without degrading the quality of the image to an unacceptable level, take up much less space in memory. This enables them to be sent and received over the Internet or mobile devices in much shorter time. The wavelet-based image compression standard JPEG2000 was created by the Joint Photographic Experts Group (JPEG) committee to supersede the former JPEG standard. Work on various additions to this standard is still under development. In this study, an application has been developed in Visual C# 2005 which implements important image processing techniques such as edge detection and noise reduction. The important feature of this application is that it supports the JPEG2000 standard as well as other image types, and the implementation applies not only to two-dimensional images but also to multi-dimensional images. Modern software development platforms that support image processing have also been compared, and several features of the developed software have been identified.

  18. A No-Reference Sharpness Metric Based on Structured Ringing for JPEG2000 Images

    Directory of Open Access Journals (Sweden)

    Zhipeng Cao

    2014-01-01

    Full Text Available This work presents a no-reference image sharpness metric based on human blur perception for JPEG2000 compressed images. The metric mainly uses a ringing measure, and a blurring measure is used for compensation when the blur is so severe that ringing artifacts are concealed. We use anisotropic diffusion to obtain a preliminary ringing map and refine it by considering the properties of ringing structure. The ringing detection of the proposed metric does not depend on edge detection, which makes it suitable for highly degraded images. The characteristics of the ringing and blurring measures are analyzed and validated theoretically and experimentally. The performance of the proposed metric is tested and compared with that of some existing JPEG2000 sharpness metrics on three widely used databases. The experimental results show that the proposed metric is accurate and reliable in predicting the sharpness of JPEG2000 images.

  19. Toward privacy-preserving JPEG image retrieval

    Science.gov (United States)

    Cheng, Hang; Wang, Jingyue; Wang, Meiqing; Zhong, Shangping

    2017-07-01

    This paper proposes a privacy-preserving retrieval scheme for JPEG images based on local variance. Three parties are involved in the scheme: the content owner, the server, and the authorized user. The content owner encrypts JPEG images for privacy protection by jointly using permutation cipher and stream cipher, and then, the encrypted versions are uploaded to the server. With an encrypted query image provided by an authorized user, the server may extract blockwise local variances in different directions without knowing the plaintext content. After that, it can calculate the similarity between the encrypted query image and each encrypted database image by a local variance-based feature comparison mechanism. The authorized user with the encryption key can decrypt the returned encrypted images with plaintext content similar to the query image. The experimental results show that the proposed scheme not only provides effective privacy-preserving retrieval service but also ensures both format compliance and file size preservation for encrypted JPEG images.
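
    The blockwise local-variance feature used for matching can be computed with a few lines of NumPy; the sketch below shows the feature extraction and an illustrative similarity score. The encryption layer (permutation plus stream cipher) and the directional variances of the actual scheme are not reproduced here, and the distance measure is an assumption for illustration.

        import numpy as np

        def blockwise_variances(image, block=8):
            # Split a grayscale image into non-overlapping blocks and return the
            # local variance of each block as a feature vector.
            h, w = image.shape
            h, w = h - h % block, w - w % block
            blocks = image[:h, :w].reshape(h // block, block, w // block, block)
            return blocks.transpose(0, 2, 1, 3).reshape(-1, block * block).var(axis=1)

        def variance_similarity(feat_a, feat_b):
            # Illustrative similarity: negative mean absolute difference of the
            # per-block variances (higher means more similar).
            n = min(feat_a.size, feat_b.size)
            return -float(np.mean(np.abs(feat_a[:n] - feat_b[:n])))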

  20. Modifying JPEG binary arithmetic codec for exploiting inter/intra-block and DCT coefficient sign redundancies.

    Science.gov (United States)

    Lakhani, Gopal

    2013-04-01

    This article presents four modifications to the JPEG arithmetic coding (JAC) algorithm, a topic not studied well before. It then compares the compression performance of the modified JPEG with JPEG XR, the latest block-based image coding standard. We first show that the bulk of inter/intra-block redundancy, caused by the use of the block-based approach in JPEG, can be captured by applying efficient prediction coding. We propose the following modifications to JAC to take advantage of our prediction approach. 1) We code a totally different DC difference. 2) JAC tests a DCT coefficient by considering its bits in increasing order of significance for coding the most significant bit position. This causes plenty of redundancy because JAC always begins with the zeroth bit. We modify this coding order and propose alterations to the JPEG coding procedures. 3) We predict the sign of significant DCT coefficients, a problem not previously addressed from the perspective of the JPEG decoder. 4) We reduce the number of binary tests that JAC codes to mark end-of-block. We provide experimental results for two sets of eight-bit gray images. The first set consists of nine classical test images, mostly of size 512 × 512 pixels. The second set consists of 13 images of size 2000 × 3000 pixels or more. Our modifications to JAC obtain an extraordinary amount of code reduction without adding any kind of loss. More specifically, when we quantize the images using the default quantizers, our modifications reduce the total JAC code size of the images of these two sets by about 8.9 and 10.6%, and the JPEG Huffman code size by about 16.3 and 23.4%, respectively, on average. Gains are even higher for coarsely quantized images. Finally, we compare the modified JAC with two settings of JPEG XR, one with no block overlapping and the other with the default transform (we denote them by JXR0 and JXR1, respectively). Our results show that for the finest quality rate image coding, the modified ...

  1. Performance Evaluation of Data Compression Systems Applied to Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Lilian N. Faria

    2012-01-01

    Full Text Available Onboard image compression systems reduce the data storage and downlink bandwidth requirements in space missions. This paper presents an overview and evaluation of some compression algorithms suitable for remote sensing applications. Prediction-based compression systems, such as DPCM and JPEG-LS, and transform-based compression systems, such as CCSDS-IDC and JPEG-XR, were tested over twenty multispectral (5-band) images from the CCD optical sensor of the CBERS-2B satellite. Performance evaluation of these algorithms was conducted using both quantitative rate-distortion measurements and subjective image quality analysis. The PSNR, MSSIM, and compression ratio results plotted in charts and the SSIM maps are used for comparison of quantitative performance. Broadly speaking, the lossless JPEG-LS outperforms other lossless compression schemes, and, for lossy compression, JPEG-XR can provide a lower bit rate and a better tradeoff between compression ratio and image quality.

  2. Hybrid distortion function for JPEG steganography

    Science.gov (United States)

    Wang, Zichi; Zhang, Xinpeng; Yin, Zhaoxia

    2016-09-01

    A hybrid distortion function for JPEG steganography exploiting block fluctuation and quantization steps is proposed. To resist multidomain steganalysis, both the spatial domain and the discrete cosine transform (DCT) domain are involved in the proposed distortion function. In the spatial domain, a distortion value is allotted to each 8×8 block according to its block fluctuation. In the DCT domain, quantization steps are employed to allot distortion values to the DCT coefficients within a block. The two elements, block distortion and quantization steps, are combined together to measure the embedding risk. By employing syndrome-trellis coding to embed the secret data, the embedding changes are constrained to complex regions, where modifications are hard to detect. When compared to current state-of-the-art steganographic methods for JPEG images, the proposed method presents fewer detectable artifacts.
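
    The two ingredients named in the abstract, a per-block spatial cost from block fluctuation and a per-coefficient cost from quantization steps, can be combined into a cost map as sketched below. This is only one plausible combination (cost grows with the quantization step and shrinks with block fluctuation); the actual weighting rule used in the paper may differ, and all names are illustrative.

        import numpy as np

        def hybrid_embedding_costs(pixel_blocks, quant_table, eps=1e-6):
            # pixel_blocks: (n_blocks, 8, 8) spatial-domain blocks of the cover image.
            # quant_table:  8x8 JPEG quantization table.
            # Cost grows with the quantization step (a +/-1 change in a coarsely
            # quantized coefficient causes a larger spatial change) and shrinks
            # with block fluctuation (changes hide better in textured blocks).
            fluctuation = pixel_blocks.astype(np.float64).var(axis=(1, 2))
            block_cost = 1.0 / (fluctuation + eps)
            return block_cost[:, None, None] * quant_table.astype(np.float64)[None, :, :]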

  3. Effect of the rate of chest compression familiarised in previous training on the depth of chest compression during metronome-guided cardiopulmonary resuscitation: a randomised crossover trial

    OpenAIRE

    Bae, Jinkun; Chung, Tae Nyoung; Je, Sang Mo

    2016-01-01

    Objectives To assess how the quality of metronome-guided cardiopulmonary resuscitation (CPR) was affected by the chest compression rate familiarised by training before the performance and to determine a possible mechanism for any effect shown. Design Prospective crossover trial of a simulated, one-person, chest-compression-only CPR. Setting Participants were recruited from a medical school and two paramedic schools of South Korea. Participants 42 senior students of a medical school and two pa...

  4. Double-compression method for biomedical images

    Science.gov (United States)

    Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana

    2017-08-01

    This paper describes a double compression method (DCM) for biomedical images. A comparison of the image compression factors achieved by JPEG, PNG, and the developed DCM was carried out. The main purpose of the DCM is compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.

  5. On-demand rendering of an oblique slice through 3D volumetric data using JPEG2000 client-server framework

    Science.gov (United States)

    Joshi, Rajan L.

    2006-03-01

    In medical imaging, the popularity of image capture modalities such as multislice CT and MRI is resulting in an exponential increase in the amount of volumetric data that needs to be archived and transmitted. At the same time, the increased data is taxing the interpretation capabilities of radiologists. One of the workflow strategies recommended for radiologists to overcome the data overload is the use of volumetric navigation. This allows the radiologist to seek a series of oblique slices through the data. However, it might be inconvenient for a radiologist to wait until all the slices are transferred from the PACS server to a client, such as a diagnostic workstation. To overcome this problem, we propose a client-server architecture based on JPEG2000 and JPEG2000 Interactive Protocol (JPIP) for rendering oblique slices through 3D volumetric data stored remotely at a server. The client uses the JPIP protocol for obtaining JPEG2000 compressed data from the server on an as needed basis. In JPEG2000, the image pixels are wavelet-transformed and the wavelet coefficients are grouped into precincts. Based on the positioning of the oblique slice, compressed data from only certain precincts is needed to render the slice. The client communicates this information to the server so that the server can transmit only relevant compressed data. We also discuss the use of caching on the client side for further reduction in bandwidth requirements. Finally, we present simulation results to quantify the bandwidth savings for rendering a series of oblique slices.

  6. Enhanced JPEG2000 Quality Scalability through Block-Wise Layer Truncation

    Directory of Open Access Journals (Sweden)

    Auli-Llinas Francesc

    2010-01-01

    Full Text Available Quality scalability is an important feature of image and video coding systems. In JPEG2000, quality scalability is achieved through the use of quality layers that are formed in the encoder through rate-distortion optimization techniques. Quality layers provide optimal rate-distortion representations of the image when the codestream is transmitted and/or decoded at layer boundaries. Nonetheless, applications such as interactive image transmission, video streaming, or transcoding demand layer fragmentation. The common approach to truncate layers is to keep the initial prefix of the to-be-truncated layer, which may greatly penalize the quality of decoded images, especially when the layer allocation is inadequate. So far, only one method has been proposed in the literature providing enhanced quality scalability for compressed JPEG2000 imagery. However, that method provides quality scalability at the expense of high computational costs, which prevents its application to the aforementioned applications. This paper introduces a Block-Wise Layer Truncation (BWLT) that, requiring negligible computational costs, enhances the quality scalability of compressed JPEG2000 images. The main insight behind BWLT is to dismantle and reassemble the to-be-fragmented layer by selecting the most relevant codestream segments of codeblocks within that layer. The selection process is conceived from a rate-distortion model that finely estimates rate-distortion contributions of codeblocks. Experimental results suggest that BWLT achieves near-optimal performance even when the codestream contains a single quality layer.

  7. A high-capacity steganography scheme for JPEG2000 baseline system.

    Science.gov (United States)

    Zhang, Liang; Wang, Haili; Wu, Renbiao

    2009-08-01

    Hiding capacity is very important for efficient covert communications. For JPEG2000 compressed images, it is necessary to enlarge the hiding capacity because the available redundancy is very limited. In addition, the bitstream truncation makes it difficult to hide information. In this paper, a high-capacity steganography scheme is proposed for the JPEG2000 baseline system, which uses bit-plane encoding procedure twice to solve the problem due to bitstream truncation. Moreover, embedding points and their intensity are determined in a well defined quantitative manner via redundancy evaluation to increase hiding capacity. The redundancy is measured by bit, which is different from conventional methods which adjust the embedding intensity by multiplying a visual masking factor. High volumetric data is embedded into bit-planes as low as possible to keep message integrality, but at the cost of an extra bit-plane encoding procedure and slightly changed compression ratio. The proposed method can be easily integrated into the JPEG2000 image coder, and the produced stego-bitstream can be decoded normally. Simulation shows that the proposed method is feasible, effective, and secure.

  8. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    André Thomas

    2007-01-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. A full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.

  9. Digital Image Forgery Detection Using JPEG Features and Local Noise Discrepancies

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2014-01-01

    Full Text Available Wide availability of image processing software makes counterfeiting an easy and low-cost way to distort or conceal facts. Driven by the great need for valid forensic techniques, many methods have been proposed to expose such forgeries. In this paper, we proposed an integrated algorithm which was able to detect two commonly used fraud practices: copy-move and splicing forgery in digital pictures. To achieve this target, a special descriptor for each block was created, combining the feature from the JPEG block artifact grid with that from noise estimation. A beforehand image quality assessment procedure reconciled these different features by setting proper weights. Experimental results showed that, compared to existing algorithms, our proposed method is effective at detecting both copy-move and splicing forgery regardless of the JPEG compression ratio of the input image.

  12. Automated optimization of JPEG 2000 encoder options based on model observer performance for detecting variable signals in X-ray coronary angiograms.

    Science.gov (United States)

    Zhang, Yani; Pham, Binh T; Eckstein, Miguel P

    2004-04-01

    Image compression is indispensable in medical applications where inherently large volumes of digitized images are presented. JPEG 2000 has recently been proposed as a new image compression standard. The present recommendations on the choice of JPEG 2000 encoder options were based on nontask-based metrics of image quality applied to nonmedical images. We used the performance of a model observer [non-prewhitening matched filter with an eye filter (NPWE)] in a visual detection task of varying signals [signal known exactly but variable (SKEV)] in X-ray coronary angiograms to optimize JPEG 2000 encoder options through a genetic algorithm procedure. We also obtained the performance of other model observers (Hotelling, Laguerre-Gauss Hotelling, channelized-Hotelling) and human observers to evaluate the validity of the NPWE optimized JPEG 2000 encoder settings. Compared to the default JPEG 2000 encoder settings, the NPWE-optimized encoder settings improved the detection performance of humans and the other three model observers for an SKEV task. In addition, the performance also was improved for a more clinically realistic task where the signal varied from image to image but was not known a priori to observers [signal known statistically (SKS)]. The highest performance improvement for humans was at a high compression ratio (e.g., 30:1) which resulted in approximately a 75% improvement for both the SKEV and SKS tasks.

  13. High Bit-Depth Medical Image Compression With HEVC.

    Science.gov (United States)

    Parikh, Saurin S; Ruiz, Damian; Kalva, Hari; Fernandez-Escribano, Gerardo; Adzic, Velibor

    2018-03-01

    Efficient storing and retrieval of medical images has direct impact on reducing costs and improving access in cloud-based health care services. JPEG 2000 is currently the commonly used compression format for medical images shared using the DICOM standard. However, new formats such as high efficiency video coding (HEVC) can provide better compression efficiency compared to JPEG 2000. Furthermore, JPEG 2000 is not suitable for efficiently storing image series and 3-D imagery. Using HEVC, a single format can support all forms of medical images. This paper presents the use of HEVC for diagnostically acceptable medical image compression, focusing on compression efficiency compared to JPEG 2000. Diagnostically acceptable lossy compression and complexity of high bit-depth medical image compression are studied. Based on an established medically acceptable compression range for JPEG 2000, this paper establishes acceptable HEVC compression range for medical imaging applications. Experimental results show that using HEVC can increase the compression performance, compared to JPEG 2000, by over 54%. Along with this, a new method for reducing computational complexity of HEVC encoding for medical images is proposed. Results show that HEVC intra encoding complexity can be reduced by over 55% with negligible increase in file size.

  14. MODIS/Terra Granule Level 1B RGB Jpeg image

    Data.gov (United States)

    National Aeronautics and Space Administration — The MOBRGB is a thermal composite JPEG image product generated using parameters from the Terra Level 1B Subsampled Calibrated Radiances product (MOD02SSH). For more...

  15. MODIS/Aqua Granule Level 1B RGB Jpeg image

    Data.gov (United States)

    National Aeronautics and Space Administration — The MYBRGB is a thermal composite JPEG image product generated using parameters from the Aqua Level 1B Subsampled Calibrated Radiances product (MYD02SSH). For more...

  16. Image and video compression for HDR content

    Science.gov (United States)

    Zhang, Yang; Reinhard, Erik; Agrafiotis, Dimitris; Bull, David R.

    2012-10-01

    High Dynamic Range (HDR) technology can offer high levels of immersion with a dynamic range meeting and exceeding that of the Human Visual System (HVS). A primary drawback with HDR images and video is that memory and bandwidth requirements are significantly higher than for conventional images and video. Many bits can be wasted coding redundant imperceptible information. The challenge is therefore to develop means for efficiently compressing HDR imagery to a manageable bit rate without compromising perceptual quality. In this paper, we build on our previous work and propose a compression method for both HDR images and video, based on an HVS-optimised wavelet subband weighting method. The method has been fully integrated into a JPEG 2000 codec for HDR image compression and implemented as a pre-processing step for HDR video coding (an H.264 codec is used as the host codec for video compression). Experimental results indicate that the proposed method outperforms previous approaches and operates in accordance with characteristics of the HVS, tested objectively using an HDR Visible Difference Predictor (VDP). Aiming to further improve the compression performance of our method, we additionally present the results of a psychophysical experiment, carried out with the aid of a high dynamic range display, to determine the difference in the noise visibility threshold between HDR and Standard Dynamic Range (SDR) luminance edge masking. Our findings show that noise has increased visibility on the bright side of a luminance edge. Masking is more consistent on the darker side of the edge.

  17. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, since it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image will be measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest by varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter from a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regressed models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments, tested on thermal (long-wave infrared) images in gray scale, showed very promising results.
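
    The three-step scenario can be prototyped directly with Pillow and scikit-image, as sketched below: measure SSIM at a sweep of JPEG qualities, fit a simple regression, then invert it to choose the lowest quality meeting a target SSIM. The cubic polynomial, the quality sweep, and the restriction to a single codec (JPEG only, not BPG or TIFF/LZW) are illustrative simplifications of the paper's procedure.

        import io
        import numpy as np
        from PIL import Image
        from skimage.metrics import structural_similarity

        def ssim_for_quality(gray, quality):
            # Step 1: compress at a given JPEG quality and measure SSIM vs. the original.
            buf = io.BytesIO()
            Image.fromarray(gray).save(buf, format='JPEG', quality=int(quality))
            buf.seek(0)
            decoded = np.asarray(Image.open(buf).convert('L'))
            return structural_similarity(gray, decoded, data_range=255)

        def fit_quality_model(gray, qualities=range(10, 96, 5)):
            # Step 2: regress SSIM against the quality parameter (cubic fit, illustrative).
            q = np.array(list(qualities), dtype=float)
            s = np.array([ssim_for_quality(gray, v) for v in q])
            return np.polyfit(q, s, deg=3)

        def quality_for_target_ssim(model, target, lo=10, hi=95):
            # Step 3: pick the lowest quality whose predicted SSIM meets the target.
            for q in range(lo, hi + 1):
                if np.polyval(model, q) >= target:
                    return q
            return hi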

  18. Application of permutation to lossless compression of multispectral thematic mapper images

    Science.gov (United States)

    Arnavut, Ziya; Narumalani, Sunil

    1996-12-01

    The goal of data compression is to find shorter representations for any given data. In a data storage application, this is done in order to save storage space on an auxiliary device or, in the case of a communication scenario, to increase channel throughput. Because remotely sensed data require tremendous amounts of transmission and storage space, it is essential to find good algorithms that utilize the spatial and spectral characteristics of these data to compress them. A new technique is presented that uses spectral and spatial correlation to create orderly data for the compression of multispectral remote sensing data, such as those acquired by the Landsat Thematic Mapper (TM) sensor system. The method described simply compresses one of the bands using standard Joint Photographic Experts Group (JPEG) compression, and then orders the next band's data with respect to the previous sorting permutation. Then, the move-to-front coding technique is used to lower the source entropy before actually encoding the data. Owing to the correlation between visible bands of TM images, it was observed that this method yields tremendous gains on these bands (on average 0.3 to 0.5 bits/pixel compared with lossless JPEG) and can be successfully used for multispectral images where the spectral distances between bands are close.
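
    Move-to-front coding, mentioned above as the entropy-lowering step, is easy to state precisely. The sketch below is a generic MTF encoder/decoder pair and is not tied to the TM band-ordering scheme of the paper.

        def mtf_encode(symbols, alphabet):
            # Move-to-front: emit each symbol's current position in the list, then
            # move that symbol to the front. Recently seen symbols get small
            # indices, which lowers the entropy of correlated (sorted) data.
            table = list(alphabet)
            out = []
            for s in symbols:
                i = table.index(s)
                out.append(i)
                table.insert(0, table.pop(i))
            return out

        def mtf_decode(indices, alphabet):
            table = list(alphabet)
            out = []
            for i in indices:
                s = table[i]
                out.append(s)
                table.insert(0, table.pop(i))
            return out

        # Round trip on byte values:
        # data = [3, 3, 3, 120, 120, 3]
        # assert mtf_decode(mtf_encode(data, range(256)), range(256)) == data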

  19. Introducing djatoka: a reuse friendly, open source JPEG 2000 image server

    Energy Technology Data Exchange (ETDEWEB)

    Chute, Ryan M [Los Alamos National Laboratory]; Van De Sompel, Herbert [Los Alamos National Laboratory]

    2008-01-01

    The ISO-standardized JPEG 2000 image format has started to attract significant attention. Support for the format is emerging in major consumer applications, and the cultural heritage community seriously considers it a viable format for digital preservation. So far, only commercial image servers with JPEG 2000 support have been available. They come with significant license fees and typically provide the customers with limited extensibility capabilities. Here, we introduce djatoka, an open source JPEG 2000 image server with an attractive basic feature set, and extensibility under control of the community of implementers. We describe djatoka, and point at demonstrations that feature digitized images of marvelous historical manuscripts from the collections of the British Library and the University of Ghent. We also call upon the community to engage in further development of djatoka.

  20. Novel lossless FMRI image compression based on motion compensation and customized entropy coding.

    Science.gov (United States)

    Sanchez, Victor; Nasiopoulos, Panos; Abugharbieh, Rafeef

    2009-07-01

    We recently proposed a method for lossless compression of 4-D medical images based on the advanced video coding standard (H.264/AVC). In this paper, we present two major contributions that enhance our previous work for compression of functional MRI (fMRI) data: 1) a new multiframe motion compensation process that employs 4-D search, variable-size block matching, and bidirectional prediction; and 2) a new context-based adaptive binary arithmetic coder designed for lossless compression of the residual and motion vector data. We validate our method on real fMRI sequences of various resolutions and compare the performance to two state-of-the-art methods: 4D-JPEG2000 and H.264/AVC. Quantitative results demonstrate that our proposed technique significantly outperforms current state of the art with an average compression ratio improvement of 13%.

  1. Illumination-tolerant face verification of low-bit-rate JPEG2000 wavelet images with advanced correlation filters for handheld devices

    Science.gov (United States)

    Wijaya, Surya Li; Savvides, Marios; Vijaya Kumar, B. V. K.

    2005-02-01

    Face recognition on mobile devices, such as personal digital assistants and cell phones, is a big challenge owing to the limited computational resources available to run verifications on the devices themselves. One approach is to transmit the captured face images by use of the cell-phone connection and to run the verification on a remote station. However, owing to limitations in communication bandwidth, it may be necessary to transmit a compressed version of the image. We propose using the image compression standard JPEG2000, which is a wavelet-based compression engine used to compress the face images to low bit rates suitable for transmission over low-bandwidth communication channels. At the receiver end, the face images are reconstructed with a JPEG2000 decoder and are fed into the verification engine. We explore how advanced correlation filters, such as the minimum average correlation energy filter [Appl. Opt. 26, 3633 (1987)] and its variants, perform by using face images captured under different illumination conditions and encoded with different bit rates under the JPEG2000 wavelet-encoding standard. We evaluate the performance of these filters by using illumination variations from the Carnegie Mellon University's Pose, Illumination, and Expression (PIE) face database. We also demonstrate the tolerance of these filters to noisy versions of images with illumination variations.

  2. Switching theory-based steganographic system for JPEG images

    Science.gov (United States)

    Cherukuri, Ravindranath C.; Agaian, Sos S.

    2007-04-01

    Cellular communications constitute a significant portion of the global telecommunications market. Therefore, the need for secure communication over a mobile platform has increased exponentially. Steganography is the art of hiding critical data in an innocuous signal, which answers this need. JPEG is one of the most commonly used formats for storing and transmitting images on the web. In addition, the pictures captured using mobile cameras are mostly in JPEG format. In this article, we introduce a switching theory based steganographic system for JPEG images which is applicable to mobile and computer platforms. The proposed algorithm uses the fact that the energy distribution among the quantized AC coefficients varies from block to block and coefficient to coefficient. Existing approaches are effective with a part of these coefficients, but when employed over all the coefficients they show their ineffectiveness. Therefore, we propose an approach that treats each set of AC coefficients within a different framework, thus enhancing the performance of the approach. The proposed system offers high capacity and embedding efficiency simultaneously while withstanding simple statistical attacks. In addition, the embedded information can be retrieved without prior knowledge of the cover image. Based on simulation results, the proposed method demonstrates an improved embedding capacity over existing algorithms while maintaining high embedding efficiency and preserving the statistics of the JPEG image after hiding information.

  3. An efficient multiple exposure image fusion in JPEG domain

    Science.gov (United States)

    Hebbalaguppe, Ramya; Kakarala, Ramakrishna

    2012-01-01

    In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings such as ISO sensitivity, exposure time, and aperture for low-light image capture results in noise amplification, motion blur, and reduction of depth of field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of the shorter-exposed images, image fusion, artifact removal, and saturation detection. The algorithm needs no more memory than a single JPEG macroblock to be kept at a time, making it feasible to implement as part of a digital camera's hardware image processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience that is available for JPEG.
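
    The sigmoidal boosting and fusion steps can be pictured with a toy pixel-domain sketch like the one below. The real method operates on JPEG DCT blocks in a single pass, so this is only a conceptual illustration, and the gain, midpoint, and weights are arbitrary assumptions.

        import numpy as np

        def sigmoid_boost(img, gain=8.0, midpoint=0.35):
            # Brighten a normalized short-exposure image with a sigmoid curve so
            # its dark regions line up with the longer exposures before fusion.
            x = img.astype(np.float64) / 255.0
            return 255.0 / (1.0 + np.exp(-gain * (x - midpoint)))

        def fuse_exposures(images, weights):
            # Simple weighted average of already-aligned exposures (values 0..255).
            acc = np.zeros_like(images[0], dtype=np.float64)
            for img, w in zip(images, weights):
                acc += w * np.asarray(img, dtype=np.float64)
            return np.clip(acc / float(sum(weights)), 0, 255).astype(np.uint8)

        # Example: boost the short exposure, then fuse it with a longer one.
        # fused = fuse_exposures([sigmoid_boost(short_img), long_img], weights=[0.4, 0.6])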

  4. The quest for 'diagnostically lossless' medical image compression: a comparative study of objective quality metrics for compressed medical images

    Science.gov (United States)

    Kowalik-Urbaniak, Ilona; Brunet, Dominique; Wang, Jiheng; Koff, David; Smolarski-Koff, Nadine; Vrscay, Edward R.; Wallace, Bill; Wang, Zhou

    2014-03-01

    Our study, involving a collaboration with radiologists (DK, NSK) as well as a leading international developer of medical imaging software (AGFA), is primarily concerned with improved methods of assessing the diagnostic quality of compressed medical images and the investigation of compression artifacts resulting from JPEG and JPEG2000. In this work, we compare the performances of the Structural Similarity quality measure (SSIM), MSE/PSNR, compression ratio CR, and JPEG quality factor Q, based on experimental data collected in two experiments involving radiologists. An ROC and Kolmogorov-Smirnov analysis indicates that compression ratio is not always a good indicator of visual quality. Moreover, SSIM demonstrates the best performance, i.e., it provides the closest match to the radiologists' assessments. We also show that a weighted Youden index and curve-fitting method can provide SSIM and MSE thresholds for acceptable compression ratios.
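
    One way to turn such observer data into a "diagnostically acceptable" cut-off is the weighted Youden index mentioned above. The sketch below uses the common definition Jw = 2*(w*sensitivity + (1-w)*specificity) - 1 to pick an SSIM threshold from labelled examples; the weighting and the curve-fitting refinement used in the study may differ, and all names are illustrative.

        import numpy as np

        def weighted_youden_threshold(scores, labels, w=0.5):
            # scores: objective quality values (e.g., SSIM) for compressed images.
            # labels: 1 if radiologists judged the image diagnostically acceptable, else 0.
            # Returns the score threshold maximizing Jw = 2*(w*sens + (1-w)*spec) - 1.
            scores = np.asarray(scores, dtype=float)
            labels = np.asarray(labels, dtype=int)
            best_t, best_j = None, -np.inf
            for t in np.unique(scores):
                pred = scores >= t
                sens = (pred & (labels == 1)).sum() / max((labels == 1).sum(), 1)
                spec = (~pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
                j = 2 * (w * sens + (1 - w) * spec) - 1
                if j > best_j:
                    best_t, best_j = t, j
            return best_t, best_j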

  5. A Survey of Image Compression Techniques and their Performance in Noisy Environments

    National Research Council Canada - National Science Library

    Marvel, Lisa

    1997-01-01

    ... techniques - namely, fractal, wavelet, DPCM, and the most recent compression standard for still imagery, JPEG version 6. Methods for minimizing the effects of the noisy channel on algorithm performance are also considered.

  6. Image and video compression for multimedia engineering fundamentals, algorithms, and standards

    CERN Document Server

    Shi, Yun Q

    2008-01-01

    Part I: Fundamentals (Introduction; Quantization; Differential Coding; Transform Coding; Variable-Length Coding: Information Theory Results (II); Run-Length and Dictionary Coding: Information Theory Results (III)). Part II: Still Image Compression (Still Image Coding: Standard JPEG; Wavelet Transform for Image Coding: JPEG2000; Nonstandard Still Image Coding). Part III: Motion Estimation and Compensation (Motion Analysis and Motion Compensation; Block Matching; Pel-Recursive Technique; Optical Flow; Further Discussion and Summary on 2-D Motion Estimation). Part IV: Video Compression (Fundam...)

  7. Performance evaluation of emerging JPEGXR compression standard for medical images

    International Nuclear Information System (INIS)

    Basit, M.A.

    2012-01-01

    Medical images require lossless compression, as a small error due to lossy compression may be considered a diagnostic error. JPEG XR is the latest image compression standard designed for a variety of applications and supports both lossy and lossless modes. This paper provides an in-depth performance evaluation of the latest JPEG XR against existing image coding standards for medical images using lossless compression. Various medical images are used for evaluation, and ten images of each organ are tested. Performance of JPEG XR is compared with JPEG2000 and JPEG-LS using mean square error, peak signal-to-noise ratio, mean absolute error, and the structural similarity index. JPEG XR shows improvements of 20.73 dB and 5.98 dB over JPEG-LS and JPEG2000, respectively, for the various test images used in the experimentation. (author)

  8. USING H.264/AVC-INTRA FOR DCT BASED SEGMENTATION DRIVEN COMPOUND IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    S. Ebenezer Juliet

    2011-08-01

    Full Text Available This paper presents a one-pass block classification algorithm for efficient coding of compound images, which consist of multimedia elements like text, graphics, and natural images. The objective is to minimize the loss of visual quality of text during compression by separating text information, which needs higher spatial resolution than the pictures and background. It segments computer screen images into text/graphics and picture/background classes based on the DCT energy in each 4x4 block, and then compresses both text/graphics pixels and picture/background blocks by H.264/AVC with a variable quantization parameter. Experimental results show that the single H.264/AVC-INTRA coder with variable quantization outperforms single coders such as JPEG and JPEG-2000 for compound images. The proposed method also improves the PSNR value significantly over standard JPEG and JPEG-2000 while keeping competitive compression ratios.

  9. Fingerprint Compression Based on Sparse Representation.

    Science.gov (United States)

    Shao, Guangqi; Wu, Yanping; A, Yong; Liu, Xiao; Guo, Tiande

    2014-02-01

    A new fingerprint compression algorithm based on sparse representation is introduced. Obtaining an overcomplete dictionary from a set of fingerprint patches allows us to represent them as a sparse linear combination of dictionary atoms. In the algorithm, we first construct a dictionary for predefined fingerprint image patches. For a new fingerprint image, we represent its patches according to the dictionary by computing an l0-minimization and then quantize and encode the representation. In this paper, we consider the effect of various factors on compression results. Three groups of fingerprint images are tested. The experiments demonstrate that our algorithm is efficient compared with several competing compression techniques (JPEG, JPEG 2000, and WSQ), especially at high compression ratios. The experiments also illustrate that the proposed algorithm is robust to extract minutiae.
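
    The patch-coding step can be sketched with scikit-learn's Orthogonal Matching Pursuit as a stand-in for the l0-minimization mentioned above. The dictionary is assumed to be given (for example, learned offline from fingerprint patches), and the quantization and entropy coding of the coefficients are omitted; all names are illustrative.

        import numpy as np
        from sklearn.linear_model import orthogonal_mp

        def encode_patches(patches, dictionary, n_nonzero=8):
            # patches:    (n_patches, patch_dim) fingerprint patches as rows.
            # dictionary: (patch_dim, n_atoms) overcomplete dictionary, columns = atoms.
            # OMP greedily approximates the l0-constrained sparse code of each patch.
            codes = orthogonal_mp(dictionary, patches.T, n_nonzero_coefs=n_nonzero)
            return codes.T                      # (n_patches, n_atoms) sparse coefficients

        def decode_patches(codes, dictionary):
            # Reconstruct patches as sparse linear combinations of dictionary atoms.
            return codes @ dictionary.T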

  10. Fast Bayesian JPEG Decompression and Denoising With Tight Frame Priors

    Czech Academy of Sciences Publication Activity Database

    Šorel, Michal; Bartoš, Michal

    2017-01-01

    Roč. 26, č. 1 (2017), s. 490-501 ISSN 1057-7149 R&D Projects: GA ČR(CZ) GA16-13830S Institutional support: RVO:67985556 Keywords : image processing * image restoration * JPEG Subject RIV: JD - Computer Applications, Robotics OBOR OECD: Computer sciences, information science, bioinformathics (hardware development to be 2.2, social aspect to be 5.8) Impact factor: 4.828, year: 2016 http://library.utia.cas.cz/separaty/2017/ZOI/sorel-0471741.pdf

  11. The effect of wavelet and discrete cosine transform compression of digital radiographs on the detection of subtle proximal caries. ROC analysis.

    Science.gov (United States)

    Schulze, R K W; Richter, A; d'Hoedt, B

    2008-01-01

    The study compared diagnostic performances of 2 different image compression methods: JPEG (discrete cosine transform; Joint Photographic Experts Group compression standard) versus JPEG2000 (discrete wavelet transform), both at a compression ratio of 12:1, from the original uncompressed TIFF radiograph with respect to the detection of non-cavitated carious lesions. Therefore, 100 approximal surfaces of 50 tooth pairs were evaluated on the radiographs by 10 experienced observers using a 5-point confidence scale. Observations were carried out on a standardized viewing monitor under subdued light conditions. The proportion of diseased surfaces was balanced to approximately 50% to avoid bias. True caries status was assessed by serial ground sectioning and microscopic evaluation. A non-parametric receiver operating characteristic analysis revealed non-significant differences between the 3 image modalities, as computed from the critical ratios z not exceeding +/-2 (JPEG/JPEG2000, z = -0.0339; TIFF/JPEG2000, z = 0.251; TIFF/JPEG, z = 0.914). The mean area beneath the curve was highest for TIFF (0.604) followed by JPEG2000 (0.593) and JPEG (0.591). Both intra-rater and inter-rater agreement were significantly higher for TIFF (kappa(intra) = 0.52; kappa(inter) = 0.40) and JPEG2000 images (kappa(intra) = 0.49; kappa(inter) = 0.38) than for JPEG images (kappa(intra) = 0.33; kappa(inter) = 0.35). Our results indicate that image compression with typical compression algorithms at rates yielding storage sizes of around 50 kB is sufficient even for the challenging task of radiographic detection of non-cavitated carious approximal lesions. Copyright 2008 S. Karger AG, Basel.

  12. Parity Bit Replenishment for JPEG 2000-Based Video Streaming

    Directory of Open Access Journals (Sweden)

    François-Olivier Devaux

    2009-01-01

    Full Text Available This paper envisions coding with side information to design a highly scalable video codec. To achieve fine-grained scalability in terms of resolution, quality, and spatial access as well as temporal access to individual frames, the JPEG 2000 coding algorithm has been considered as the reference algorithm to encode INTRA information, and coding with side information has been envisioned to refresh the blocks that change between two consecutive images of a video sequence. One advantage of coding with side information compared to conventional closed-loop hybrid video coding schemes lies in the fact that parity bits are designed to correct stochastic errors and not to encode deterministic prediction errors. This enables the codec to support some desynchronization between the encoder and the decoder, which is particularly helpful to adapt pre-encoded content on the fly to fluctuating network resources and/or user preferences in terms of regions of interest. Regarding the coding scheme itself, to preserve both quality scalability and compliance with the JPEG 2000 wavelet representation, particular attention has been devoted to the definition of a practical coding framework able to exploit not only the temporal but also the spatial correlation among wavelet subband coefficients, while computing the parity bits on subsets of wavelet bit-planes. Simulations have shown that, compared to pure INTRA-based conditional replenishment solutions, the addition of the parity bits option decreases the transmission cost in terms of bandwidth, while preserving access flexibility.

  13. Statistically lossless image compression for CR and DR

    Science.gov (United States)

    Young, Susan S.; Whiting, Bruce R.; Foos, David H.

    1999-05-01

    This paper proposes an image compression algorithm that can improve the compression efficiency for digital projection radiographs over current lossless JPEG by utilizing a quantization companding function and a new lossless image compression standard called JPEG-LS. The companding and compression processes can also be augmented by a pre-processing step that first segments the foreground portions of the image and then substitutes the foreground pixel values with a uniform code value. The quantization companding function approach is based on a theory that relates the onset of distortion to changes in the second-order statistics in an image. By choosing an appropriate companding function, the properties of the second-order statistics can be retained to within an insignificant error, and the companded image can then be losslessly compressed using JPEG-LS; we call the reconstructed image statistically lossless. The approach offers a theoretical basis supporting the integrity of the compressed-reconstructed data relative to the original image, while providing a modest level of compression efficiency. This intermediate level of compression could help to increase the comfort level of radiologists who do not currently utilize lossy compression and may also have benefits from a medico-legal perspective.
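
    A minimal sketch of the companding-plus-lossless-coding idea, assuming a simple square-root companding function as the companding form; JPEG-LS encoders vary by platform, so PNG is used here purely as a stand-in lossless codec.

    import io
    import numpy as np
    from PIL import Image

    def compand(raw, max_in=4095, max_out=255):
        """Square-root companding: finer steps at low signal, coarser at high signal (assumed form)."""
        return np.round(np.sqrt(raw.astype(np.float64) / max_in) * max_out).astype(np.uint8)

    def expand(companded, max_in=4095, max_out=255):
        """Approximate inverse of the companding function."""
        return np.round((companded.astype(np.float64) / max_out) ** 2 * max_in).astype(np.uint16)

    # Illustrative 12-bit projection radiograph (random data stands in for a real image).
    raw = np.random.randint(0, 4096, size=(512, 512), dtype=np.uint16)
    companded = compand(raw)

    # Losslessly compress the companded image (PNG here; JPEG-LS in the paper's pipeline).
    buf = io.BytesIO()
    Image.fromarray(companded).save(buf, format="PNG")
    print("compressed size:", buf.tell(), "bytes")

    # Reconstruction is "statistically lossless": the companding error is bounded, not zero.
    recon = expand(companded)
    print("max absolute error:", int(np.abs(recon.astype(int) - raw.astype(int)).max()))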

  14. New Trends in Multimedia Standards: MPEG4 and JPEG2000

    Directory of Open Access Journals (Sweden)

    Jie Liang

    1999-01-01

    Full Text Available The dramatic increase in both computational power, brought on by the introduction of increasingly powerful chips, and communications bandwidth, unleashed by the introduction of cable modems and ADSL, lays a solid foundation for the take-off of multimedia applications. Standards have long played a pivotal role in the development of multimedia equipment and contents, owing to the need for wide distribution of multimedia content. MPEG4 and JPEG2000 are two recent multimedia standards under development under the auspices of the International Standards Organization (ISO). These new standards introduce new technology and new features that will become enabling technology for many emerging applications. In this paper, we describe the new trends and new developments that shape these new standards, and illustrate their potential impact on multimedia applications.

  15. SINGLE VERSUS MULTIPLE TRIAL VECTORS IN CLASSICAL DIFFERENTIAL EVOLUTION FOR OPTIMIZING THE QUANTIZATION TABLE IN JPEG BASELINE ALGORITHM

    Directory of Open Access Journals (Sweden)

    B Vinoth Kumar

    2017-07-01

    Full Text Available The quantization table is responsible for the compression/quality trade-off in the baseline Joint Photographic Experts Group (JPEG) algorithm and is therefore viewed as an optimization problem. In the literature, it has been found that Classical Differential Evolution (CDE) is a promising algorithm for generating the optimal quantization table. However, the searching capability of CDE can be limited by the generation of a single trial vector per target in an iteration, which in turn reduces the convergence speed. This paper studies the performance of CDE when multiple trial vectors are employed in a single iteration. An extensive performance analysis has been made between CDE and CDE with multiple trial vectors in terms of the optimization process, accuracy, convergence speed and reliability. The analysis reveals that CDE with multiple trial vectors improves the convergence speed of CDE, and this is confirmed using a statistical hypothesis test (t-test).
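
    A sketch of classical differential evolution extended to generate several trial vectors per target in each generation, in the spirit of the study above; the fitness function below is a hypothetical placeholder standing in for the rate-distortion score that would be obtained by JPEG-encoding a training image with the candidate quantization table.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(qtable):
        # Hypothetical placeholder: in practice, encode a training image with this
        # 8x8 quantization table and score the resulting rate/distortion trade-off.
        return float(np.sum((qtable - 50.0) ** 2))

    def de_multi_trial(pop_size=20, dim=64, n_trials=4, F=0.5, CR=0.9, generations=100):
        pop = rng.uniform(1, 255, size=(pop_size, dim))
        scores = np.array([fitness(ind) for ind in pop])
        for _ in range(generations):
            for i in range(pop_size):
                best_trial, best_score = None, scores[i]
                for _ in range(n_trials):                       # several trial vectors per target
                    r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
                    mutant = pop[r1] + F * (pop[r2] - pop[r3])  # DE/rand/1 mutation
                    cross = rng.random(dim) < CR
                    cross[rng.integers(dim)] = True             # guarantee at least one mutated gene
                    trial = np.clip(np.where(cross, mutant, pop[i]), 1, 255)
                    s = fitness(trial)
                    if s < best_score:                          # keep the best trial for this target
                        best_trial, best_score = trial, s
                if best_trial is not None:
                    pop[i], scores[i] = best_trial, best_score
        return pop[np.argmin(scores)].round().astype(int).reshape(8, 8)

    print(de_multi_trial())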

  16. Compressive strength and failure types of cusp replacing direct resin composite restorations in previously amalgam-filled premolars versus sound teeth

    NARCIS (Netherlands)

    Scholtanus, Johannes Durk; Zaia, John; Oezcan, Mutlu

    2017-01-01

    This study evaluated the fracture resistance of cusp-replacing direct resin composite restorations (DCR) in premolars that had previously been filled with mesial-occlusal-distal (MOD) amalgam restorations, and compared it with that of restorations made on sound dentin and with intact teeth.

  17. ZPEG: a hybrid DPCM-DCT based approach for compression of Z-stack images.

    Science.gov (United States)

    Khire, Sourabh; Cooper, Lee; Park, Yuna; Carter, Alexis; Jayant, Nikil; Saltz, Joel

    2012-01-01

    Modern imaging technology permits obtaining images at varying depths along the thickness, or the Z-axis, of the sample being imaged. A stack of multiple such images is called a Z-stack image. The focus capability offered by Z-stack images is critical for many digital pathology applications. A single Z-stack image may result in several hundred gigabytes of data and needs to be compressed for archival and distribution purposes. Currently, the existing methods for compression of Z-stack images, such as JPEG and JPEG 2000, compress each focal plane independently and do not take advantage of the Z-signal redundancy. It is possible to achieve additional compression efficiency over the existing methods by exploiting the high Z-signal correlation during image compression. In this paper, we propose a novel algorithm for compression of Z-stack images, which we term ZPEG. ZPEG extends the popular discrete-cosine transform (DCT) based image encoder to compress Z-stack images. This is achieved by decorrelating the neighboring layers of the Z-stack image using differential pulse-code modulation (DPCM). PSNR measurements, as well as subjective evaluations by experts, indicate that ZPEG can encode Z-stack images at a higher quality than JPEG, JPEG 2000 and JP3D at compression ratios below 50:1.
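
    A rough sketch of the DPCM-plus-DCT idea: each focal plane is predicted from the previously reconstructed plane and only the residual is JPEG-coded. The offset/clipping of residuals into 8-bit range is a simplification for illustration; the actual ZPEG residual coding is more careful than this.

    import io
    import numpy as np
    from PIL import Image

    def jpeg_roundtrip(plane_u8, quality=90):
        """Encode one 8-bit plane with baseline JPEG and return (bytes_used, decoded_plane)."""
        buf = io.BytesIO()
        Image.fromarray(plane_u8).save(buf, format="JPEG", quality=quality)
        decoded = np.asarray(Image.open(io.BytesIO(buf.getvalue())), dtype=np.uint8)
        return buf.tell(), decoded

    def zstack_dpcm_encode(zstack, quality=90):
        """zstack: (depth, H, W) uint8. First plane coded directly, later planes as residuals."""
        total_bytes, recon = 0, []
        for z, plane in enumerate(zstack):
            if z == 0:
                n, dec = jpeg_roundtrip(plane, quality)
                recon.append(dec.astype(np.int16))
            else:
                residual = plane.astype(np.int16) - recon[-1]                # DPCM along Z
                shifted = np.clip(residual + 128, 0, 255).astype(np.uint8)   # crude mapping to 8 bits
                n, dec = jpeg_roundtrip(shifted, quality)
                recon.append(np.clip(recon[-1] + dec.astype(np.int16) - 128, 0, 255))
            total_bytes += n
        return total_bytes

    stack = np.random.randint(0, 256, size=(5, 256, 256), dtype=np.uint8)    # stands in for a Z-stack
    print("coded size:", zstack_dpcm_encode(stack), "bytes")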

  18. Enhancing Perceived Quality of Compressed Images and Video with Anisotropic Diffusion and Fuzzy Filtering

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Korhonen, Jari; Forchhammer, Søren

    2013-01-01

    Objective and subjective results on JPEG compressed images, as well as MJPEG and H.264/AVC compressed video, indicate that the proposed algorithms employing directional and spatial fuzzy filters achieve better artifact reduction than other methods. In particular, robust improvements have been obtained with H.264/AVC video.

  19. Energy Efficiency of Task Allocation for Embedded JPEG Systems

    Directory of Open Access Journals (Sweden)

    Yang-Hsin Fan

    2014-01-01

    Full Text Available Embedded systems are used everywhere to repeatedly perform a few particular functions. Well-known products include consumer electronics, smart home applications, telematics devices, and so forth. Recently, embedded-system development methodology has been applied to the design of cloud embedded systems, making the applications of embedded systems more diverse. However, the more embedded systems work, the more energy they consume. This study presents hyperrectangle technology (HT) for embedded systems to obtain energy savings. HT adopts a drift effect to construct embedded systems with more hardware circuits than software components, or vice versa. It can quickly construct an embedded system from a set of hardware circuits and software components. Moreover, it is well suited to fast exploration of the energy consumption of various embedded systems. The effects are presented by assessing JPEG benchmarks. Experimental results demonstrate that HT achieves average energy savings of 29.84%, 2.07%, and 68.80% relative to GA, GHO, and Lin, respectively.

  1. JPEG2000-coded image error concealment exploiting convex sets projections.

    Science.gov (United States)

    Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio

    2005-04-01

    Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of error is the most annoying but can be concealed by exploiting the spatial correlation of the signal, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the latter is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged, by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It was observed that uniform LP filtering brought undesired side effects that offset the advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrate the efficiency of the proposed approach.
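
    A compact sketch of the projections-onto-convex-sets loop described above, assuming a known mask of corrupted coefficients in one finest-level detail subband; PyWavelets stands in for the JPEG2000 wavelet transform and a plain uniform filter stands in for the (adaptive) low-pass step.

    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    def pocs_conceal(decoded, corrupt_mask, wavelet="db4", level=2, iterations=20):
        """decoded: damaged decode; corrupt_mask: True where finest diagonal coefficients were lost."""
        ref = pywt.wavedec2(decoded, wavelet, level=level)        # received (partly corrupted) coefficients
        img = decoded.copy()
        for _ in range(iterations):
            img = uniform_filter(img, size=3)                     # projection 1: spatial low-pass
            cur = pywt.wavedec2(img, wavelet, level=level)
            new = [ref[0]]                                        # approximation was received intact
            for lvl in range(1, level + 1):
                h_ref, v_ref, d_ref = ref[lvl]
                h, v, d = cur[lvl]
                if lvl == level:                                  # finest level: only part of the diagonal band was lost
                    d = np.where(corrupt_mask, d, d_ref)
                    new.append((h_ref, v_ref, d))
                else:
                    new.append((h_ref, v_ref, d_ref))             # projection 2: restore intact coefficients
            img = pywt.waverec2(new, wavelet)
        return img

    decoded = np.random.rand(256, 256)                            # stands in for a damaged JPEG2000 decode
    ref = pywt.wavedec2(decoded, "db4", level=2)
    mask = np.zeros(ref[-1][2].shape, dtype=bool)
    mask[10:40, 10:40] = True                                     # pretend this region of the band was lost
    print(pocs_conceal(decoded, mask).shape)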

  2. Compressed domain ECG biometric with two-lead features

    Science.gov (United States)

    Lee, Wan-Jou; Chang, Wen-Whei

    2016-07-01

    This study presents a new method to combine ECG biometrics with data compression within a common JPEG2000 framework. We target the two-lead ECG configuration that is routinely used in long-term heart monitoring. Incorporation of compressed-domain biometric techniques enables faster person identification as it by-passes the full decompression. Experiments on public ECG databases demonstrate the validity of the proposed method for biometric identification with high accuracies on both healthy and diseased subjects.

  3. Analysis of M-JPEG Video Over an ATM Network

    National Research Council Canada - National Science Library

    Kinney, Albert

    2001-01-01

    ... in the development of future naval information systems. This thesis analyzes the impact of compression, delay variance, and channel noise on perceived networked video quality using commercially available off-the-shelf equipment and software...

  4. Detection of Copy-move Image Modification Using JPEG Compression Model

    Czech Academy of Sciences Publication Activity Database

    Novozámský, Adam; Šorel, Michal

    2018-01-01

    Roč. 283, č. 1 (2018), s. 47-57 ISSN 0379-0738 R&D Projects: GA ČR(CZ) GA16-13830S; GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords : Copy-move modification * Forgery * Image tampering * Quantization constraint set Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.989, year: 2016 http://library.utia.cas.cz/separaty/2017/ZOI/novozamsky-0483329.pdf

  5. KRESKA: A compression system for small and very large images

    Science.gov (United States)

    Ohnesorge, Krystyna W.; Sennhauser, Rene

    1995-01-01

    An effective lossless compression system for grayscale images is presented using finite context variable order Markov models. A new method to accurately estimate the probability of the escape symbol is proposed. The choice of the best model order and rules for selecting context pixels are discussed. Two context precision and two symbol precision techniques to handle noisy image data with Markov models are introduced. Results indicate that finite context variable order Markov models lead to effective lossless compression systems for small and very large images. The system achieves higher compression ratios than some of the better known image compression techniques such as lossless JPEG, JBIG, or FELICS.

  6. Storing health data in JPEG: looking at exif area capacity limits.

    Science.gov (United States)

    Hiramatsu, Tatsuo; Nohara, Yasunobu; Nakashima, Naoki

    2013-01-01

    Data storage formats for personal health-monitoring devices such as blood-pressure and body-composition meters vary according to manufacturer and model. In contrast, the data format of images from digital cameras is unified in the JPEG format with an Exif area and is already familiar to many users. We have devised a method that stores health data inside a JPEG file. The health data are stored in the Exif area of the JPEG in HL7 format. There is, however, a capacity limit of 64 KB for the Exif area. The aim of this study is to examine how much health data can actually be stored in the Exif area. We found that, even with combined data from multiple devices, it was possible to store over a month of health data in a JPEG file, and that using multiple JPEG files simply overcomes this limit. We believe that this method will help people handle health data more easily, regardless of the various device models they use.
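
    A small sketch of the idea, assuming the piexif package for Exif manipulation; the HL7-style payload below is a made-up placeholder, and the 64 KB check reflects the Exif area limit discussed above rather than any check performed by the library itself.

    import piexif
    from PIL import Image

    EXIF_AREA_LIMIT = 64 * 1024          # the Exif (APP1) area limit discussed in the paper

    def embed_health_data(jpeg_path, payload: bytes):
        """Store a health-data payload in the Exif UserComment tag of an existing JPEG."""
        exif_dict = piexif.load(jpeg_path)
        exif_dict["Exif"][piexif.ExifIFD.UserComment] = b"ASCII\x00\x00\x00" + payload
        exif_bytes = piexif.dump(exif_dict)
        if len(exif_bytes) > EXIF_AREA_LIMIT:
            raise ValueError("payload exceeds the 64 KB Exif area; split across multiple JPEG files")
        piexif.insert(exif_bytes, jpeg_path)

    # Hypothetical month of blood-pressure readings serialized as an HL7-like text block.
    readings = "\n".join(f"OBX|{i}|NM|BP^systolic||{120 + i % 10}|mmHg" for i in range(30))
    Image.new("RGB", (8, 8)).save("health.jpg")      # stand-in photo
    embed_health_data("health.jpg", readings.encode("ascii"))
    print("stored", len(readings), "bytes of health data in the Exif area")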

  7. Fast and accurate face recognition based on image compression

    Science.gov (United States)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Typical reports in the face recognition community concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) perform well but run slowly due to their high computation demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, which is termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed with the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time needed to compress the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
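
    A minimal sketch of the compression-based matching idea using Pillow's JPEG encoder; the side-by-side mixing and the exact composite-ratio formula below are assumptions that follow the spirit of the description (compress probe, gallery, and a mixed image, then compare), not the paper's exact definition.

    import io
    import numpy as np
    from PIL import Image

    def jpeg_size(arr, quality=75):
        buf = io.BytesIO()
        Image.fromarray(arr).save(buf, format="JPEG", quality=quality)
        return buf.tell()

    def composite_compression_ratio(probe, gallery):
        """Relate the sizes of probe, gallery, and mixed compressed images (assumed CCR form)."""
        mixed = np.hstack([probe, gallery])               # simple side-by-side mixing
        s_p, s_g, s_m = jpeg_size(probe), jpeg_size(gallery), jpeg_size(mixed)
        return (s_p + s_g) / s_m

    probe = np.random.randint(0, 256, (64, 64), dtype=np.uint8)       # stands in for a probe face
    gallery = [probe.copy(), np.random.randint(0, 256, (64, 64), dtype=np.uint8)]
    scores = [composite_compression_ratio(probe, g) for g in gallery]
    print("CCR scores:", scores, "best match:", int(np.argmax(scores)))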

  8. Virtually Lossless Compression of Astrophysical Images

    Directory of Open Access Journals (Sweden)

    Stefano Baronti

    2005-09-01

    Full Text Available We describe an image compression strategy potentially capable of preserving the scientific quality of astrophysical data while simultaneously allowing a consistent bandwidth reduction to be achieved. Unlike strictly lossless techniques, by which only moderate compression ratios are attainable, and conventional lossy techniques, in which the mean square error of the decoded data is globally controlled by users, near-lossless methods are capable of locally constraining the maximum absolute error, based on the user's requirements. An advanced lossless/near-lossless differential pulse code modulation (DPCM) scheme, recently introduced by the authors and relying on a causal spatial prediction, is adjusted to the specific characteristics of astrophysical image data (high radiometric resolution, generally low noise, etc.). The background noise is preliminarily estimated to drive the quantization stage for high quality, which is the primary concern in most astrophysical applications. Extensive experimental results on lossless, near-lossless, and lossy compression of astrophysical images acquired by the Hubble Space Telescope show the advantages of the proposed method compared to standard techniques like JPEG-LS and JPEG2000. Eventually, the rationale of virtually lossless compression, that is, a noise-adjusted lossless/near-lossless compression, is highlighted and found to be in accordance with concepts well established in the astronomers' community.

  9. Low bit rates image compression via adaptive block downsampling and super resolution

    Science.gov (United States)

    Chen, Honggang; He, Xiaohai; Ma, Minglang; Qing, Linbo; Teng, Qizhi

    2016-01-01

    A low bit rate image compression framework based on adaptive block downsampling and super resolution (SR) is presented. At the encoder side, the downsampling mode and quantization mode of each 16×16 macroblock are determined adaptively using a rate-distortion optimization method; the downsampled macroblocks are then compressed by standard JPEG. At the decoder side, a sparse representation-based SR algorithm is applied to recover full-resolution macroblocks from the decoded blocks. The experimental results show that the proposed framework outperforms standard JPEG and the state-of-the-art downsampling-based compression methods in terms of both subjective and objective comparisons. Specifically, the peak signal-to-noise ratio gain of the proposed framework over JPEG reaches 2 to 4 dB at low bit rates, and the critical bit rate relative to JPEG is raised to about 2.3 bits per pixel. Moreover, the proposed framework can be extended to other block-based compression schemes.
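
    An illustrative sketch of the framework's outer loop at fixed settings: downsample, JPEG-encode, then upsample at the decoder. Bicubic interpolation stands in for the sparse-representation super-resolution step, and the per-macroblock mode decision is omitted.

    import io
    import numpy as np
    from PIL import Image

    def encode_low_bitrate(img: Image.Image, scale=2, quality=30):
        """Downsample before JPEG encoding so the few available bits are spent on a smaller image."""
        small = img.resize((img.width // scale, img.height // scale), Image.BICUBIC)
        buf = io.BytesIO()
        small.save(buf, format="JPEG", quality=quality)
        return buf.getvalue()

    def decode_with_upscaling(data: bytes, size):
        """Decode and upsample; a learned super-resolution model would replace bicubic here."""
        small = Image.open(io.BytesIO(data))
        return small.resize(size, Image.BICUBIC)

    original = Image.fromarray(np.random.randint(0, 256, (256, 256), dtype=np.uint8))
    bitstream = encode_low_bitrate(original)
    restored = decode_with_upscaling(bitstream, original.size)
    bpp = 8 * len(bitstream) / (original.width * original.height)
    print(f"{bpp:.2f} bits per pixel, restored size {restored.size}")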

  10. Development and assessment of compression technique for medical images using neural network. I. Assessment of lossless compression

    International Nuclear Information System (INIS)

    Fukatsu, Hiroshi

    2007-01-01

    This paper describes an assessment of the lossless compression performance of a new, efficient compression technique (the JIS system) using a neural network, which the author and co-workers have recently developed. First, the theory for encoding and decoding the data is explained. The assessment is performed on 55 images each of chest digital roentgenography, digital mammography, 64-row multi-slice CT, 1.5 Tesla MRI, positron emission tomography (PET) and digital subtraction angiography, which are losslessly compressed by the JIS system to determine the compression rate and loss. For comparison, the same data are also compressed with lossless JPEG. The personal computer (PC) used is an Apple MacBook Pro configured with Boot Camp for a Windows environment. The JIS system is found to be more than 4 times as efficient as the usual compression methods, reducing the file volume to only 1/11 on average, and is thus an important response to the growing volume of medical imaging data. (R.T.)

  11. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Benoit Macq

    2008-07-01

    Full Text Available Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANETs. The proposed packet-based scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.

  12. The effect of image content on detail preservation and file size reduction in lossy compression.

    Science.gov (United States)

    Fidler, A; Skaleric, U; Likar, B

    2007-10-01

    To demonstrate the effect of image content on image detail preservation and file size reduction. The first set, containing 16 in vitro images with variable projection geometry, exposure time, bone level and number of teeth, was compressed with three compression modes: JPEG quality factor (JPQF), JPEG2000 quality factor (J2QF) and JPEG2000 compression ratio (J2CR). Image detail degradation was evaluated by local mean square error (MSE) on a standardized region of interest (ROI), containing bone. The second set, containing 105 clinical bitewings, was compressed with the same compression modes at 3 quality factors/compression ratios and local MSEs were calculated on two ROIs, containing bone and crown. For the first image set, nearly constant MSE was found for the JPQF and J2QF compression modes, while file size depended on projection geometry, exposure time, bone level and the number of teeth. In contrast, file size reduction was nearly constant for the J2CR compression mode, while MSE depended on the abovementioned factors. Similarly, for the second image set, nearly constant MSE and variation of file size reduction were found for JPQF and J2QF but not for the J2CR compression mode. All of these results were consistent for all three quality factors/compression ratios. Constant image detail preservation, crucial for diagnostic accuracy in radiology, can only be assured in QF compression mode in which the file size of the compressed image depends on the original image content. CR compression mode assures constant file size reduction, but image detail preservation depends on image content.
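
    A small sketch of the measurement used in the study: compress with a fixed JPEG quality factor, then compute the mean square error on a fixed region of interest. The ROI coordinates and quality factors below are arbitrary placeholders, and only the quality-factor mode is shown.

    import io
    import numpy as np
    from PIL import Image

    def roi_mse_after_jpeg(img_u8, roi, quality):
        """roi = (row0, row1, col0, col1); returns (local MSE on that region, compressed size)."""
        buf = io.BytesIO()
        Image.fromarray(img_u8).save(buf, format="JPEG", quality=quality)
        decoded = np.asarray(Image.open(io.BytesIO(buf.getvalue())), dtype=np.float64)
        r0, r1, c0, c1 = roi
        diff = decoded[r0:r1, c0:c1] - img_u8[r0:r1, c0:c1].astype(np.float64)
        return float(np.mean(diff ** 2)), buf.tell()

    bitewing = np.random.randint(0, 256, (400, 600), dtype=np.uint8)   # stands in for a radiograph
    bone_roi = (100, 200, 150, 250)                                     # placeholder ROI containing bone
    for q in (90, 70, 50):
        mse, size = roi_mse_after_jpeg(bitewing, bone_roi, q)
        print(f"QF={q}: local MSE={mse:.1f}, file size={size} bytes")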

  13. Steganalysis of content-adaptive JPEG steganography based on Gauss partial derivative filter bank

    Science.gov (United States)

    Zhang, Yi; Liu, Fenlin; Yang, Chunfang; Luo, Xiangyang; Song, Xiaofeng; Lu, Jicang

    2017-01-01

    A steganalysis feature extraction method based on a Gauss partial derivative filter bank is proposed in this paper to improve the detection performance for content-adaptive JPEG steganography. Considering that the embedding changes of content-adaptive steganographic schemes are performed in the texture and edge regions, the proposed method generates filtered images comprising rich texture and edge information using a Gauss partial derivative filter bank, and histograms of the absolute values of the filtered subimages are extracted as steganalysis features. The Gauss partial derivative filter bank can represent texture and edge information in multiple orientations with less computational load than conventional methods and prevents redundancy across the different filtered images. These two properties are beneficial for the extraction of low-complexity sensitive features. The results of experiments conducted on three selected modern JPEG steganographic schemes (uniform embedding distortion, JPEG universal wavelet relative distortion, and side-informed UNIWARD) indicate that the proposed feature set is superior to the prior-art feature sets (discrete cosine transform residual, phase-aware rich model, and Gabor filter residual).
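
    A brief sketch of the feature-extraction idea: filter the decompressed image with first-order Gaussian partial derivative filters in several orientations, then histogram the absolute responses. The sigma, derivative orders, and bin count are illustrative choices, not the paper's filter bank design.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gauss_derivative_features(img, sigma=1.0, bins=16, clip=8.0):
        """Histogram features from |Gaussian partial derivative| responses along x, y and both."""
        orders = [(0, 1), (1, 0), (1, 1)]           # d/dx, d/dy and the mixed derivative
        feats = []
        for order in orders:
            response = np.abs(gaussian_filter(img.astype(np.float64), sigma=sigma, order=order))
            hist, _ = np.histogram(np.clip(response, 0, clip), bins=bins, range=(0, clip))
            feats.append(hist / hist.sum())          # normalized histogram of absolute responses
        return np.concatenate(feats)

    stego_candidate = np.random.rand(256, 256)       # stands in for a decompressed JPEG image
    print(gauss_derivative_features(stego_candidate).shape)   # feature vector fed to the classifier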

  14. Optimal erasure protection for scalably compressed video streams with limited retransmission.

    Science.gov (United States)

    Taubman, David; Thie, Johnson

    2005-08-01

    This paper shows how the priority encoding transmission (PET) framework may be leveraged to exploit both unequal error protection and limited retransmission for RD-optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. Conversely, the PET framework has not been harnessed by the substantial body of previous work on RD optimized hybrid forward error correction/automatic repeat request schemes. We limit our attention to sources which can be modeled as independently compressed frames (e.g., video frames), where each element in the scalable representation of each frame can be transmitted in one or both of two transmission slots. An optimization algorithm determines the level of protection which should be assigned to each element in each slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements which are being transmitted for the first time with those which are being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show how the PET framework allows for a decoupled optimization algorithm with only modest complexity. Experimental results obtained with Motion JPEG2000 compressed video demonstrate that substantial performance benefits can be obtained using the proposed framework.

  15. Evaluation of mammogram compression efficiency

    International Nuclear Information System (INIS)

    Przelaskowski, A.; Surowski, P.; Kukula, A.

    2005-01-01

    Lossy image coding significantly improves performance over lossless methods, but a reliable control of diagnostic accuracy regarding compressed images is necessary. The acceptable range of compression ratios must be safe with respect to as many objective criteria as possible. This study evaluates the compression efficiency of digital mammograms in both numerically lossless (reversible) and lossy (irreversible) manner. Effective compression methods and concepts were examined to increase archiving and telediagnosis performance. Lossless compression as a primary applicable tool for medical applications was verified on a set 131 mammograms. Moreover, nine radiologists participated in the evaluation of lossy compression of mammograms. Subjective rating of diagnostically important features brought a set of mean rates given for each test image. The lesion detection test resulted in binary decision data analyzed statistically. The radiologists rated and interpreted malignant and benign lesions, representative pathology symptoms, and other structures susceptible to compression distortions contained in 22 original and 62 reconstructed mammograms. Test mammograms were collected in two radiology centers for three years and then selected according to diagnostic content suitable for an evaluation of compression effects. Lossless compression efficiency of the tested coders varied, but CALIC, JPEG-LS, and SPIHT performed the best. The evaluation of lossy compression effects affecting detection ability was based on ROC-like analysis. Assuming a two-sided significance level of p=0.05, the null hypothesis that lower bit rate reconstructions are as useful for diagnosis as the originals was false in sensitivity tests with 0.04 bpp mammograms. However, verification of the same hypothesis with 0.1 bpp reconstructions suggested their acceptance. Moreover, the 1 bpp reconstructions were rated very similarly to the original mammograms in the diagnostic quality evaluation test, but the

  16. Diagnosis of glaucoma using telemedicine--the effect of compression on the evaluation of optic nerve head cup-disc ratio.

    Science.gov (United States)

    Beauregard, D; Lewis, J; Piccolo, M; Bedell, H

    2000-01-01

    A photograph of the optic nerve head requires a lot of disk space (over 1 MByte) for storage and may require substantial bandwidth and time for transmission to a remote practitioner for a second opinion. To test whether compression degrades the image quality of the images, 302 slides were digitized at an optical resolution of 2400 pixels/inch (945 pixels/cm) and 30 bit/pixel. The images were saved both in non-compressed TIFF format and in compressed JPEG (compression ratio of 60) format. A blinded observer measured the optic nerve head cup-disc ratio for all three groups: the original slides, uncompressed TIFF and compressed JPEG images. The results showed that digital images were less accurate than slides. However, compression, even up to a ratio of 40, did not make matters worse.

  17. Perceptual Image Compression in Telemedicine

    Science.gov (United States)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth" will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these two techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications

  18. Data-Driven Soft Decoding of Compressed Images in Dual Transform-Pixel Domain.

    Science.gov (United States)

    Xianming Liu; Xiaolin Wu; Jiantao Zhou; Debin Zhao

    2016-04-01

    In the large body of research literature on image restoration, very few papers have been concerned with compression-induced degradations, although in practice the most common cause of image degradation is compression. This paper presents a novel approach to restoring JPEG-compressed images. The main innovation lies in exploiting residual redundancies of JPEG code streams and sparsity properties of latent images. The restoration is a sparse coding process carried out jointly in the DCT and pixel domains. The strength of the proposed approach is that it directly restores the DCT coefficients of the latent image, preventing quantization errors from spreading into the pixel domain, while at the same time using online machine-learned local spatial features to regularize the solution of the underlying inverse problem. Experimental results are encouraging and show the promise of the new approach in significantly improving the quality of DCT-coded images.

  19. Hierarchical prediction and context adaptive coding for lossless color image compression.

    Science.gov (United States)

    Kim, Seyun; Cho, Nam Ik

    2014-01-01

    This paper presents a new lossless color image compression algorithm, based on the hierarchical prediction and context-adaptive arithmetic coding. For the lossless compression of an RGB image, it is first decorrelated by a reversible color transform and then Y component is encoded by a conventional lossless grayscale image compression method. For encoding the chrominance images, we develop a hierarchical scheme that enables the use of upper, left, and lower pixels for the pixel prediction, whereas the conventional raster scan prediction methods use upper and left pixels. An appropriate context model for the prediction error is also defined and the arithmetic coding is applied to the error signal corresponding to each context. For several sets of images, it is shown that the proposed method further reduces the bit rates compared with JPEG2000 and JPEG-XR.
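
    A minimal sketch of the first step, a reversible (integer) color transform of the kind used for lossless coding; the RCT from JPEG 2000 is used here as a concrete example, while the hierarchical prediction and context-adaptive arithmetic coding of the paper are not reproduced.

    import numpy as np

    def rct_forward(rgb):
        """JPEG 2000 reversible color transform: RGB -> (Y, Cu, Cv), all integer and invertible."""
        r, g, b = (rgb[..., i].astype(np.int32) for i in range(3))
        y = (r + 2 * g + b) >> 2
        cu = b - g
        cv = r - g
        return np.stack([y, cu, cv], axis=-1)

    def rct_inverse(ycc):
        y, cu, cv = (ycc[..., i] for i in range(3))
        g = y - ((cu + cv) >> 2)
        return np.stack([cv + g, g, cu + g], axis=-1).astype(np.uint8)

    rgb = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    assert np.array_equal(rct_inverse(rct_forward(rgb)), rgb)   # the transform is exactly reversible
    print("reversible color transform round trip OK")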

  20. Lossless compression of hyperspectral images using hybrid context prediction.

    Science.gov (United States)

    Liang, Yuan; Li, Jianping; Guo, Ke

    2012-03-26

    In this letter, a new algorithm for lossless compression of hyperspectral images using hybrid context prediction is proposed. Lossless compression algorithms are typically divided into two stages, a decorrelation stage and a coding stage. The decorrelation stage supports both intraband and interband prediction. The intraband (spatial) prediction uses the median prediction model, since the median predictor is fast and efficient. The interband prediction uses hybrid context prediction, which is the combination of a linear prediction (LP) and a context prediction. Finally, the residual image of the hybrid context prediction is coded by arithmetic coding. We compare the proposed lossless compression algorithm with some of the existing algorithms for hyperspectral images, such as 3D-CALIC, M-CALIC, LUT, LAIS-LUT, LUT-NN, DPCM (C-DPCM), and JPEG-LS, and evaluate its performance. Simulation results show that our algorithm achieves high compression ratios with low complexity and computational cost.
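
    A short sketch of the intraband predictor named above, the median (MED) predictor from LOCO-I/JPEG-LS, together with a trivial interband predictor that simply reuses the co-located pixel of the previous band; the latter is a simplified stand-in for the paper's hybrid context prediction, and the context modelling and arithmetic coding stages are omitted.

    import numpy as np

    def med_predict(img):
        """Median edge detector (LOCO-I / JPEG-LS) spatial predictor; returns the residual image."""
        img = img.astype(np.int32)
        a = np.roll(img, 1, axis=1)                       # left neighbour
        b = np.roll(img, 1, axis=0)                       # upper neighbour
        c = np.roll(np.roll(img, 1, axis=0), 1, axis=1)   # upper-left neighbour
        pred = np.where(c >= np.maximum(a, b), np.minimum(a, b),
               np.where(c <= np.minimum(a, b), np.maximum(a, b), a + b - c))
        pred[0, :] = 0                                    # first row/column have no causal neighbours
        pred[:, 0] = 0
        return img - pred

    def interband_predict(band, prev_band):
        """Simplest interband predictor: the co-located pixel of the previous spectral band."""
        return band.astype(np.int32) - prev_band.astype(np.int32)

    cube = np.random.randint(0, 4096, size=(3, 128, 128), dtype=np.uint16)  # stands in for hyperspectral data
    res0 = med_predict(cube[0])                        # first band: spatial (intraband) prediction
    res1 = interband_predict(cube[1], cube[0])         # later bands: interband prediction
    print("residual energies:", int(np.abs(res0).sum()), int(np.abs(res1).sum()))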

  1. Hierarchical oriented predictions for resolution scalable lossless and near-lossless compression of CT and MRI biomedical images.

    Science.gov (United States)

    Taquet, Jonathan; Labit, Claude

    2012-05-01

    We propose a new hierarchical approach to resolution scalable lossless and near-lossless (NLS) compression. It combines the adaptability of DPCM schemes with new hierarchical oriented predictors to provide resolution scalability with better compression performance than the usual hierarchical interpolation predictor or the wavelet transform. Because the proposed hierarchical oriented prediction (HOP) is not really efficient on smooth images, we also introduce new predictors, which are dynamically optimized using a least-square criterion. Lossless compression results, which are obtained on a large-scale medical image database, are more than 4% better on CTs and 9% better on MRIs than resolution scalable JPEG-2000 (J2K) and close to nonscalable CALIC. The HOP algorithm is also well suited for NLS compression, providing an interesting rate-distortion tradeoff compared with JPEG-LS and an equivalent or better PSNR than J2K at high bit rates on noisy (native) medical images.

  2. Compressing Binary Decision Diagrams

    DEFF Research Database (Denmark)

    Hansen, Esben Rune; Satti, Srinivasa Rao; Tiedemann, Peter

    2008-01-01

    The paper introduces a new technique for compressing Binary Decision Diagrams in those cases where random access is not required. Using this technique, compression and decompression can be done in linear time in the size of the BDD, and compression will in many cases reduce the size of the BDD to 1-2 bits per node. Empirical results for our compression technique are presented, including comparisons with previously introduced techniques, showing that the new technique dominates on all tested instances.

  4. Edge-Based Image Compression with Homogeneous Diffusion

    Science.gov (United States)

    Mainberger, Markus; Weickert, Joachim

    It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
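
    A tiny sketch of the decoding step described above: given grey values that are known only along (both sides of) the stored edges, the missing pixels are filled in by iterating toward the steady state of homogeneous diffusion, i.e. by solving the Laplace equation with the edge values as fixed data. The toy image, edge locations, and iteration count are illustrative only.

    import numpy as np

    def laplace_inpaint(known_values, known_mask, iterations=2000):
        """Jacobi iterations of the discrete Laplace equation; known pixels stay fixed."""
        u = np.where(known_mask, known_values, known_values[known_mask].mean()).astype(np.float64)
        for _ in range(iterations):
            avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u = np.where(known_mask, known_values, avg)       # homogeneous diffusion everywhere else
        return u

    # Toy cartoon-like image: grey values are kept only along the single edge and the borders.
    h, w = 64, 64
    truth = np.zeros((h, w)); truth[:, : w // 2] = 50.0; truth[:, w // 2:] = 200.0
    mask = np.zeros((h, w), dtype=bool)
    mask[:, w // 2 - 1 : w // 2 + 1] = True                    # pixels on both sides of the edge
    mask[:, 0] = mask[:, -1] = True                            # image border values are also stored
    recon = laplace_inpaint(truth, mask)
    print("max reconstruction error:", float(np.abs(recon - truth).max()))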

  5. A New Approach for Fingerprint Image Compression

    Energy Technology Data Exchange (ETDEWEB)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Also, without any compression, transmitting a 10 Mb card over a 9600 baud connection takes 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a better compression ratio than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore, in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme allows compression ratios of 20:1 to be achieved without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we will discuss a new approach for the bit allocation that seems to make more sense where theory is concerned. Then we will talk about some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we will compare the performance of the new encoder to that of the first encoder.

  6. Feature selection, statistical modeling and its applications to universal JPEG steganalyzer

    Energy Technology Data Exchange (ETDEWEB)

    Jalan, Jaikishan [Iowa State Univ., Ames, IA (United States)

    2009-01-01

    Steganalysis deals with identifying the instances of a medium which carry a message for communication while concealing its existence. This research focuses on steganalysis of JPEG images, because of their ubiquitous nature and low bandwidth requirement for storage and transmission. JPEG image steganalysis is generally addressed by representing an image with lower-dimensional features such as statistical properties, and then training a classifier on the feature set to differentiate between an innocent and a stego image. Our approach is two-fold: first, we propose a new feature reduction technique by applying the Mahalanobis distance to rank the features for steganalysis. Many successful steganalysis algorithms use a large number of features relative to the size of the training set and suffer from a "curse of dimensionality": a large number of feature values relative to the training data size. We apply this technique to the state-of-the-art steganalyzer proposed by Tomáš Pevný (54) to understand the feature space complexity and the effectiveness of features for steganalysis. We show that using our approach, reduced-feature steganalyzers can be obtained that perform as well as the original steganalyzer. Based on our experimental observations, we then propose a new modeling technique for steganalysis by applying a Partially Ordered Markov Model (POMM) (23) to JPEG images and using its properties to train a Support Vector Machine. POMM generalizes the concept of local neighborhood directionality by using a partial order underlying the pixel locations. We show that the proposed steganalyzer outperforms a state-of-the-art steganalyzer by testing our approach with many different image databases, having a total of 20000 images. Finally, we provide a software package with a Graphical User Interface that has been developed to make this research accessible to local state forensic departments.

  7. Development of ultrasound/endoscopy PACS (picture archiving and communication system) and investigation of compression method for cine images

    Science.gov (United States)

    Osada, Masakazu; Tsukui, Hideki

    2002-09-01

    Picture Archiving and Communication System (PACS) is a system which connects imaging modalities, image archives, and image workstations to reduce film handling cost and improve hospital workflow. Handling diagnostic ultrasound and endoscopy images is challenging, because they produce large amounts of data, such as motion (cine) images of 30 frames per second, 640 x 480 in resolution, with 24-bit color, and enough image quality is required for clinical review. We have developed a PACS which is able to manage ultrasound and endoscopy cine images at the above resolution and frame rate, and have investigated a suitable compression method and compression ratio for clinical image review. Results show that clinicians require the capability for frame-by-frame forward and backward review of cine images, because they carefully look through motion images to find certain color patterns which may appear in one frame. In order to satisfy this requirement, we have chosen Motion JPEG, installed it, and confirmed that we could capture this specific pattern. As for the acceptable image compression ratio, we have performed a subjective evaluation. No subjects could tell the difference between original non-compressed images and 1:10 lossy compressed JPEG images. One subject could tell the difference between the original and 1:20 lossy compressed JPEG images, although it was acceptable. Thus, ratios of 1:10 to 1:20 are acceptable to reduce data amount and cost while maintaining quality for clinical review.

  8. Digitized hand-wrist radiographs: comparison of subjective and software-derived image quality at various compression ratios.

    Science.gov (United States)

    McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R

    2007-05-01

    The objectives of this study were to compare the effect of JPEG 2000 compression of hand-wrist radiographs on observer image quality qualitative assessment and to compare with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who determined the image quality rating on a scale of 1 to 5. A quantitative analysis was also performed by using a readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P quality. When we used quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios compared with the original TIFF images. There was excellent correlation (R2 >0.92) between qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.

  9. High bit depth infrared image compression via low bit depth codecs

    Science.gov (United States)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles, etc., will require 16-bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low-power embedded platforms where image or video data are compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16-bit depth images via 8-bit depth codecs in the following way. First, an input 16-bit depth image is mapped into 8-bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with an 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with the JPEG2000, JPEG-XT and H.265/HEVC codecs, which support direct compression of infrared images in 16-bit depth format. A preliminary result shows that two 8-bit H.264/AVC codecs can achieve a similar result to a 16-bit HEVC codec.
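
    A minimal sketch of the bit-depth split described above: a 16-bit image is separated into most and least significant byte planes that can each be fed to an ordinary 8-bit codec and reassembled after decoding; the codecs themselves are not invoked here.

    import numpy as np

    def split_msb_lsb(img16):
        """Split a uint16 image into two uint8 planes for two 8-bit codecs (e.g. H.264/AVC)."""
        msb = (img16 >> 8).astype(np.uint8)    # most significant bytes
        lsb = (img16 & 0xFF).astype(np.uint8)  # least significant bytes
        return msb, lsb

    def merge_msb_lsb(msb, lsb):
        return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

    infrared = np.random.randint(0, 65536, size=(240, 320), dtype=np.uint16)  # stands in for a 16-bit IR frame
    msb, lsb = split_msb_lsb(infrared)
    assert np.array_equal(merge_msb_lsb(msb, lsb), infrared)
    print("MSB/LSB split round trip OK")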

  10. Issues in multiview autostereoscopic image compression

    Science.gov (United States)

    Shah, Druti; Dodgson, Neil A.

    2001-06-01

    Multi-view auto-stereoscopic images and image sequences require large amounts of space for storage and large bandwidth for transmission. High bandwidth can be tolerated for certain applications where the image source and display are close together but, for long distance or broadcast, compression of information is essential. We report on the results of our two-year investigation into multi-view image compression. We present results based on four techniques: differential pulse code modulation (DPCM), disparity estimation, three-dimensional discrete cosine transform (3D-DCT), and principal component analysis (PCA). Our work on DPCM investigated the best predictors to use for predicting a given pixel. Our results show that, for a given pixel, it is generally the nearby pixels within a view that provide better prediction than the corresponding pixel values in adjacent views. This led to investigations into disparity estimation. We use both correlation and least-square error measures to estimate disparity. Both perform equally well. Combining this with DPCM led to a novel method of encoding, which improved the compression ratios by a significant factor. The 3D-DCT has been shown to be a useful compression tool, with compression schemes based on ideas from the two-dimensional JPEG standard proving effective. An alternative to 3D-DCT is PCA. This has proved to be less effective than the other compression methods investigated.

  11. A novel strategy to access high resolution DICOM medical images based on JPEG2000 interactive protocol

    Science.gov (United States)

    Tian, Yuan; Cai, Weihua; Sun, Jianyong; Zhang, Jianguo

    2008-03-01

    The demand for sharing medical information has kept rising. However, the transmission and displaying of high resolution medical images are limited if the network has a low transmission speed or the terminal devices have limited resources. In this paper, we present an approach based on the JPEG2000 Interactive Protocol (JPIP) to browse high resolution medical images in an efficient way. We designed and implemented an interactive image communication system with a client/server architecture and integrated it with a Picture Archiving and Communication System (PACS). In our interactive image communication system, the JPIP server works as the middleware between clients and PACS servers. Both desktop clients and wireless mobile clients can browse high resolution images stored in PACS servers by accessing the JPIP server. The client only makes simple requests, which identify the resolution, quality and region of interest, and downloads selected portions of the JPEG2000 code-stream instead of downloading and decoding the entire code-stream. After receiving a request from a client, the JPIP server downloads the requested image from the PACS server and then responds to the client by sending the appropriate code-stream. We also tested the performance of the JPIP server. The JPIP server runs stably and reliably under heavy load.

  12. Research on unequal error protection with punctured turbo codes in jpeg image transmission system

    Directory of Open Access Journals (Sweden)

    Lakhdar Moulay A.

    2007-01-01

    Full Text Available An investigation of Unequal Error Protection (UEP) methods applied to JPEG image transmission using turbo codes is presented. The JPEG image is partitioned into two groups, i.e., DC components and AC components, according to their respective sensitivity to channel noise. The highly sensitive DC components are better protected with a lower coding rate, while the less sensitive AC components use a higher coding rate. When we use the s-random interleaver and the s-random odd-even interleaver combined with odd-even puncturing, we can easily fix the local rate of the turbo code. We propose to modify the design of the s-random interleaver to fix the number of parity bits. A new UEP scheme for the Soft Output Viterbi Algorithm (SOVA) is also proposed to improve the performance in terms of Bit Error Rate (BER) and Peak Signal-to-Noise Ratio (PSNR). Simulation results are given to demonstrate how the UEP schemes outperform the equal error protection (EEP) scheme in terms of BER and PSNR.

  13. Artifact reduction of compressed images and video combining adaptive fuzzy filtering and directional anisotropic diffusion

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Forchhammer, Søren; Korhonen, Jari

    2011-01-01

    Fuzzy filtering is one of the recently developed methods for reducing distortion in compressed images and video. In this paper, we combine the powerful anisotropic diffusion equations with fuzzy filtering in order to reduce the impact of artifacts. Based on the directional nature of the blocking and ringing artifacts, we have applied directional anisotropic diffusion. Besides that, the selection of the adaptive threshold parameter for the diffusion coefficient has also improved the performance of the algorithm. Experimental results are reported on JPEG compressed images as well as MJPEG and H.264 compressed video.

  14. "Compressed" Compressed Sensing

    OpenAIRE

    Reeves, Galen; Gastpar, Michael

    2010-01-01

    The field of compressed sensing has shown that a sparse but otherwise arbitrary vector can be recovered exactly from a small number of randomly constructed linear projections (or samples). The question addressed in this paper is whether an even smaller number of samples is sufficient when there exists prior knowledge about the distribution of the unknown vector, or when only partial recovery is needed. An information-theoretic lower bound with connections to free probability theory and an upp...

  15. On-board optical image compression for future high-resolution remote sensing systems

    Science.gov (United States)

    Lambert-Nebout, Catherine; Latry, Christophe; Moury, Gilles A.; Parisot, Christophe; Antonini, Marc; Barlaud, Michel

    2000-12-01

    Future high resolution instruments planned by CNES to succeed SPOT5 will lead to higher bit rates because of the increase in both resolution and number of bits per pixel, not compensated by the reduced swath. Data compression is therefore needed, with compression ratio goals higher than the 2.81 value of SPOT5 obtained with a JPEG-like algorithm. The compression ratio should rise to values of typically 4 to 6, with artifacts remaining unnoticeable: the performance of the SPOT5 algorithm clearly has to be outdone. On the other hand, in the framework of optimized and low cost instruments, the noise level will increase. Furthermore, the Modulation Transfer Function (MTF) and the sampling grid will be fitted together to satisfy, at least roughly, the Shannon requirements. As with the Supermode sampling scheme of the SPOT5 panchromatic band, the images will have to be restored (deconvolution and denoising), which makes the assessment of the compression impact much more complex. This paper is a synthesis of numerous studies evaluating several data compression algorithms, some of them assuming that the adaptation between the sampling grid and the MTF is obtained by the quincunx Supermode scheme. The following points are analyzed: the compression decorrelator (DCT, LOT, wavelet, lifting), comparison with JPEG2000 for images acquired on a square grid, fitting the compression to the quincunx sampling, and on-board restoration (before compression) versus on-ground restoration. For each of them, we describe the proposed solutions, underlining the associated complexity and comparing them from a quantitative and qualitative point of view, giving the results of expert analyses.

  16. Saliency detection in the compressed domain for adaptive image retargeting.

    Science.gov (United States)

    Fang, Yuming; Chen, Zhenzhong; Lin, Weisi; Lin, Chia-Wen

    2012-09-01

    Saliency detection plays important roles in many image processing applications, such as regions of interest extraction and image resizing. Existing saliency detection models are built in the uncompressed domain. Since most images over Internet are typically stored in the compressed domain such as joint photographic experts group (JPEG), we propose a novel saliency detection model in the compressed domain in this paper. The intensity, color, and texture features of the image are extracted from discrete cosine transform (DCT) coefficients in the JPEG bit-stream. Saliency value of each DCT block is obtained based on the Hausdorff distance calculation and feature map fusion. Based on the proposed saliency detection model, we further design an adaptive image retargeting algorithm in the compressed domain. The proposed image retargeting algorithm utilizes multioperator operation comprised of the block-based seam carving and the image scaling to resize images. A new definition of texture homogeneity is given to determine the amount of removal block-based seams. Thanks to the directly derived accurate saliency information from the compressed domain, the proposed image retargeting algorithm effectively preserves the visually important regions for images, efficiently removes the less crucial regions, and therefore significantly outperforms the relevant state-of-the-art algorithms, as demonstrated with the in-depth analysis in the extensive experiments.
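    As a rough illustration of compressed-domain saliency, the sketch below derives an intensity feature (DC coefficient) and a texture feature (AC energy) for each 8x8 DCT block and scores each block by its average feature distance to all other blocks. The plain Euclidean distance is a simplified stand-in for the Hausdorff-distance fusion described above; the block size, feature choice and normalisation are assumptions for demonstration only.

    import numpy as np
    from scipy.fft import dctn

    def block_features(gray: np.ndarray, bs: int = 8) -> np.ndarray:
        """Per-block (intensity, texture) features from 8x8 DCT coefficients."""
        h, w = (gray.shape[0] // bs) * bs, (gray.shape[1] // bs) * bs
        feats = []
        for i in range(0, h, bs):
            for j in range(0, w, bs):
                coeffs = dctn(gray[i:i + bs, j:j + bs], norm="ortho")
                dc = coeffs[0, 0]                                     # intensity proxy
                ac_energy = max(np.sum(coeffs ** 2) - dc ** 2, 0.0)   # texture proxy
                feats.append((dc, np.sqrt(ac_energy)))
        return np.asarray(feats).reshape(h // bs, w // bs, 2)

    def block_saliency(feats: np.ndarray) -> np.ndarray:
        """Score each block by its mean feature distance to all other blocks."""
        nb_y, nb_x, _ = feats.shape
        flat = feats.reshape(-1, 2)
        flat = (flat - flat.mean(0)) / (flat.std(0) + 1e-9)   # normalise both features
        sal = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2).mean(axis=1)
        return sal.reshape(nb_y, nb_x)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(64, 64)).astype(float)
        print(block_saliency(block_features(img)).shape)   # (8, 8) block saliency map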

  17. A Complete Image Compression Scheme Based on Overlapped Block Transform with Post-Processing

    Directory of Open Access Journals (Sweden)

    Li B

    2006-01-01

    Full Text Available A complete system was built for high-performance image compression based on overlapped block transform. Extensive simulations and comparative studies were carried out for still image compression including benchmark images (Lena and Barbara), synthetic aperture radar (SAR) images, and color images. We have achieved consistently better results than three commercial products in the market (a Summus wavelet codec, a baseline JPEG codec, and a JPEG-2000 codec) for most images that we used in this study. Included in the system are two post-processing techniques based on morphological and median filters for enhancing the perceptual quality of the reconstructed images. The proposed system also supports the enhancement of a small region of interest within an image, which is of interest in various applications such as target recognition and medical diagnosis.

  18. Quality Evaluation and Nonuniform Compression of Geometrically Distorted Images Using the Quadtree Distortion Map

    Directory of Open Access Journals (Sweden)

    Cristina Costa

    2004-09-01

    Full Text Available The paper presents an analysis of the effects of lossy compression algorithms applied to images affected by geometrical distortion. It will be shown that the encoding-decoding process results in a nonhomogeneous image degradation in the geometrically corrected image, due to the different amount of information associated to each pixel. A distortion measure named quadtree distortion map (QDM), able to quantify this aspect, is proposed. Furthermore, QDM is exploited to achieve adaptive compression of geometrically distorted pictures, in order to ensure a uniform quality on the final image. Tests are performed using JPEG and JPEG2000 coding standards in order to quantitatively and qualitatively assess the performance of the proposed method.

  19. The effect of compression on clinical diagnosis of glaucoma based on non-analyzed confocal scanning laser ophthalmoscopy images.

    Science.gov (United States)

    Bélair, Marie-Lyne; Fansi, Alvine Kamdeu; Descovich, Denise; Leblanc, A R; Harasymowycz, Paul

    2005-01-01

    To evaluate the effect of different image compression formats of non-analyzed Heidelberg Retina Tomography (HRT; Heidelberg Engineering, Heidelberg, Germany) images on the diagnosis of glaucoma by ophthalmologists. Thirty-three topographic and reflectance images taken with the HRT representing different levels of disease were transformed using nine different compression formats. Three independent ophthalmologists, masked as to contour line and stereometric parameters, classified the original and compressed HRT images as normal, suspected glaucoma, or glaucoma, and Kappa agreement coefficients were calculated. The Tagged Image File Format had the largest file size and the Joint Photographic Experts Group (JPEG) 2000 format had the smallest size. The highest Kappa coefficient value was 1.00 for all ophthalmologists using the Tagged Image File Format. Kappa values for JPEG formats were all in the range of good to excellent agreement. Kappa values were lower for Portable Network Graphic and Graphics Interchange Format compression formats. Image compression with JPEG 2000 at a ratio of 20:1 provided sufficient quality for glaucoma analysis in conjunction with a relatively small image size format, and may prove to be attractive for HRT telemedicine applications. Further clinical studies validating the usefulness of interpreting non-analyzed HRT images are required.

  20. Sequential Principal Component Analysis -An Optimal and Hardware-Implementable Transform for Image Compression

    Science.gov (United States)

    Duong, Tuan A.; Duong, Vu A.

    2009-01-01

    This paper presents the JPL-developed Sequential Principal Component Analysis (SPCA) algorithm for feature extraction / image compression, based on "dominant-term selection" unsupervised learning technique that requires an order-of-magnitude lesser computation and has simpler architecture compared to the state of the art gradient-descent techniques. This algorithm is inherently amenable to a compact, low power and high speed VLSI hardware embodiment. The paper compares the lossless image compression performance of the JPL's SPCA algorithm with the state of the art JPEG2000, widely used due to its simplified hardware implementability. JPEG2000 is not an optimal data compression technique because of its fixed transform characteristics, regardless of its data structure. On the other hand, conventional Principal Component Analysis based transform (PCA-transform) is a data-dependent-structure transform. However, it is not easy to implement the PCA in compact VLSI hardware, due to its highly computational and architectural complexity. In contrast, the JPL's "dominant-term selection" SPCA algorithm allows, for the first time, a compact, low-power hardware implementation of the powerful PCA algorithm. This paper presents a direct comparison of the JPL's SPCA versus JPEG2000, incorporating the Huffman and arithmetic coding for completeness of the data compression operation. The simulation results show that JPL's SPCA algorithm is superior as an optimal data-dependent-transform over the state of the art JPEG2000. When implemented in hardware, this technique is projected to be ideally suited to future NASA missions for autonomous on-board image data processing to improve the bandwidth of communication.

  1. Iris Recognition: The Consequences of Image Compression

    Directory of Open Access Journals (Sweden)

    Bishop DanielA

    2010-01-01

    Full Text Available Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  2. JPEG 2000 Compression of Direct Digital Images: Effects on the Detection of Periapical Radiolucencies and Perceived Image Quality

    National Research Council Canada - National Science Library

    Massey, Mark

    2003-01-01

    ...) has been introduced to replace or augment radiographic film. Studies comparing diagnostic accuracy of DDI and conventional film for detection of artificially created periapical lesions have revealed comparable results...

  3. Building a Steganography Program Including How to Load, Process, and Save JPEG and PNG Files in Java

    Science.gov (United States)

    Courtney, Mary F.; Stix, Allen

    2006-01-01

    Instructors teaching beginning programming classes are often interested in exercises that involve processing photographs (i.e., files stored as .jpeg). They may wish to offer activities such as color inversion, the color manipulation effects archived with pixel thresholding, or steganography, all of which Stevenson et al. [4] assert are sought by…

  4. Joint compression-segmentation of functional MRI data sets

    Science.gov (United States)

    Zhang, Ning; Wu, Mo; Forchhammer, Soren; Wu, Xiaolin

    2005-04-01

    Functional Magnetic Resonance Imaging (fMRI) data sets are four dimensional (4D) and very large in size. Compression can enhance system performance in terms of storage and transmission capacities. Two approaches are investigated: adaptive DPCM and integer wavelets. In the DPCM approach, each voxel is coded as a 1D signal in time. Due to the spatial coherence of human anatomy and the similarities in responses of a given substance to stimuli, we classify the voxels by quantizing autoregressive coefficients of the associated time sequences. The resulting 2D classification map is sent as side information. Each voxel time sequence is DPCM coded using a quantized autoregressive model. The prediction residuals are coded by simple Rice coding for high decoder throughput. In the wavelet approach, the 4D fMRI data set is mapped to a 3D data set, with the 3D volume at each time instance being laid out into a 2D plane as a slice mosaic. 3D integer wavelet packets are used for lossless compression of fMRI data. The wavelet coefficients are compressed by 3D context-based adaptive arithmetic coding. An object-oriented compression mode is also introduced in the wavelet codec. An elliptic mask combined with the classification of the background is used to segment the regions of interest from the background. Significantly higher lossless compression of 4D fMRI than JPEG 2000 and JPEG-LS is achieved by both methods. The 2D classification map for compression can also be used for image segmentation in 3D space for analysis and recognition purposes. This segmentation supports object-based random access to very large 4D data volumes. The time sequence of DPCM prediction residuals can be analyzed to yield information on the responses of the imaged anatomy to the stimuli. The proposed wavelet method provides an object-oriented progressive (lossy to lossless) compression of 4D fMRI data set.
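    A minimal sketch of the DPCM branch described above: each voxel time series is predicted with a first-order autoregressive model and the integer residuals are Rice-coded. The AR order, the coefficient quantization and the fixed Rice parameter are simplifying assumptions for illustration only.

    import numpy as np

    def dpcm_residuals(ts: np.ndarray) -> np.ndarray:
        """First-order DPCM: predict x[t] from a*x[t-1] with a coarsely quantized coefficient."""
        x = ts.astype(np.int64)
        a = float(np.corrcoef(x[:-1], x[1:])[0, 1]) if len(x) > 2 else 1.0
        a_q = round(np.clip(a, 0.0, 1.0) * 16) / 16      # quantized coefficient, sent as side info
        pred = np.round(a_q * x[:-1]).astype(np.int64)
        return np.concatenate(([x[0]], x[1:] - pred))    # first sample stored verbatim

    def rice_encode(values: np.ndarray, k: int) -> str:
        """Rice code with parameter k; signed residuals are zigzag-mapped to unsigned first."""
        bits = []
        for v in values:
            u = (abs(int(v)) << 1) - (1 if v < 0 else 0)   # zigzag: 0,-1,1,-2,... -> 0,1,2,3,...
            q, r = u >> k, u & ((1 << k) - 1)
            bits.append("1" * q + "0" + format(r, f"0{k}b"))
        return "".join(bits)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        series = np.cumsum(rng.integers(-3, 4, size=200)) + 1000   # toy voxel time series
        residuals = dpcm_residuals(series)
        print(len(rice_encode(residuals, k=3)), "bits for", series.size, "samples")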

  5. Multiple descriptions based on multirate coding for JPEG 2000 and H.264/AVC.

    Science.gov (United States)

    Tillo, Tammam; Baccaglini, Enrico; Olmo, Gabriella

    2010-07-01

    Multiple description coding (MDC) makes use of redundant representations of multimedia data to achieve resiliency. Descriptions should be generated so that the quality obtained when decoding a subset of them only depends on their number and not on the particular received subset. In this paper, we propose a method based on the principle of encoding the source at several rates, and properly blending the data encoded at different rates to generate the descriptions. The aim is to achieve efficient redundancy exploitation, and easy adaptation to different network scenarios by means of fine tuning of the encoder parameters. We apply this principle to both JPEG 2000 images and H.264/AVC video data. We consider as the reference scenario the distribution of contents on application-layer overlays with multiple-tree topology. The experimental results reveal that our method favorably compares with state-of-art MDC techniques.

  6. Compression through decomposition into browse and residual images

    Science.gov (United States)

    Novik, Dmitry A.; Tilton, James C.; Manohar, M.

    1993-01-01

    Economical archival and retrieval of image data is becoming increasingly important considering the unprecedented data volumes expected from the Earth Observing System (EOS) instruments. For cost effective browsing the image data (possibly from remote site), and retrieving the original image data from the data archive, we suggest an integrated image browse and data archive system employing incremental transmission. We produce our browse image data with the JPEG/DCT lossy compression approach. Image residual data is then obtained by taking the pixel by pixel differences between the original data and the browse image data. We then code the residual data with a form of variable length coding called diagonal coding. In our experiments, the JPEG/DCT is used at different quality factors (Q) to generate the browse and residual data. The algorithm has been tested on band 4 of two Thematic mapper (TM) data sets. The best overall compression ratios (of about 1.7) were obtained when a quality factor of Q=50 was used to produce browse data at a compression ratio of 10 to 11. At this quality factor the browse image data has virtually no visible distortions for the images tested.
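    The browse/residual decomposition is easy to sketch end to end: a lossy JPEG browse image plus a losslessly packed residual reconstructs the original exactly. In the sketch below, zlib stands in for the diagonal variable-length coder mentioned above, and Q=50 follows the quality factor reported as the best trade-off; everything else is an illustrative assumption.

    import io
    import zlib
    import numpy as np
    from PIL import Image

    def browse_and_residual(original: np.ndarray, quality: int = 50):
        """Return (JPEG browse bytes, losslessly packed residual bytes)."""
        buf = io.BytesIO()
        Image.fromarray(original).save(buf, format="JPEG", quality=quality)
        browse_bytes = buf.getvalue()
        browse = np.asarray(Image.open(io.BytesIO(browse_bytes)), dtype=np.int16)
        residual = original.astype(np.int16) - browse            # exact pixel differences
        residual_bytes = zlib.compress(residual.tobytes(), 9)    # stand-in for diagonal coding
        return browse_bytes, residual_bytes

    def reconstruct(browse_bytes: bytes, residual_bytes: bytes, shape) -> np.ndarray:
        browse = np.asarray(Image.open(io.BytesIO(browse_bytes)), dtype=np.int16)
        residual = np.frombuffer(zlib.decompress(residual_bytes), dtype=np.int16).reshape(shape)
        return (browse + residual).astype(np.uint8)

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
        b, r = browse_and_residual(img)
        assert np.array_equal(reconstruct(b, r, img.shape), img)   # lossless overall
        print(f"browse {len(b)} B, residual {len(r)} B, original {img.size} B")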

  7. Compression-Based Compressed Sensing

    OpenAIRE

    Rezagah, Farideh Ebrahim; Jalali, Shirin; Erkip, Elza; Poor, H. Vincent

    2016-01-01

    Modern compression algorithms exploit complex structures that are present in signals to describe them very efficiently. On the other hand, the field of compressed sensing is built upon the observation that "structured" signals can be recovered from their under-determined set of linear projections. Currently, there is a large gap between the complexity of the structures studied in the area of compressed sensing and those employed by the state-of-the-art compression codes. Recent results in the...

  8. Compression embedding

    Science.gov (United States)

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
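    A heavily simplified stand-in for the embedding step: the patent exploits the one-unit uncertainty of adjacent quantized index values, which the toy sketch below approximates by nudging selected indices so their least significant bit carries the payload. This shows only where the auxiliary data rides in the pipeline before entropy coding, not the patented procedure itself.

    import numpy as np

    def embed_bits(indices: np.ndarray, payload: str) -> np.ndarray:
        """Force the LSB of the first len(payload) indices to match the payload bits."""
        out = indices.copy()
        flat = out.ravel()
        for pos, bit in enumerate(payload):
            v = int(flat[pos])
            if (v & 1) != int(bit):
                # move to the adjacent index value, stepping away from zero
                flat[pos] = v + 1 if v >= 0 else v - 1
        return out

    def extract_bits(indices: np.ndarray, n_bits: int) -> str:
        return "".join(str(int(v) & 1) for v in indices.ravel()[:n_bits])

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        quantized = rng.integers(-15, 16, size=(8, 8))   # toy quantized coefficient block
        message = "1011001"
        stego = embed_bits(quantized, message)
        assert extract_bits(stego, len(message)) == message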

  9. High-speed low-complexity video coding with EDiCTius: a DCT coding proposal for JPEG XS

    Science.gov (United States)

    Richter, Thomas; Fößel, Siegfried; Keinert, Joachim; Scherl, Christian

    2017-09-01

    In its 71st meeting, the JPEG committee issued a call for low complexity, high speed image coding, designed to address the needs of low-cost video-over-IP applications. As an answer to this call, Fraunhofer IIS and the Computing Center of the University of Stuttgart jointly developed an embedded DCT image codec requiring only minimal resources while maximizing throughput on FPGA and GPU implementations. Objective and subjective tests performed for the 73rd meeting confirmed its excellent performance and suitability for its purpose, and it was selected as one of the two key contributions for the development of a joint test model. In this paper, its authors describe the design principles of the codec, provide a high-level overview of the encoder and decoder chain, and present evaluation results on the test corpus selected by the JPEG committee.

  10. Blind steganalysis method for JPEG steganography combined with the semisupervised learning and soft margin support vector machine

    Science.gov (United States)

    Dong, Yu; Zhang, Tao; Xi, Ling

    2015-01-01

    Stego images embedded by unknown steganographic algorithms currently may not be detected by using steganalysis detectors based on binary classifier. However, it is difficult to obtain high detection accuracy by using universal steganalysis based on one-class classifier. For solving this problem, a blind detection method for JPEG steganography was proposed from the perspective of information theory. The proposed method combined the semisupervised learning and soft margin support vector machine with steganalysis detector based on one-class classifier to utilize the information in test data for improving detection performance. Reliable blind detection for JPEG steganography was realized only using cover images for training. The experimental results show that the proposed method can contribute to improving the detection accuracy of steganalysis detector based on one-class classifier and has good robustness under different source mismatch conditions.

  11. Near-lossless compression of medical images through entropy-coded DPCM.

    Science.gov (United States)

    Chen, K; Ramabadran, T V

    1994-01-01

    The near-lossless, i.e., lossy but high-fidelity, compression of medical images using the entropy-coded DPCM method is investigated. A source model with multiple contexts and arithmetic coding are used to enhance the compression performance of the method. In implementing the method, two different quantizers, each with a large number of quantization levels, are considered. Experiments involving several MR (magnetic resonance) and US (ultrasound) images show that the entropy-coded DPCM method can provide compression ratios in the range of 4 to 10 with a peak SNR of about 50 dB for 8-bit medical images. The use of multiple contexts is found to improve the compression performance by about 25% to 30% for MR images and 30% to 35% for US images. A comparison with the JPEG standard reveals that the entropy-coded DPCM method can provide about 7 to 8 dB higher SNR for the same compression performance.
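    The core near-lossless mechanism can be shown in a few lines: quantizing the prediction residual with step 2*delta + 1 bounds the per-pixel reconstruction error by delta. The sketch below uses a plain previous-pixel predictor in place of the multi-context model and omits the arithmetic coder; it is an illustration of the principle, not the paper's codec.

    import numpy as np

    def near_lossless_dpcm(row: np.ndarray, delta: int = 2):
        """DPCM a scanline with a uniform residual quantizer of step 2*delta + 1."""
        step = 2 * delta + 1
        recon = np.empty_like(row, dtype=np.int32)
        symbols = []
        prev = 0
        for i, x in enumerate(row.astype(np.int32)):
            residual = int(x) - prev
            q = int(np.floor((residual + delta) / step))   # quantizer index, |error| <= delta
            symbols.append(q)                              # entropy-coded in the real system
            recon[i] = np.clip(prev + q * step, 0, 255)    # decoder-side reconstruction
            prev = int(recon[i])
        return symbols, recon

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        line = rng.integers(0, 256, size=64, dtype=np.uint8)
        syms, rec = near_lossless_dpcm(line, delta=2)
        assert np.max(np.abs(rec.astype(int) - line.astype(int))) <= 2   # error bound holds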

  12. Mutual Image-Based Authentication Framework with JPEG2000 in Wireless Environment

    Directory of Open Access Journals (Sweden)

    Ginesu G

    2006-01-01

    Full Text Available Currently, together with the development of wireless connectivity, the need for a reliable and user-friendly authentication system becomes ever more important. New applications, such as e-commerce or home banking, require a strong level of protection, allowing for verification of legitimate users' identity and enabling the user to distinguish trusted servers from shadow ones. A novel framework for image-based authentication (IBA) is then proposed and evaluated. In order to provide mutual authentication, the proposed method integrates an IBA password technique with a challenge-response scheme based on a shared secret key for image scrambling. The wireless environment is mainly addressed by the proposed system, which tries to overcome the severe constraints on security, data transmission capability, and user friendliness imposed by such an environment. In order to achieve such results, the system offers a strong solution for authentication, taking into account usability and avoiding the need for hardware upgrades. Data and application scalability is provided through the JPEG2000 standard and JPIP framework.

  13. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    Science.gov (United States)

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was applied to the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.

  14. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2012-01-01

    Full Text Available An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was applied to the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.

  15. About a method for compressing x-ray computed microtomography data

    Science.gov (United States)

    Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš

    2018-04-01

    The management of scientific data is of high importance, especially for experimental techniques that produce big data volumes. Such a technique is x-ray computed tomography (CT), and its community has introduced advanced data formats which allow for better management of experimental data. Rather than the organization of the data and the associated metadata, the main topic of this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy), using images acquired from various types of samples. This study covers parallel beam geometry, but it could be easily extended to a cone-beam one. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques and does so by applying it to experimental data. Beyond the methodology framework, this study presents and examines the use of JPEG-XR in combination with HDF5 and TIFF formats, providing insights and strategies on data compression and image quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears as a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.

  16. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel compression algorithm that allows transparent incorporation of various estimates for probability distribution. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.

  17. Scanned document compression using block-based hybrid video codec.

    Science.gov (United States)

    Zaghetto, Alexandre; de Queiroz, Ricardo L

    2013-06-01

    This paper proposes a hybrid pattern matching/transform-based compression method for scanned documents. The idea is to use regular video interframe prediction as a pattern matching algorithm that can be applied to document coding. We show that this interpretation may generate residual data that can be efficiently compressed by a transform-based encoder. The efficiency of this approach is demonstrated using H.264/advanced video coding (AVC) as a high-quality single and multipage document compressor. The proposed method, called advanced document coding (ADC), uses segments of the originally independent scanned pages of a document to create a video sequence, which is then encoded through regular H.264/AVC. The encoding performance is unrivaled. Results show that ADC outperforms AVC-I (H.264/AVC operating in pure intramode) and JPEG2000 by up to 2.7 and 6.2 dB, respectively. Superior subjective quality is also achieved.

  18. Robust CCSDS Image Data to JPEG2K Transcoding, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Images from space satellites, whether deep space or planetary, are often compressed in a lossy manner to ease transmission requirements such as power, error-rate,...

  19. Robust CCSDS Image Data to JPEG2K Transcoding, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Images from space satellites are often compressed in a lossy manner to ease transmission requirements such as power, error-rate, and data stream size. These...

  20. Recursive Compressed Sensing

    OpenAIRE

    Freris, Nikolaos M.; Öçal, Orhan; Vetterli, Martin

    2013-01-01

    We introduce a recursive algorithm for performing compressed sensing on streaming data. The approach consists of a) recursive encoding, where we sample the input stream via overlapping windowing and make use of the previous measurement in obtaining the next one, and b) recursive decoding, where the signal estimate from the previous window is utilized in order to achieve faster convergence in an iterative optimization scheme applied to decode the new one. To remove estimation bias, a two-step ...

  1. Compressed Counting Meets Compressed Sensing

    OpenAIRE

    Li, Ping; Zhang, Cun-Hui; Zhang, Tong

    2013-01-01

    Compressed sensing (sparse signal recovery) has been a popular and important research topic in recent years. By observing that natural signals are often nonnegative, we propose a new framework for nonnegative signal recovery using Compressed Counting (CC). CC is a technique built on maximally-skewed p-stable random projections originally developed for data stream computations. Our recovery procedure is computationally very efficient in that it requires only one linear scan of the coordinates....

  2. PREVIOUS SECOND TRIMESTER ABORTION

    African Journals Online (AJOL)

    PNLC

    PREVIOUS SECOND TRIMESTER ABORTION: A risk factor for third trimester uterine rupture in three ... for accurate diagnosis of uterine rupture. KEY WORDS: Induced second trimester abortion - Previous uterine surgery - Uterine rupture. ..... scarred uterus during second trimester misoprostol- induced labour for a missed ...

  3. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    Science.gov (United States)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

    The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner. Characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation on the image after all operations in the digital image processing chain, just before compressing the given raster image. The second strategy is based on predicting the noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the parameters of the transformations the image will be subjected to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode. However, it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on many real-life color images acquired by digital cameras and shown to provide a more than two-fold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to the SHQ-compressed images.
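    A rough sketch of the first strategy: estimate the noise standard deviation blindly and lower the JPEG quality setting as the noise grows. The MAD-based estimator on a Laplacian high-pass residual and the linear sigma-to-quality mapping are generic illustrative choices, not the estimators used in the record above.

    import io
    import numpy as np
    from PIL import Image

    def estimate_noise_sigma(gray: np.ndarray) -> float:
        """Robust noise estimate: MAD of a Laplacian high-pass, scaled for Gaussian noise."""
        g = gray.astype(float)
        hp = 4 * g[1:-1, 1:-1] - g[:-2, 1:-1] - g[2:, 1:-1] - g[1:-1, :-2] - g[1:-1, 2:]
        # The 5-tap Laplacian amplifies i.i.d. noise variance by 20, hence the sqrt(20).
        return float(np.median(np.abs(hp)) / (0.6745 * np.sqrt(20)))

    def noise_adaptive_jpeg(gray: np.ndarray) -> bytes:
        sigma = estimate_noise_sigma(gray)
        # Noisier images tolerate coarser quantization: lower the quality with sigma.
        quality = int(np.clip(95 - 4 * sigma, 40, 95))
        buf = io.BytesIO()
        Image.fromarray(gray).save(buf, format="JPEG", quality=quality)
        return buf.getvalue()

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        clean = np.tile(np.linspace(0, 255, 256, dtype=np.uint8), (256, 1))
        noisy = np.clip(clean + rng.normal(0, 8, clean.shape), 0, 255).astype(np.uint8)
        print(len(noise_adaptive_jpeg(clean)), len(noise_adaptive_jpeg(noisy)))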

  4. [Interference hyperspectral data compression based on spectral classification and local DPCM].

    Science.gov (United States)

    Tu, Xiao-Long; Huang, Min; Lü, Qun-Bo; Wang, Jian-Wei; Pei, Lin-Lin

    2013-05-01

    In order to obtain a high compression ratio, the present article exploits the spatial correlation and the interference-spectral correlation of interference hyperspectral image data and proposes a new compression algorithm that combines spectral classification with local DPCM. The algorithm first performs spectral classification of the whole interference hyperspectral image, yielding a class-number matrix over the two-dimensional spatial domain and a spectral classification library for the interference spectra; local DPCM is then applied to the spectral classification library to obtain further compression. As the first step of the compression, the spectral classification is very important to the compression result. The article analyzes how the classification criterion and classification accuracy affect the compression result; the relative Euclidean distance criterion performs better than the angle criterion and the interference RQE criterion. Finally, an appropriate criterion is chosen and the combined compression algorithm is implemented. Compared to JPEG2000, the combined compression algorithm achieves a better compression result.

  5. Compressive beamforming

    DEFF Research Database (Denmark)

    Xenaki, Angeliki; Mosegaard, Klaus

    2014-01-01

    Sound source localization with sensor arrays involves the estimation of the direction-of-arrival (DOA) from a limited number of observations. Compressive sensing (CS) solves such underdetermined problems achieving sparsity, thus improved resolution, and can be solved efficiently with convex...

  6. Assessing diabetic retinopathy using two-field digital photography and the influence of JPEG-compression

    NARCIS (Netherlands)

    Stellingwerf, C; Hardus, PLLJ; Hooymans, JMM

    Objective: To study the effectiveness of two digital 50° photographic fields per eye, stored compressed or integrally, in the grading of diabetic retinopathy, in comparison to 35-mm colour slides. Subjects and methods: Two-field digital non-stereoscopic retinal photographs and two-field 35-mm

  7. Image Compression using Haar and Modified Haar Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Mohannad Abid Shehab Ahmed

    2013-04-01

    Full Text Available Efficient image compression approaches can provide the best solutions to the recent growth of the data intensive and multimedia based applications. As presented in many papers, the Haar matrix-based methods and wavelet analysis can be used in various areas of image processing such as edge detection, preserving, smoothing or filtering. In this paper, color image compression analysis and synthesis based on Haar and modified Haar is presented. The standard Haar wavelet transformation with N=2 is composed of a sequence of low-pass and high-pass filters, known as a filter bank; the vertical and horizontal Haar filters are composed to construct four 2-dimensional filters, and such filters are applied directly to the image to speed up the implementation of the Haar wavelet transform. The modified Haar technique is studied and implemented for odd values of N, i.e. N=3 and N=5, to generate many solution sets, and these sets are tested using the energy function or a numerical method to get the optimum one. The Haar transform is simple, efficient in memory usage due to high zero value spread (it can use the sparse principle), and exactly reversible without the edge effects, as compared to the DCT (Discrete Cosine Transform). The implemented Matlab simulation results prove the effectiveness of DWT (Discrete Wavelet Transform) algorithms based on the Haar and Modified Haar techniques in attaining an efficient compression ratio (C.R.), achieving a higher peak signal to noise ratio (PSNR), and producing images that are much smoother as compared to standard JPEG, especially for high C.R. Finally, a comparison between the standard JPEG, Haar, and Modified Haar techniques confirms the superior capability of Modified Haar over the others.
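    The standard N=2 Haar step that the modified schemes build on is a single pass of pairwise averages and differences applied along rows and then columns. A minimal sketch is given below; the modified Haar variants for N=3 and N=5 described above are not reproduced.

    import numpy as np

    def haar2d_level(img: np.ndarray):
        """One level of the orthonormal 2D Haar transform, returning the four subbands."""
        x = img.astype(float)

        def analyse(rows: np.ndarray) -> np.ndarray:
            avg = (rows[:, 0::2] + rows[:, 1::2]) / np.sqrt(2)   # low-pass half
            dif = (rows[:, 0::2] - rows[:, 1::2]) / np.sqrt(2)   # high-pass half
            return np.hstack([avg, dif])

        x = analyse(x)         # filter along rows
        x = analyse(x.T).T     # filter along columns
        h, w = x.shape
        return {"LL": x[:h // 2, :w // 2], "HL": x[:h // 2, w // 2:],
                "LH": x[h // 2:, :w // 2], "HH": x[h // 2:, w // 2:]}

    if __name__ == "__main__":
        rng = np.random.default_rng(6)
        image = rng.integers(0, 256, size=(8, 8)).astype(float)
        bands = haar2d_level(image)
        # Orthonormal transform: total energy is preserved across the four subbands.
        assert np.isclose(sum(np.sum(b ** 2) for b in bands.values()), np.sum(image ** 2))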

  8. Laparoscopy After Previous Laparotomy

    Directory of Open Access Journals (Sweden)

    Zulfo Godinjak

    2006-11-01

    Full Text Available Following abdominal surgery, extensive adhesions often occur and they can cause difficulties during laparoscopic operations. However, previous laparotomy is not considered to be a contraindication for laparoscopy. The aim of this study is to show that insertion of the Veres needle in the region of the umbilicus is a safe method for creating a pneumoperitoneum for laparoscopic operations after previous laparotomy. In the last three years, we have performed 144 laparoscopic operations in patients who previously underwent one or two laparotomies. Pathology of the digestive system, genital organs, Cesarean section or abdominal war injuries were the most common causes of previous laparotomy. During those operations, or while entering the abdominal cavity, we have not experienced any complications, while in 7 patients we performed conversion to laparotomy following the diagnostic laparoscopy. In all patients, Veres needle and trocar insertion in the umbilical region was performed, namely the closed laparoscopy technique. In no patient were adhesions found in the region of the umbilicus, and no abdominal organs were injured.

  9. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.

  10. Multispectral image compression algorithm based on spectral clustering and wavelet transform

    Science.gov (United States)

    Huang, Rong; Qiao, Weidong; Yang, Jianfeng; Wang, Hong; Xue, Bin; Tao, Jinyou

    2017-11-01

    In this paper, a method based on spectral clustering and the discrete wavelet transform (DWT) is proposed to address the high degree of spatial and spectral redundancy left by current multispectral image compression algorithms. First, the spectral bands are grouped by a spectral clustering method, so that similar bands fall into the same cluster and the redundancy between spectra is removed. Then, the wavelet transform and coding are applied to the class representatives to eliminate the spatial redundancy, and the difference components are handled with the Karhunen-Loeve transform (KLT) and the wavelet transform. Experimental results show that, compared with JPEG2000 and the KLT + DWT algorithm, the proposed method achieves a better peak signal-to-noise ratio and compression ratio, and it is suitable for the compression of different spectral bands.

  11. Compressed sensing for body MRI.

    Science.gov (United States)

    Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh

    2017-04-01

    The introduction of compressed sensing for increasing imaging speed in magnetic resonance imaging (MRI) has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This article presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notion of sparsity, incoherence, and nonlinear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the article discusses current challenges and future opportunities. 5 J. Magn. Reson. Imaging 2017;45:966-987. © 2016 International Society for Magnetic Resonance in Medicine.

  12. Using a visual discrimination model for the detection of compression artifacts in virtual pathology images.

    Science.gov (United States)

    Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S

    2011-02-01

    A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.

  13. Lossless Image Compression Based on Multiple-Tables Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Rung-Ching Chen

    2009-01-01

    Full Text Available This paper is intended to present a lossless image compression method based on multiple-tables arithmetic coding (MTAC) to encode a gray-level image f. First, the MTAC method employs a median edge detector (MED) to reduce the entropy rate of f. The gray levels of two adjacent pixels in an image are usually similar. A base-switching transformation approach is then used to reduce the spatial redundancy of the image. The gray levels of some pixels in an image are more common than those of others. Finally, the arithmetic encoding method is applied to reduce the coding redundancy of the image. To promote high performance of the arithmetic encoding method, the MTAC method first classifies the data and then encodes each cluster of data using a distinct code table. The experimental results show that, in most cases, the MTAC method provides a higher efficiency in use of storage space than the lossless JPEG2000 does.
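    The median edge detector (MED) named above is the same gradient-adjusted predictor used in LOCO-I / JPEG-LS: from the left (a), upper (b) and upper-left (c) neighbours it selects min(a,b), max(a,b) or a+b-c depending on whether c suggests an edge. The sketch below shows only this prediction stage; the base-switching transform and the multiple-table arithmetic coder are omitted.

    import numpy as np

    def med_predict(a: int, b: int, c: int) -> int:
        """MED predictor; a = left, b = above, c = above-left neighbour."""
        if c >= max(a, b):
            return min(a, b)
        if c <= min(a, b):
            return max(a, b)
        return a + b - c

    def med_residual(img: np.ndarray) -> np.ndarray:
        """Prediction residual image; out-of-image neighbours are taken as zero."""
        x = img.astype(np.int32)
        res = np.zeros_like(x)
        h, w = x.shape
        for i in range(h):
            for j in range(w):
                a = x[i, j - 1] if j > 0 else 0
                b = x[i - 1, j] if i > 0 else 0
                c = x[i - 1, j - 1] if i > 0 and j > 0 else 0
                res[i, j] = x[i, j] - med_predict(a, b, c)
        return res   # residual entropy is typically much lower than the image's

    if __name__ == "__main__":
        rng = np.random.default_rng(7)
        smooth = np.cumsum(rng.integers(0, 3, size=(32, 32)), axis=1).astype(np.uint8)
        print(np.abs(med_residual(smooth)).mean())   # small residual magnitudes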

  14. Computed Quality Assessment of MPEG4-compressed DICOM Video Data.

    Science.gov (United States)

    Frankewitsch, Thomas; Söhnlein, Sven; Müller, Marcel; Prokosch, Hans-Ulrich

    2005-01-01

    Digital Imaging and Communication in Medicine (DICOM) has become one of the most popular standards in medicine. This standard specifies the exact procedures by which digital images are exchanged between devices, either using a network or a storage medium. Sources for images vary; therefore there exist definitions for the exchange of CR, CT, NMR, angiography, sonography and so on. As the standard spreads and the number of sources grows, the data volume increases, too. This affects storage and traffic. While for long-term storage data compression is generally not accepted at the moment, there are many situations where data compression is possible: telemedicine for educational purposes (e.g. students at home using low speed internet connections), presentations with standard-resolution video projectors, or even delivery to wards together with the written findings. DICOM includes compression: for still images there is JPEG; for video, MPEG-2 is adopted. In recent years MPEG-2 has evolved into MPEG-4, which compresses data even better, but the risk of significant errors increases, too. The effects of compression have been analyzed for entertainment movies, but these are not comparable to videos of physical examinations (e.g. echocardiography). In medical videos an individual image plays a more important role. Erroneous single images affect total quality even more. Additionally, the effect of compression cannot be generalized from one test series to all videos: the result depends strongly on the source. Some investigations have been presented in which videos compressed with different MPEG-4 algorithms were compared and rated manually, but they describe only results for a selected testbed. In this paper, methods derived from video rating are presented and discussed for automated quality control of the compression of medical videos, primarily stored in DICOM containers.

  15. A Comparative Study of Compression Methods and the Development of CODEC Program of Biological Signal for Emergency Telemedicine Service

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, T.S.; Kim, J.S. [Changwon National University, Changwon (Korea); Lim, Y.H. [Visionite Co., Ltd., Seoul (Korea); Yoo, S.K. [Yonsei University, Seoul (Korea)

    2003-05-01

    In an emergency telemedicine system such as the High-quality Multimedia based Real-time Emergency Telemedicine (HMRET) service, it is very important to examine the status of the patient continuously using the multimedia data including the biological signals (ECG, BP, Respiration, SpO2) of the patient. In order to transmit these data real time through the communication means which have the limited transmission capacity, it is also necessary to compress the biological data besides other multimedia data. For this purpose, we investigate and compare the ECG compression techniques in the time domain and in the wavelet transform domain, and present an effective lossless compression method of the biological signals using JPEG Huffman table for an emergency telemedicine system. And, for the HMRET service, we developed the lossless compression and reconstruction program of the biological signals in MSVC++ 6.0 using DPCM method and JPEG Huffman table, and tested in an internet environment. (author). 15 refs., 17 figs., 7 tabs.

  16. Research on lossless compression of true color RGB image with low time and space complexity

    Science.gov (United States)

    Pan, ShuLin; Xie, ChengJun; Xu, Lin

    2008-12-01

    The method eliminates correlated spatial and energy redundancy with a DWT lifting scheme and reduces the complexity of the image with an algebraic transform among the RGB components. This paper proposes an improved Rice coding algorithm that uses an enumerating DWT lifting scheme which fits images of any size through image renormalization. The algorithm codes and decodes the pixels of an image without backtracking; it supports LOCO-I and can also be applied to a coder/decoder. Simulation analysis indicates that the proposed method achieves high image compression. Compared with Lossless-JPG, PNG (Microsoft), PNG (Rene), PNG (Photoshop), PNG (Anix PicViewer), PNG (ACDSee), PNG (Ulead Photo Explorer), JPEG2000, PNG (KoDa Inc), SPIHT and JPEG-LS, the lossless image compression ratio improved by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5%, and 10%, respectively, on 24 RGB images provided by KoDa Inc. On a Pentium IV with a 2.20 GHz CPU and 256 MB of RAM, the coding speed of the proposed coder is about 21 times that of SPIHT, with a performance efficiency gain of roughly 166%; the decoding speed is about 17 times that of SPIHT, with an efficiency gain of roughly 128%.

  17. Compression limits in cascaded quadratic soliton compression

    DEFF Research Database (Denmark)

    Bache, Morten; Bang, Ole; Krolikowski, Wieslaw

    2008-01-01

    Cascaded quadratic soliton compressors generate under optimal conditions few-cycle pulses. Using theory and numerical simulations in a nonlinear crystal suitable for high-energy pulse compression, we address the limits to the compression quality and efficiency.

  18. Spectral Distortion in Lossy Compression of Hyperspectral Data

    Directory of Open Access Journals (Sweden)

    Bruno Aiazzi

    2012-01-01

    Full Text Available Distortion allocation varying with wavelength in lossy compression of hyperspectral imagery is investigated, with the aim of minimizing the spectral distortion between original and decompressed data. The absolute angular error, or spectral angle mapper (SAM), is used to quantify spectral distortion, while radiometric distortions are measured by maximum absolute deviation (MAD) for near-lossless methods, for example, differential pulse code modulation (DPCM), or mean-squared error (MSE) for lossy methods, for example, spectral decorrelation followed by JPEG 2000. Two strategies of interband distortion allocation are compared: given a target average bit rate, distortion may be set to be constant with wavelength. Otherwise, it may be allocated proportionally to the noise level of each band, according to the virtually lossless protocol. Comparisons with the uncompressed originals show that the average SAM of radiance spectra is minimized by constant distortion allocation to radiance data. However, variable distortion allocation according to the virtually lossless protocol yields significantly lower SAM in case of reflectance spectra obtained from compressed radiance data, if compared with the constant distortion allocation at the same compression ratio.
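    For reference, the spectral and radiometric distortion measures named above can be written as follows, where x is an original pixel spectrum (a vector across bands) and x-hat its decompressed counterpart:

    \mathrm{SAM}(\mathbf{x},\hat{\mathbf{x}})
      = \arccos\!\left(
          \frac{\langle \mathbf{x},\hat{\mathbf{x}}\rangle}
               {\lVert \mathbf{x}\rVert_{2}\,\lVert \hat{\mathbf{x}}\rVert_{2}}
        \right),
    \qquad
    \mathrm{MAD}(\mathbf{x},\hat{\mathbf{x}}) = \max_{i}\,\lvert x_{i}-\hat{x}_{i}\rvert .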

  19. A New Multistage Lattice Vector Quantization with Adaptive Subband Thresholding for Image Compression

    Directory of Open Access Journals (Sweden)

    J. Soraghan

    2007-01-01

    Full Text Available Lattice vector quantization (LVQ) reduces coding complexity and computation due to its regular structure. A new multistage LVQ (MLVQ) using an adaptive subband thresholding technique is presented and applied to image compression. The technique concentrates on reducing the quantization error of the quantized vectors by “blowing out” the residual quantization errors with an LVQ scale factor. The significant coefficients of each subband are identified using an optimum adaptive thresholding scheme for each subband. A variable length coding procedure using Golomb codes is used to compress the codebook index which produces a very efficient and fast technique for entropy coding. Experimental results using the MLVQ are shown to be significantly better than JPEG 2000 and the recent VQ techniques for various test images.

  20. A New Multistage Lattice Vector Quantization with Adaptive Subband Thresholding for Image Compression

    Directory of Open Access Journals (Sweden)

    Salleh MFM

    2007-01-01

    Full Text Available Lattice vector quantization (LVQ) reduces coding complexity and computation due to its regular structure. A new multistage LVQ (MLVQ) using an adaptive subband thresholding technique is presented and applied to image compression. The technique concentrates on reducing the quantization error of the quantized vectors by "blowing out" the residual quantization errors with an LVQ scale factor. The significant coefficients of each subband are identified using an optimum adaptive thresholding scheme for each subband. A variable length coding procedure using Golomb codes is used to compress the codebook index which produces a very efficient and fast technique for entropy coding. Experimental results using the MLVQ are shown to be significantly better than JPEG 2000 and the recent VQ techniques for various test images.

  1. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel...

  2. COMPARATIVE ANALYSIS OF THE COMPRESSIVE STRENGTH ...

    African Journals Online (AJOL)

    Previous analysis showed that cavity size and number on one hand and combinations thickness affect the compressive strength of hollow sandcrete blocks. Series arrangement of the cavities is common but parallel arrangement has been recommended. This research performed a comparative analysis of the compressive ...

  3. HVS scheme for DICOM image compression: Design and comparative performance evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Prabhakar, B. [Biomedical and Engineering Division, Indian Institute of Technology Madras, Chennai 600036, Tamil Nadu (India)]. E-mail: prabhakarb@iitm.ac.in; Reddy, M. Ramasubba [Biomedical and Engineering Division, Indian Institute of Technology Madras, Chennai 600036, Tamil Nadu (India)

    2007-07-15

    Advanced digital imaging technology in the medical domain demands efficient and effective DICOM image compression for progressive image transmission and picture archival. Here a compression system, which incorporates sensitivities of the HVS coded with SPIHT quantization, is discussed. The weighting factors derived from the luminance CSF are used to transform the wavelet subband coefficients to reflect the characteristics of the HVS in the best possible manner. The Mannos et al. and Daly HVS models have been used and the results are compared. To evaluate the performance, the Eskicioglu chart metric is considered. Experiments were done on both monochrome and color DICOM images of MRI, CT, OT, and CR, as well as natural and benchmark images. Images reconstructed with our technique showed improvement in visual quality and in the Eskicioglu chart metric at the same compression ratios. Also, the Daly HVS model based compression shows better performance, perceptually and quantitatively, when compared to the Mannos et al. model. Further, the 'bior4.4' wavelet filter provides better results than the 'db9' filter for this compression system. Results give strong evidence that, under common boundary conditions, our technique achieves competitive visual quality, compression ratio and coding/decoding time, when compared with jpeg2000 (kakadu).

  4. An Image Compression Scheme in Wireless Multimedia Sensor Networks Based on NMF

    Directory of Open Access Journals (Sweden)

    Shikang Kong

    2017-02-01

    Full Text Available With the goal of addressing the issue of image compression in wireless multimedia sensor networks with high recovered quality and low energy consumption, an image compression and transmission scheme based on non-negative matrix factorization (NMF) is proposed in this paper. First, the NMF algorithm theory is studied. Then, a collaborative mechanism of image capture, blocking, compression and transmission is completed. Camera nodes capture images and send them to ordinary nodes, which use an NMF algorithm for image compression. Compressed images are received from the ordinary nodes by the cluster head node and transmitted to the station. The station takes on the image restoration. Simulation results show that, compared with the JPEG2000 and singular value decomposition (SVD) compression schemes, the proposed scheme has a higher quality of recovered images and lower total node energy consumption. It is beneficial to reduce the burden of energy consumption and prolong the life of the whole network system, which has great significance for practical applications of WMSNs.
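    As an illustration of the NMF step, the sketch below reshapes an image into a matrix of non-negative 8x8 block vectors and factorizes it at a small rank, so that only the two factors would need to be stored or transmitted. The block size, the rank and the use of scikit-learn's NMF solver are assumptions for demonstration; the quantization and transmission protocol of the scheme above are not reproduced.

    import numpy as np
    from sklearn.decomposition import NMF

    def blocks_to_matrix(img: np.ndarray, bs: int = 8) -> np.ndarray:
        """Stack 8x8 image blocks as columns of a non-negative matrix."""
        h, w = (img.shape[0] // bs) * bs, (img.shape[1] // bs) * bs
        cols = [img[i:i + bs, j:j + bs].reshape(-1)
                for i in range(0, h, bs) for j in range(0, w, bs)]
        return np.stack(cols, axis=1).astype(float)   # shape (bs*bs, n_blocks)

    def nmf_compress(img: np.ndarray, rank: int = 8):
        X = blocks_to_matrix(img)
        model = NMF(n_components=rank, init="nndsvda", max_iter=400, random_state=0)
        W = model.fit_transform(X)          # (bs*bs, rank) basis "patterns"
        H = model.components_               # (rank, n_blocks) per-block weights
        ratio = X.size / (W.size + H.size)  # crude size reduction before any quantization
        return W, H, ratio

    if __name__ == "__main__":
        rng = np.random.default_rng(9)
        image = rng.integers(0, 256, size=(64, 64)).astype(float)
        W, H, ratio = nmf_compress(image)
        mse = np.mean((blocks_to_matrix(image) - W @ H) ** 2)
        psnr = 10 * np.log10(255 ** 2 / mse)
        print(f"rank-8 NMF: ~{ratio:.1f}x fewer coefficients, PSNR {psnr:.1f} dB")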

  5. Region segmentation techniques for object-based image compression: a review

    Science.gov (United States)

    Schmalz, Mark S.; Ritter, Gerhard X.

    2004-10-01

    Image compression based on transform coding appears to be approaching an asymptotic bit rate limit for application-specific distortion levels. However, a new compression technology, called object-based compression (OBC), promises improved rate-distortion performance at higher compression ratios. OBC involves segmentation of image regions, followed by efficient encoding of each region's content and boundary. Advantages of OBC include efficient representation of commonly occurring textures and shapes in terms of pointers into a compact codebook of region contents and boundary primitives. This facilitates fast decompression via substitution, at the cost of codebook search in the compression step. Segmentation cost and error are significant disadvantages in current OBC implementations. Several innovative techniques have been developed for region segmentation, including (a) moment-based analysis, (b) texture representation in terms of a syntactic grammar, and (c) transform coding approaches such as the wavelet-based compression used in MPEG-7 or JPEG-2000. Region-based characterization with variance templates is better understood, but lacks the locality of wavelet representations. In practice, tradeoffs are made between representational fidelity, computational cost, and storage requirement. This paper overviews current techniques for automatic region segmentation and representation, especially those that employ wavelet classification and region growing techniques. Implementational discussion focuses on complexity measures and performance metrics such as segmentation error and computational cost.

  6. Atomic effect algebras with compression bases

    International Nuclear Information System (INIS)

    Caragheorgheopol, Dan; Tkadlec, Josef

    2011-01-01

    Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.

  7. Lossless Compression of Video using Motion Compensation

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1998-01-01

    Summary form only given. We investigate lossless coding of video using predictive coding and motion compensation. The new coding methods combine state-of-the-art lossless techniques such as JPEG (context-based prediction and bias cancellation, Golomb coding) with high resolution motion field estimation, 3D predictors, prediction using one or multiple (k) previous images, predictor-dependent error modelling, and selection of the motion field by code length. We treat the problem of precision of the motion field as one of choosing among a number of predictors. This way, we can incorporate 3D-predictors and intra-frame predictors as well. As proposed by Ribas-Corbera (see PhD thesis, University of Michigan, 1996), we use bi-linear interpolation in order to achieve sub-pixel precision of the motion field. Using more reference images is another way of achieving higher accuracy of the match. The motion information is coded with the same algorithm as is used for the data. For slow pan or slow zoom sequences, coding methods that use multiple previous images perform up to 20% better than motion compensation using a single previous image and up to 40% better than coding that does not utilize motion compensation...

  8. Compression of multispectral fluorescence microscopic images based on a modified set partitioning in hierarchal trees

    Science.gov (United States)

    Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek

    2009-02-01

    Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; these optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT, along with correlation of insignificant wavelet coefficients, has been proposed to further exploit redundancy in high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship amongst the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands

  9. Compressing pathology whole-slide images using a human and model observer evaluation

    Directory of Open Access Journals (Sweden)

    Elizabeth A Krupinski

    2012-01-01

    Full Text Available Introduction: We aim to determine to what degree whole-slide images (WSI) can be compressed without impacting the ability of the pathologist to distinguish benign from malignant tissues. An underlying goal is to demonstrate the utility of a visual discrimination model (VDM) for predicting observer performance. Materials and Methods: A total of 100 regions of interest (ROIs) from a breast biopsy whole-slide images at five levels of JPEG 2000 compression (8:1, 16:1, 32:1, 64:1, and 128:1) plus the uncompressed version were shown to six pathologists to determine benign versus malignant status. Results: There was a significant decrease in performance as a function of compression ratio (F = 14.58, P < 0.0001). The visibility of compression artifacts in the test images was predicted using a VDM. Just-noticeable difference (JND) metrics were computed for each image, including the mean, median, ≥90th percentiles, and maximum values. For comparison, PSNR (peak signal-to-noise ratio) and Structural Similarity (SSIM) were also computed. Image distortion metrics were computed as a function of compression ratio and averaged across test images. All of the JND metrics were found to be highly correlated and differed primarily in magnitude. Both PSNR and SSIM decreased with bit rate, correctly reflecting a loss of image fidelity with increasing compression. Observer performance as measured by the Receiver Operating Characteristic area under the curve (ROC Az) was nearly constant up to a compression ratio of 32:1, then decreased significantly for 64:1 and 128:1 compression levels. The initial decline in Az occurred around a mean JND of 3, Minkowski JND of 4, and 99th percentile JND of 6.5. Conclusion: Whole-slide images may be compressible to relatively high levels before impacting WSI interpretation performance. The VDM metrics correlated well with artifact conspicuity and human performance.

  10. Compressing pathology whole-slide images using a human and model observer evaluation

    Science.gov (United States)

    Krupinski, Elizabeth A.; Johnson, Jeffrey P.; Jaw, Stacey; Graham, Anna R.; Weinstein, Ronald S.

    2012-01-01

    Introduction: We aim to determine to what degree whole-slide images (WSI) can be compressed without impacting the ability of the pathologist to distinguish benign from malignant tissues. An underlying goal is to demonstrate the utility of a visual discrimination model (VDM) for predicting observer performance. Materials and Methods: A total of 100 regions of interest (ROIs) from a breast biopsy whole-slide images at five levels of JPEG 2000 compression (8:1, 16:1, 32:1, 64:1, and 128:1) plus the uncompressed version were shown to six pathologists to determine benign versus malignant status. Results: There was a significant decrease in performance as a function of compression ratio (F = 14.58, P < 0.0001). The visibility of compression artifacts in the test images was predicted using a VDM. Just-noticeable difference (JND) metrics were computed for each image, including the mean, median, ≥90th percentiles, and maximum values. For comparison, PSNR (peak signal-to-noise ratio) and Structural Similarity (SSIM) were also computed. Image distortion metrics were computed as a function of compression ratio and averaged across test images. All of the JND metrics were found to be highly correlated and differed primarily in magnitude. Both PSNR and SSIM decreased with bit rate, correctly reflecting a loss of image fidelity with increasing compression. Observer performance as measured by the Receiver Operating Characteristic area under the curve (ROC Az) was nearly constant up to a compression ratio of 32:1, then decreased significantly for 64:1 and 128:1 compression levels. The initial decline in Az occurred around a mean JND of 3, Minkowski JND of 4, and 99th percentile JND of 6.5. Conclusion: Whole-slide images may be compressible to relatively high levels before impacting WSI interpretation performance. The VDM metrics correlated well with artifact conspicuity and human performance. PMID:22616029
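
    A quick way to reproduce the two conventional fidelity metrics reported alongside the JND scores is sketched below, assuming scikit-image is available; the arrays are synthetic stand-ins for an uncompressed ROI and its JPEG 2000-decoded counterpart.

      # Sketch: PSNR and SSIM between a reference ROI and its compressed-decoded version.
      import numpy as np
      from skimage.metrics import peak_signal_noise_ratio, structural_similarity

      ref = np.random.randint(0, 256, (512, 512), dtype=np.uint8)        # stand-in reference ROI
      noise = np.random.randint(-5, 6, ref.shape)                        # stand-in coding error
      test = np.clip(ref.astype(int) + noise, 0, 255).astype(np.uint8)   # stand-in decoded ROI

      print(peak_signal_noise_ratio(ref, test, data_range=255))
      print(structural_similarity(ref, test, data_range=255))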

  11. Evaluation of onboard hyperspectral-image compression techniques for a parallel push-broom sensor

    Energy Technology Data Exchange (ETDEWEB)

    Briles, S.

    1996-04-01

    A single hyperspectral imaging sensor can produce frames with spatially-continuous rows of differing, but adjacent, spectral wavelength. If the frame sample-rate of the sensor is such that subsequent hyperspectral frames are spatially shifted by one row, then the sensor can be thought of as a parallel (in wavelength) push-broom sensor. An examination of data compression techniques for such a sensor is presented. The compression techniques are intended to be implemented onboard a space-based platform and to have implementation speeds that match the data rate of the sensor. Data partitions examined extend from individually operating on a single hyperspectral frame to operating on a data cube comprising the two spatial axes and the spectral axis. Compression algorithms investigated utilize JPEG-based image compression, wavelet-based compression and differential pulse code modulation. Algorithm performance is quantitatively presented in terms of root-mean-squared error and root-mean-squared correlation coefficient error. Implementation issues are considered in algorithm development.

  12. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    Science.gov (United States)

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aimed at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity when compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
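
    The encoder/decoder split can be illustrated with the minimal sketch below: each block is measured by a short, fat matrix and reconstructed with a single matrix multiply. The Gaussian measurement matrix, fixed measurement rate, and pseudo-inverse decoder are stand-ins for the paper's adaptive block measurement and MMSE-learned projection matrix.

      # Sketch of block-based compressive sensing with a purely linear decoder.
      import numpy as np

      B = 16                                 # block size
      subrate = 0.25                         # measurement rate M/N
      N = B * B
      M = int(subrate * N)

      rng = np.random.default_rng(0)
      Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix (encoder)
      P = np.linalg.pinv(Phi)                          # fixed linear reconstruction matrix (decoder)

      def encode(block):
          return Phi @ block.reshape(-1)               # M measurements per block

      def decode(y):
          return (P @ y).reshape(B, B)                 # one matrix multiply, suitable for real time

      block = rng.integers(0, 256, size=(B, B)).astype(float)
      rec = decode(encode(block))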

  13. A Multiresolution Image Completion Algorithm for Compressing Digital Color Images

    Directory of Open Access Journals (Sweden)

    R. Gomathi

    2014-01-01

    Full Text Available This paper introduces a new framework for image coding that uses an image inpainting method. In the proposed algorithm, the input image is subjected to image analysis to remove some of the portions purposefully. At the same time, edges are extracted from the input image and passed to the decoder in compressed form. The edges transmitted to the decoder act as assistant information and help the inpainting process fill the missing regions at the decoder. Textural synthesis and a new shearlet inpainting scheme based on the theory of the p-Laplacian operator are proposed for image restoration at the decoder. Shearlets have been mathematically proven to represent distributed discontinuities such as edges better than traditional wavelets and are a suitable tool for edge characterization. This novel shearlet p-Laplacian inpainting model can effectively reduce the staircase effect of the Total Variation (TV) inpainting model while still preserving edges as well as the TV model does. In the proposed scheme, a neural network is employed to enhance the compression ratio for image coding. Test results are compared with JPEG 2000 and H.264 intra-coding algorithms. The results show that the proposed algorithm works well.

  14. Unveil Compressed Sensing

    OpenAIRE

    Liu, Xiteng

    2013-01-01

    We discuss the applicability of compressed sensing theory. We take a genuine look at both experimental results and theoretical works. We answer the following questions: 1) What can compressed sensing really do? 2) More importantly, why?

  15. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix

    2014-06-22

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  16. Microbunching and RF Compression

    International Nuclear Information System (INIS)

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-01-01

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  17. Hyperspectral data compression

    CERN Document Server

    Motta, Giovanni; Storer, James A

    2006-01-01

    Provides a survey of results in the field of compression of remote sensed 3D data, with a particular interest in hyperspectral imagery. This work covers topics such as compression architecture, lossless compression, lossy techniques, and more. It also describes a lossless algorithm based on vector quantization.

  18. Compression test assembly

    Science.gov (United States)

    Kariotis, A. H. (Inventor)

    1973-01-01

    A compression test assembly is described which prevents buckling of small diameter rigid specimens undergoing compression testing and permits attachment of extensometers for strain measurements. The test specimen is automatically aligned and laterally supported when compressive force is applied to the end caps and transmitted to the test specimen during testing.

  19. Compressible effect algebras

    Science.gov (United States)

    Gudder, Stan

    2004-08-01

    We define a special type of additive map J on an effect algebra E called a compression. We call J(1) the focus of J and if p is the focus of a compression then p is called a projection. The set of projections in E is denoted by P(E). A compression J is direct if J(a) ≤ a for all a ∈ E. We show that direct compressions are equivalent to projections onto components of cartesian products. An effect algebra E is said to be compressible if every compression on E is uniquely determined by its focus and every compression on E has a supplement. We define and characterize the commutant C(p) of a projection p and show that a compression with focus p is direct if and only if C(p) = E. We show that P(E) is an orthomodular poset. It is proved that the cartesian product of effect algebras is compressible if and only if each component is compressible. We then consider compressible sequential effect algebras, Lüders maps and conditional probabilities.

  20. Bond graph modeling of centrifugal compression systems

    OpenAIRE

    Uddin, Nur; Gravdahl, Jan Tommy

    2015-01-01

    A novel approach to model unsteady fluid dynamics in a compressor network by using a bond graph is presented. The model is intended in particular for compressor control system development. First, we develop a bond graph model of a single compression system. Bond graph modeling offers a different perspective to previous work by modeling the compression system based on energy flow instead of fluid dynamics. Analyzing the bond graph model explains the energy flow during compressor surge. Two pri...

  1. Optimum image compression rate maintaining diagnostic image quality of digital intraoral radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Song, Ju Seop; Koh, Kwang Joon [Dept. of Oral and Maxillofacial Radiology and Institute of Oral Bio Science, School of Dentistry, Chonbuk National University, Chonju (Korea, Republic of)

    2000-12-15

    The aims of the present study are to determine the optimum compression rate in terms of file size reduction and diagnostic quality of the images after compression, and to evaluate the transmission speed of the original and compressed images. The material consisted of 24 extracted human premolars and molars. The occlusal and proximal surfaces of the teeth had a clinical disease spectrum that ranged from sound to varying degrees of fissure discoloration and cavitation. The images from the Digora system were exported in TIFF, and the images from conventional intraoral film were scanned and digitized in TIFF by a Nikon SF-200 scanner (Nikon, Japan). Six compression factors were chosen and applied on the basis of the results from a pilot study. The total number of images to be assessed was 336. Three radiologists assessed the occlusal and proximal surfaces of the teeth with a 5-rank scale, and each surface was finally diagnosed as either sound or carious by one expert oral pathologist. Sensitivity, specificity, and kappa values for diagnostic agreement were calculated. The area (Az) values under the ROC curve were also calculated, and paired t-tests and one-way ANOVA were performed. Thereafter, the transmission time of the image files at each compression level was compared with that of the original image files. No significant difference was found between the original and the corresponding compressed images up to a 7% (1:14) compression ratio for both occlusal and proximal caries (p<0.05). JPEG3 (1:14) image files were transmitted more than 10 times faster than the original image files while maintaining the diagnostic information in the image. The 1:14 compressed image files may therefore be used instead of the originals, reducing storage needs and transmission time.

  2. Compressive Force With 2-Screw and 3-Screw Subtalar Joint Arthrodesis With Headless Compression Screws.

    Science.gov (United States)

    Matsumoto, Takumi; Glisson, Richard R; Reidl, Markus; Easley, Mark E

    2016-12-01

    Joint compression is an essential element of successful arthrodesis. Although subtalar joint compression generated by conventional screws has been quantified in the laboratory, compression obtainable with headless screws that rely on variable thread pitch to achieve bony contact has not been assessed. This study measured subtalar joint compression achieved by 2 posteriorly placed contemporary headless, variable-pitch screws, and quantified additional compression gained by placing a third screw anteriorly. Ten unpaired fresh-frozen cadaveric subtalar joints were fixed sequentially using 2 diverging posterior screws (one directed into the talar dome, the other into the talar neck), 2 parallel posterior screws (both ending in the talar dome), and 2 parallel screws with an additional anterior screw inserted from the plantar calcaneus into the talar neck. Joint compression was quantified directly during screw insertion using a novel custom-built measuring device. The mean compression generated by 2 diverging posterior screws was 246 N. Two parallel posterior screws produced 294 N of compression, and augmentation of that construct with a third, anterior screw increased compression to 345 N. Compression achieved with the 2-screw constructs was slightly less than that reported previously for subtalar joint fixation with 2 conventional lag screws, but was comparable when a third screw was added. Under controlled testing conditions, 2 tapered, variable-pitch screws generated somewhat less compression than previously reported for 2-screw fixation with conventional headed screws. A third screw placed anteriorly increased compression significantly. Because headless screws are advantageous where prominent screw heads are problematic, such as the load-bearing surface of the foot, their effectiveness compared to other screws should be established to provide an objective basis for screw selection. Augmenting fixation with an anterior screw may be desirable when conditions for fusion are suboptimal. © The Author

  3. Adiabatic compression of ion rings

    International Nuclear Information System (INIS)

    Larrabee, D.A.; Lovelace, R.V.

    1982-01-01

    A study has been made of the compression of collisionless ion rings in an increasing external magnetic field, B_e = ẑ B_e(t), by numerically implementing a previously developed kinetic theory of ring compression. The theory is general in that there is no limitation on the ring geometry or the compression ratio, λ ≡ B_e(final)/B_e(initial) ≥ 1. However, the motion of a single particle in an equilibrium is assumed to be completely characterized by its energy H and canonical angular momentum P_θ, in the absence of a third constant of the motion. The present computational work assumes that plasma currents are negligible, as is appropriate for a low-temperature collisional plasma. For a variety of initial ring geometries and initial distribution functions (having a single value of P_θ), it is found that the parameters for "fat", small-aspect-ratio rings follow general scaling laws over a large range of compression ratios, 1 < λ < 10³: the ring radius varies as λ^(-1/2); the average single-particle energy as λ^0.72; the root-mean-square energy spread as λ^1.1; and the total current as λ^0.79. The field reversal parameter is found to saturate at values typically between 2 and 3. For large compression ratios the current density is found to "hollow out". This hollowing tends to improve the interchange stability of an embedded low-β plasma. The implications of these scaling laws for fusion reactor systems are discussed.

  4. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app

  5. zlib compression library

    OpenAIRE

    Gailly, Jean-loup; Adler, Mark

    2004-01-01

    (taken from http://www.gzip.org/ on 2004-12-01) zlib is designed to be a free, general-purpose, legally unencumbered -- that is, not covered by any patents -- lossless data-compression library for use on virtually any computer hardware and operating system. The zlib data format is itself portable across platforms. Unlike the LZW compression method used in Unix compress(1) and in the GIF image format, the compression method currently used in zlib essentially never expands the data. (LZW ca...

  6. Enabling Near Real-Time Remote Search for Fast Transient Events with Lossy Data Compression

    Science.gov (United States)

    Vohl, Dany; Pritchard, Tyler; Andreoni, Igor; Cooke, Jeffrey; Meade, Bernard

    2017-09-01

    We present a systematic evaluation of JPEG2000 (ISO/IEC 15444) as a transport data format to enable rapid remote searches for fast transient events as part of the Deeper Wider Faster programme. The Deeper Wider Faster programme uses 20 telescopes from radio to gamma rays to perform simultaneous and rapid-response follow-up searches for fast transient events on millisecond-to-hours timescales. Deeper Wider Faster programme search demands have a set of constraints that is becoming common amongst large collaborations. Here, we focus on the rapid optical data component of the Deeper Wider Faster programme led by the Dark Energy Camera at Cerro Tololo Inter-American Observatory. Each Dark Energy Camera image has 70 charge-coupled devices in total, saved as a 1.2 gigabyte FITS file. Near real-time data processing and fast transient candidate identifications-in minutes for rapid follow-up triggers on other telescopes-requires computational power exceeding what is currently available on-site at Cerro Tololo Inter-American Observatory. In this context, data files need to be transmitted rapidly to a foreign location for supercomputing post-processing, source finding, visualisation and analysis. This step in the search process poses a major bottleneck, and reducing the data size helps accommodate faster data transmission. To maximise our gain in transfer time and still achieve our science goals, we opt for lossy data compression-keeping in mind that raw data is archived and can be evaluated at a later time. We evaluate how lossy JPEG2000 compression affects the process of finding transients, and find only a negligible effect for compression ratios up to 25:1. We also find a linear relation between compression ratio and the mean estimated data transmission speed-up factor. Adding highly customised compression and decompression steps to the science pipeline considerably reduces the transmission time-validating its introduction to the Deeper Wider Faster programme science pipeline and

  7. Anisotropic Concrete Compressive Strength

    DEFF Research Database (Denmark)

    Gustenhoff Hansen, Søren; Jørgensen, Henrik Brøner; Hoang, Linh Cao

    2017-01-01

    When the load carrying capacity of existing concrete structures is (re-)assessed it is often based on compressive strength of cores drilled out from the structure. Existing studies show that the core compressive strength is anisotropic; i.e. it depends on whether the cores are drilled parallel...

  8. Compressive multi-mode superresolution display

    KAUST Repository

    Heide, Felix

    2014-01-01

    Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previously, research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high resolution image. © 2014 Optical Society of America.

  9. Shock absorbing properties of toroidal shells under compression, 3

    International Nuclear Information System (INIS)

    Sugita, Yuji

    1985-01-01

    The author has previously presented the static load-deflection relations of a toroidal shell subjected to axisymmetric compression between rigid plates and those of its outer half when subjected to lateral compression. In both these cases, the analytical method was based on the incremental Rayleigh-Ritz method. In this paper, the effects of compression angle and strain rate on the load-deflection relations of the toroidal shell are investigated for its use as a shock absorber for the radioactive material shipping cask, which must keep its structural integrity even after accidental falls at any angle. Static compression tests have been carried out at four angles of compression, 10°, 20°, 50° and 90°, and the applications of the preceding analytical method have been discussed. Dynamic compression tests have also been performed using a free-falling drop hammer. The results are compared with those of the static compression tests. (author)

  10. Lossless data compression for improving the performance of a GPU-based beamformer.

    Science.gov (United States)

    Lok, U-Wai; Fan, Gang-Wei; Li, Pai-Chi

    2015-04-01

    The powerful parallel computation ability of a graphics processing unit (GPU) makes it feasible to perform dynamic receive beamforming. However, a real-time GPU-based beamformer requires a high data rate to transfer radio-frequency (RF) data from hardware to software memory, as well as from central processing unit (CPU) to GPU memory. There are data compression methods (e.g. Joint Photographic Experts Group (JPEG)) available for the hardware front end to reduce data size, alleviating the data transfer requirement of the hardware interface. Nevertheless, the required decoding time may even be larger than the transmission time of the original data, in turn degrading the overall performance of the GPU-based beamformer. This article proposes and implements a lossless compression-decompression algorithm, which enables compression and decompression of data in parallel. By this means, the data transfer requirement of the hardware interface and the transmission time of CPU-to-GPU data transfers are reduced, without sacrificing image quality. In simulation results, the compression ratio reached around 1.7. The encoder design of our lossless compression approach requires low hardware resources and reasonable latency in a field-programmable gate array. In addition, the transmission time of transferring data from CPU to GPU with the parallel decoding process improved threefold, as compared with transferring the original uncompressed data. These results show that our proposed lossless compression plus parallel decoder approach not only mitigates the transmission bandwidth requirement to transfer data from the hardware front end to the software system but also reduces the transmission time for CPU-to-GPU data transfer. © The Author(s) 2014.

  11. Interband distortion allocation in lossy compression of hyperspectral imagery: impact on global distortion metrics and discrimination of materials

    Science.gov (United States)

    Lastri, Cinzia; Aiazzi, Bruno; Baronti, Stefano; Alparone, Luciano

    2007-10-01

    The problem of distortion allocation varying with wavelength in lossy compression of hyperspectral imagery is investigated. Distortion is generally measured either as maximum absolute deviation (MAD) for near-lossless methods, e.g. differential pulse code modulation (DPCM), or as mean square error (MSE) for lossy methods (e.g. spectral decorrelation followed by JPEG 2000). Also the absolute angular error, or spectral angle mapper (SAM), is used to quantify spectral distortion. A band add-on (BAO) technique was recently introduced to calculate a modified version of SAM. Spectral bands are iteratively selected in order to increase the angular separation between two pixel spectra by exploiting a mathematical decomposition of SAM. As a consequence, only a subset of the original hyperspectral bands contributes to the new distance metrics, referred to as BAO-SAM, whose operational definition guarantees its monotonicity as the number of bands increases. Two strategies of interband distortion allocation are compared: given a target average bit rate, distortion, either MAD or MSE, may be set to be constant across wavelengths. Otherwise it may be allocated proportionally to the noise level on each band, according to the virtually-lossless protocol. Thus, a different quantization step size, depending on the estimated standard deviation of the noise, is used to quantize either prediction residuals (DPCM) or wavelet coefficients (JPEG 2000) of each spectral band, thereby determining band-varying MAD/MSE values. Comparisons with the uncompressed originals show that the average spectral angle mapper (SAM) is minimized by constant distortion allocation. Conversely, the average BAO-SAM is minimized by the noise-adjusted variable spectral distortion allocation according to the virtually lossless protocol. Preliminary results of simulations performed on reflectance data obtained from compressed radiance data show that, for a given compression ratio, the virtually
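
    For reference, the baseline SAM between an original and a reconstructed pixel spectrum is simply the angle between the two vectors in band space; a minimal sketch with made-up spectra follows. The BAO-SAM variant described above additionally selects the subset of bands that maximizes this angle.

      # Spectral angle mapper (SAM) between two pixel spectra x and y.
      import numpy as np

      def sam(x, y, eps=1e-12):
          """Angular distance (radians) between spectra x and y."""
          cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + eps)
          return np.arccos(np.clip(cos, -1.0, 1.0))

      x = np.array([0.21, 0.35, 0.48, 0.52, 0.47])   # original spectrum (toy values)
      y = np.array([0.20, 0.36, 0.47, 0.53, 0.46])   # spectrum after lossy coding (toy values)
      print(sam(x, y))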

  12. A comparative study of lossless compression algorithms on multispectral imager data

    Science.gov (United States)

    Grossberg, Michael; Gottipati, Srikanth; Gladkova, Irina; Rabinowitz, Malka; Alabi, Paul; George, Tence; Pacheco, Amnia

    2009-05-01

    This paper reports a comparative study of current lossless compression algorithms for data from a representative selection of satellite-based Earth science multispectral imagers. The study includes the performance of compression algorithms on the Advanced Very High Resolution Radiometer (AVHRR), SEVIRI, the Moderate Resolution Imaging Spectroradiometer (MODIS) imager, as well as a subset of MODIS bands as a proxy for the upcoming GOES-R series. SEVIRI, aboard the ESA/EUMETSAT operated Meteosat Second Generation (MSG) satellites, is a geostationary imager. The AVHRR aboard the NOAA Polar Orbiting Environmental Satellites and MODIS aboard the NASA Terra and Aqua satellites have polar orbits. Thus this study presents representatives from both polar and geostationary orbiting imagers. The imagers included have sensors for both reflected and emissive radiance. We also note that the older satellites have coarser quantizations, and we present our conclusions on the impact on compression ratios. Faced with the enormous and growing volume of data produced by the current generation of imagers, with their faster scanning, finer spatial resolution, and greater spectral resolution, this study provides a comparison of current compression algorithms as a baseline for future work. With growing volumes of satellite Earth science multispectral imager data, it becomes increasingly important to evaluate which compression algorithms are most appropriate for data management in transmission and archiving. This comparative compression study uses a wide range of standard implementations of the leading lossless compression algorithms. Examples include image compression algorithms such as PNG and JPEG2000, and widely-used file compression formats such as BZIP2 and 7z. This study includes a comparison with the Consultative Committee for Space Data Systems (CCSDS) recommended Szip software, which uses the extended-Rice lossless compression algorithm, as well as the most recent recommended compression standard which
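
    A toy version of such a comparison can be run with the codecs in the Python standard library, as sketched below; the synthetic 16-bit band stands in for real granules, and the CCSDS/Szip and image-format codecs used in the study are outside the standard library, so ratios here are only illustrative.

      # Compress the same byte buffer with several general-purpose lossless codecs
      # and report the compression ratios (higher is better).
      import bz2, lzma, zlib
      import numpy as np

      rng = np.random.default_rng(1)
      band = rng.normal(2000, 50, size=(512, 512)).astype(np.uint16)  # synthetic, noisy band
      raw = band.tobytes()

      for name, compress in [("zlib", lambda d: zlib.compress(d, 9)),
                             ("bz2",  lambda d: bz2.compress(d, 9)),
                             ("lzma", lambda d: lzma.compress(d))]:
          print(name, round(len(raw) / len(compress(raw)), 2))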

  13. Efficient Learning of Image Super-Resolution and Compression Artifact Removal with Semi-Local Gaussian Processes.

    Science.gov (United States)

    Kwon, Younghee; Kim, Kwang In; Tompkin, James; Kim, Jin Hyung; Theobalt, Christian

    2015-09-01

    Improving the quality of degraded images is a key problem in image processing, but the breadth of the problem leads to domain-specific approaches for tasks such as super-resolution and compression artifact removal. Recent approaches have shown that a general approach is possible by learning application-specific models from examples; however, learning models sophisticated enough to generate high-quality images is computationally expensive, and so specific per-application or per-dataset models are impractical. To solve this problem, we present an efficient semi-local approximation scheme to large-scale Gaussian processes. This allows efficient learning of task-specific image enhancements from example images without reducing quality. As such, our algorithm can be easily customized to specific applications and datasets, and we show the efficiency and effectiveness of our approach across five domains: single-image super-resolution for scene, human face, and text images, and artifact removal in JPEG- and JPEG 2000-encoded images.

  14. Testing framework for compression methods

    OpenAIRE

    Štoček, Ondřej

    2008-01-01

    There are many algorithms for data compression. These compression methods often achieve different compression rates and also use computer resources differently. In practice, a combination of compression methods is usually used instead of a standalone method. A software tool can therefore be developed in which existing compression methods are easily combined into new ones and then tested. The main goal of this work is to propose such a tool and implement it. A further goal is to implement a basic library ...

  15. Three novel lossless image compression schemes for medical image archiving and telemedicine.

    Science.gov (United States)

    Wang, J; Naghdy, G

    2000-01-01

    In this article, three novel lossless image compression schemes, hybrid predictive/vector quantization lossless image coding (HPVQ), shape-adaptive differential pulse code modulation (DPCM) (SADPCM), and shape-VQ-based hybrid ADPCM/DCT (ADPCMDCT) are introduced. All are based on the lossy coder, VQ. However, VQ is used in these new schemes as a tool to improve the decorrelation efficiency of those traditional lossless predictive coders such as DPCM, adaptive DPCM (ADPCM), and multiplicative autoregressive coding (MAR). A new kind of VQ, shape-VQ, is also introduced in this article. It provides predictive coders useful information regarding the shape characters of image block. These enhance the performance of predictive coders in the context of lossless coding. Simulation results of the proposed coders applied in lossless medical image compression are presented. Some leading lossless techniques such as DPCM, hierarchical interfold (HINT), CALIC, and the standard lossless JPEG are included in the tests. Promising results show that all these three methods are good candidates for lossless medical image compression.

  16. A complexity-efficient and one-pass image compression algorithm for wireless capsule endoscopy.

    Science.gov (United States)

    Liu, Gang; Yan, Guozheng; Zhao, Shaopeng; Kuang, Shuai

    2015-01-01

    As an important part of the application-specific integrated circuit (ASIC) in wireless capsule endoscopy (WCE), an efficient compressor is crucial for image transmission and power consumption. In this paper, a complexity-efficient and one-pass image compression method is proposed for WCE with Bayer-format images. The algorithm is modified from the standard lossless algorithm (JPEG-LS). Firstly, a causal interpolation is used to acquire the context template of the current pixel to be encoded, thus determining different encoding modes. Secondly, a gradient predictor, instead of the median predictor, is designed to improve the accuracy of the predictions. Thirdly, the gradient context is quantized to obtain the context index (Q). Finally, the encoding process is carried out in the selected mode. The experimental and comparative results show that our proposed near-lossless compression method provides a high compression rate (2.315) and high image quality (46.31 dB) compared with other methods. It performs well in the designed wireless capsule system and could be applied in other image fields.
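
    For context, the standard JPEG-LS median edge detector (MED) predictor that the proposed gradient predictor replaces can be written in a few lines; the neighbour values in the example are arbitrary.

      # JPEG-LS MED predictor: a = left, b = above, c = above-left neighbour of the current pixel.
      def med_predict(a, b, c):
          if c >= max(a, b):
              return min(a, b)        # likely a horizontal or vertical edge
          if c <= min(a, b):
              return max(a, b)
          return a + b - c            # smooth region: planar prediction

      print(med_predict(100, 104, 101))   # -> 103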

  17. Radio frequency pulse compression

    International Nuclear Information System (INIS)

    Farkas, Z.D.

    1988-12-01

    High gradients require high peak powers. One possible way to generate high peak powers is to generate a relatively long pulse at a relatively low power and compress it into a shorter pulse with higher peak power. It is possible to compress before dc-to-rf conversion, as is done for the relativistic klystron, or after dc-to-rf conversion, as is done with SLED. In this note only radio frequency pulse compression (RFPC) is considered. Three methods of RFPC will be discussed: SLED, BEC, and REC. 3 refs., 8 figs., 1 tab

  18. ICER-3D Hyperspectral Image Compression Software

    Science.gov (United States)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.

  19. Compressed Video Segmentation

    National Research Council Canada - National Science Library

    Kobla, Vikrant; Doermann, David S; Rosenfeld, Azriel

    1996-01-01

    ... changes in content and camera motion. The analysis is performed in the compressed domain using available macroblock and motion vector information, and if necessary, discrete cosine transform (DCT) information...

  20. Mechanical chest compressions.

    Science.gov (United States)

    Pomeroy, Matthew

    2012-09-13

    The authors of this study state that there is a lack of evidence about the efficiency of mechanical devices in producing chest compressions as an adjunct to resuscitation during cardiorespiratory arrest.

  1. Biaxial compression test technique

    Science.gov (United States)

    Hansard, E. T.

    1975-01-01

    Fixture and technique have been developed for predicting behavior of stiffened skin panels under biaxial compressive loading. Tester can load test panel independently in longitudinal and transverse directions. Data can also be obtained in combined mode.

  2. Muon cooling: longitudinal compression.

    Science.gov (United States)

    Bao, Yu; Antognini, Aldo; Bertl, Wilhelm; Hildebrandt, Malte; Khaw, Kim Siang; Kirch, Klaus; Papa, Angela; Petitjean, Claude; Piegsa, Florian M; Ritt, Stefan; Sedlak, Kamil; Stoykov, Alexey; Taqqu, David

    2014-06-06

    A 10 MeV/c positive muon beam was stopped in helium gas of a few mbar in a magnetic field of 5 T. The muon "swarm" has been efficiently compressed from a length of 16 cm down to a few mm along the magnetic field axis (longitudinal compression) using electrostatic fields. The simulation reproduces the low energy interactions of slow muons in helium gas. Phase space compression occurs on the order of microseconds, compatible with the muon lifetime of 2 μs. This paves the way for the preparation of a high-quality low-energy muon beam, with an increase in phase space density relative to a standard surface muon beam of 10^7. The achievable phase space compression by using only the longitudinal stage presented here is of the order of 10^4.

  3. An algorithm for compression of bilevel images.

    Science.gov (United States)

    Reavy, M D; Boncelet, C G

    2001-01-01

    This paper presents the block arithmetic coding for image compression (BACIC) algorithm: a new method for lossless bilevel image compression which can replace JBIG, the current standard for bilevel image compression. BACIC uses the block arithmetic coder (BAC): a simple, efficient, easy-to-implement, variable-to-fixed arithmetic coder, to encode images. BACIC models its probability estimates adaptively based on a 12-bit context of previous pixel values; the 12-bit context serves as an index into a probability table whose entries are used to compute p(1) (the probability of a bit equaling one), the probability measure BAC needs to compute a codeword. In contrast, the Joint Bilevel Image Experts Group (JBIG) uses a patented arithmetic coder, the IBM QM-coder, to compress image data and a predetermined probability table to estimate its probability measures. JBIG, though, has not yet been commercially implemented; instead, JBIG's predecessor, the Group 3 fax (G3), continues to be used. BACIC achieves compression ratios comparable to JBIG's and is introduced as an alternative to the JBIG and G3 algorithms. BACIC's overall compression ratio is 19.0 for the eight CCITT test images (compared to JBIG's 19.6 and G3's 7.7), is 16.0 for 20 additional business-type documents (compared to JBIG's 16.0 and G3's 6.74), and is 3.07 for halftone images (compared to JBIG's 2.75 and G3's 0.50).
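
    A minimal sketch of this kind of adaptive context modelling follows: a 12-bit context assembled from previously coded pixels indexes a table of counts from which p(1) is estimated for the arithmetic coder. The context template and the count update rule here are illustrative assumptions, not BACIC's exact definitions.

      # Sketch: adaptive p(1) estimation from a 12-bit causal context for a bilevel image.
      import numpy as np

      ones  = np.ones(1 << 12)   # Laplace-style initial counts for bit value 1
      zeros = np.ones(1 << 12)   # ... and for bit value 0

      def context(img, r, c):
          """Pack 12 previously coded neighbours into a 12-bit index (0 outside the image)."""
          offs = [(-1, -2), (-1, -1), (-1, 0), (-1, 1), (-1, 2), (0, -1),
                  (-2, -2), (-2, -1), (-2, 0), (-2, 1), (-2, 2), (0, -2)]
          idx = 0
          for dr, dc in offs:
              rr, cc = r + dr, c + dc
              bit = img[rr, cc] if rr >= 0 and 0 <= cc < img.shape[1] else 0
              idx = (idx << 1) | int(bit)
          return idx

      def p_one(ctx):
          return ones[ctx] / (ones[ctx] + zeros[ctx])   # estimate handed to the arithmetic coder

      def update(ctx, bit):
          (ones if bit else zeros)[ctx] += 1            # adapt counts after coding each pixel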

  4. Deterministic Compressed Sensing

    Science.gov (United States)

    2011-11-01

    Sparse recovery algorithms discussed include interior point methods, the Lasso modification to LARS, and homotopy methods; a later part of the work treats expander-based compressed sensing.

  5. Blind Compressed Sensing

    OpenAIRE

    Gleichman, Sivan; Eldar, Yonina C.

    2011-01-01

    The fundamental principle underlying compressed sensing is that a signal, which is sparse under some basis representation, can be recovered from a small number of linear measurements. However, prior knowledge of the sparsity basis is essential for the recovery process. This work introduces the concept of blind compressed sensing, which avoids the need to know the sparsity basis in both the sampling and the recovery process. We suggest three possible constraints on the sparsity basis that can ...

  6. Theoretical approaches to chemical dynamics in highly compressed fluids

    International Nuclear Information System (INIS)

    Calef, D.F.

    1987-01-01

    Methods that have been developed in the chemical physics community over the previous decade are applied to problems involving the dynamic chemical behavior of fluids under highly compressed conditions. The methods require detailed structural information about the environment seen by the reacting molecules. These methods are briefly reviewed. Examples for both statically compressed and shock conditions are discussed

  7. Terminology: resistance or stiffness for medical compression stockings?

    Directory of Open Access Journals (Sweden)

    André Cornu-Thenard

    2013-04-01

    Full Text Available Based on previous experimental work with medical compression stockings, it is proposed to restrict the term stiffness to measurements on the human leg and to speak instead of resistance when characterizing the elastic properties of compression hosiery in the textile laboratory.

  8. Normalized Compression Distance of Multisets with Applications.

    Science.gov (United States)

    Cohen, Andrew R; Vitányi, Paul M B

    2015-08-01

    Pairwise normalized compression distance (NCD) is a parameter-free, feature-free, alignment-free, similarity metric based on compression. We propose an NCD of multisets that is also metric. Previously, attempts to obtain such an NCD failed. For classification purposes it is superior to the pairwise NCD in accuracy and implementation complexity. We cover the entire trajectory from theoretical underpinning to feasible practice. It is applied to biological (stem cell, organelle transport) and OCR classification questions that were earlier treated with the pairwise NCD. With the new method we achieved significantly better results. The theoretic foundation is Kolmogorov complexity.
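
    The pairwise NCD that the multiset version generalizes can be approximated directly with any real compressor; the sketch below uses zlib as the compressor C, so the resulting distances are rough (zlib's window limits how much shared structure it can exploit).

      # Pairwise NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), with zlib as C.
      import zlib

      def clen(data: bytes) -> int:
          return len(zlib.compress(data, 9))

      def ncd(x: bytes, y: bytes) -> float:
          cx, cy, cxy = clen(x), clen(y), clen(x + y)
          return (cxy - min(cx, cy)) / max(cx, cy)

      print(ncd(b"abcabcabc" * 50, b"abcabcabc" * 50))      # near 0: highly similar inputs
      print(ncd(b"abcabcabc" * 50, bytes(range(256)) * 2))  # closer to 1: dissimilar inputs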

  9. Normalized Compression Distance of Multisets with Applications

    Science.gov (United States)

    Cohen, Andrew R.; Vitányi, Paul M.B.

    2015-01-01

    Pairwise normalized compression distance (NCD) is a parameter-free, feature-free, alignment-free, similarity metric based on compression. We propose an NCD of multisets that is also metric. Previously, attempts to obtain such an NCD failed. For classification purposes it is superior to the pairwise NCD in accuracy and implementation complexity. We cover the entire trajectory from theoretical underpinning to feasible practice. It is applied to biological (stem cell, organelle transport) and OCR classification questions that were earlier treated with the pairwise NCD. With the new method we achieved significantly better results. The theoretic foundation is Kolmogorov complexity. PMID:26352998

  10. Channel box compression device

    International Nuclear Information System (INIS)

    Nakamizo, Hiroshi; Tanaka, Yuki.

    1996-01-01

    The device of the present invention reduces the volume of spent fuel channel boxes of power plant facilities to eliminate secondary wastes, suppress generation of radiation sources and improve storage space efficiency. The device has a box-like shape. A support frame is disposed on the lateral side of the box for supporting spent channel boxes. A horizontal transferring unit and a vertical transferring compression unit driven by a driving mechanism are disposed in the support frame. Further, the compression unit may have a rotational compression roller so as to move freely in the transferring unit. In addition, the transferring unit and the driving mechanism may be disposed outside of pool water. With such a constitution, since spent channel boxes are compressed and bent by horizontal movement of the transferring unit and the vertical movement of the compression unit, no cut pieces or cut powders are generated. Further, if the transferring unit and the driving mechanism are disposed outside of the pool water, it is not necessary to make them waterproof, which facilitates the maintenance. (I.S.)

  11. Optical pulse compression

    International Nuclear Information System (INIS)

    Glass, A.J.

    1975-01-01

    The interest in using large lasers to achieve a very short and intense pulse for generating fusion plasma has provided a strong impetus to reexamine the possibilities of optical pulse compression at high energy. Pulse compression allows one to generate pulses of long duration (minimizing damage problems) and subsequently compress optical pulses to achieve the short pulse duration required for specific applications. The ideal device for carrying out this program has not been developed. Of the two approaches considered, the Gires--Tournois approach is limited by the fact that the bandwidth and compression are intimately related, so that the group delay dispersion times the square of the bandwidth is about unity for all simple Gires--Tournois interferometers. The Treacy grating pair does not suffer from this limitation, but is inefficient because diffraction generally occurs in several orders and is limited by the problem of optical damage to the grating surfaces themselves. Nonlinear and parametric processes were explored. Some pulse compression was achieved by these techniques; however, they are generally difficult to control and are not very efficient. (U.S.)

  12. Data compression considerations for detectors with local intelligence

    International Nuclear Information System (INIS)

    Garcia-Sciveres, M; Wang, X

    2014-01-01

    This note summarizes the outcome of discussions about how data compression considerations apply to tracking detectors with local intelligence. The method for analyzing data compression efficiency is taken from a previous publication and applied to module characteristics from the WIT2014 workshop. We explore local intelligence and coupled layer structures in the language of data compression. In this context the original intelligent tracker concept of correlating hits to find matches of interest and discard others is just a form of lossy data compression. We now explore how these features (intelligence and coupled layers) can be exploited for lossless compression, which could enable full readout at higher trigger rates than previously envisioned, or even triggerless

  13. Celiac artery compression syndrome.

    Science.gov (United States)

    Kokotsakis, J N; Lambidis, C D; Lioulias, A G; Skouteli, E T; Bastounis, E A; Livesay, J J

    2000-04-01

    Celiac artery compression syndrome occurs when the median arcuate ligament of the diaphragm causes extrinsic compression of the celiac trunk. We report a case of a 65-year-old woman who presented with a three-month history of postprandial abdominal pain, nausea and some emesis, without weight loss. There was a bruit in the upper mid-epigastrium, and lateral aortic arteriography revealed a significant stenosis of the celiac artery. At operation, the celiac axis was found to be severely compressed anteriorly by fibers forming the inferior margin of the arcuate ligament of the diaphragm. The ligament was cut and a vein bypass from the supraceliac aorta to the distal celiac artery was performed. The patient remains well and free of symptoms two and a half years after the operation. In this report we discuss the indications for and therapeutic options in this syndrome, and we review the literature.

  14. Isentropic Compression of Argon

    International Nuclear Information System (INIS)

    Oona, H.; Solem, J.C.; Veeser, L.R.; Ekdahl, C.A.; Rodriquez, P.J.; Younger, S.M.; Lewis, W.; Turley, W.D.

    1997-01-01

    We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal

  15. Can a modified anterior external fixator provide posterior compression of AP compression type III pelvic injuries?

    Science.gov (United States)

    Sellei, Richard Martin; Schandelmaier, Peter; Kobbe, Philipp; Knobe, Matthias; Pape, Hans-Christoph

    2013-09-01

    Current anterior fixators can close a disrupted anterior pelvic ring. However, these anterior constructs cannot create posterior compressive forces across the sacroiliac joint. We explored whether a modified fixator could create such forces. We determined whether (1) an anterior external fixator with a second anterior articulation (X-frame) would provide posterior pelvic compression and (2) full pin insertion would deliver higher posterior compressive forces than half pin insertion. We simulated AP compression Type III instability with plastic pelvis models and tested the following conditions: (1) single-pin supraacetabular external fixator (SAEF) using half pin insertion (60 mm); (2) SAEF using full pin insertion (120 mm); (3) modified fixator with X-frame using half pin insertion; (4) modified fixator using full pin insertion; and (5) C-clamp. Standardized fracture compression in the anterior and posterior compartment was performed as in previous studies by Gardner. A force-sensitive sensor was placed in the symphysis and posterior pelvic ring before fracture reduction and the fractures were reduced. The symphyseal and sacroiliac compression loads of each application were measured. The SAEF exerted mean compressions of 13 N and 14 N to the posterior pelvic ring using half and full pin insertions, respectively. The modified fixator had mean posterior compressions of 174 N and 222 N with half and full pin insertions, respectively. C-clamp application exerted a mean posterior load of 407 N. Posterior compression on the pelvis was improved using an X-frame as an anterior fixation device in a synthetic pelvic fracture model. This additive device may improve the initial anterior and posterior stability in the acute management of unstable and life-threatening pelvic ring injuries.

  16. Anisotropic Concrete Compressive Strength

    DEFF Research Database (Denmark)

    Gustenhoff Hansen, Søren; Jørgensen, Henrik Brøner; Hoang, Linh Cao

    2017-01-01

    When the load carrying capacity of existing concrete structures is (re-)assessed it is often based on compressive strength of cores drilled out from the structure. Existing studies show that the core compressive strength is anisotropic; i.e. it depends on whether the cores are drilled parallel...... correlation to the curing time. The experiments show no correlation between the anisotropy and the curing time and a small strength difference between the two drilling directions. The literature shows variations on which drilling direction is strongest. Based on a Monte Carlo simulation of the expected...

  17. Image data compression investigation

    Science.gov (United States)

    Myrie, Carlos

    1989-01-01

    NASA's continuous communications systems growth has increased the demand for image transmission and storage. Research and analysis were conducted on various lossy and lossless advanced data compression techniques used to improve the efficiency of transmission and storage of high-volume satellite image data, such as pulse code modulation (PCM), differential PCM (DPCM), transform coding, hybrid coding, interframe coding, and adaptive techniques. In this presentation, the fundamentals of image data compression utilizing two techniques, pulse code modulation (PCM) and differential PCM (DPCM), are presented along with an application utilizing these two coding techniques.
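
    As an illustration of the DPCM technique named in the record (a minimal sketch, not the NASA implementation; the data, step size, and function names are invented for the example), each sample is predicted by the previously reconstructed value and only the quantized prediction error is stored:

        import numpy as np

        def dpcm_encode(samples, quant_step=4):
            """Toy DPCM: predict each sample by the previously reconstructed one
            and quantize the prediction error (lossy for quant_step > 1)."""
            residuals = np.empty_like(samples)
            prediction = 0
            for i, s in enumerate(samples):
                error = int(s) - prediction
                q = int(round(error / quant_step))   # quantized residual (what gets stored)
                residuals[i] = q
                prediction += q * quant_step         # decoder-side reconstruction
            return residuals

        def dpcm_decode(residuals, quant_step=4):
            out = np.empty_like(residuals)
            prediction = 0
            for i, q in enumerate(residuals):
                prediction += int(q) * quant_step
                out[i] = prediction
            return out

        row = np.array([100, 102, 101, 105, 110, 112], dtype=np.int32)
        print(dpcm_decode(dpcm_encode(row)))   # close to the original row, up to quantization error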

  18. Energy transfer in compressible magnetohydrodynamic turbulence

    Science.gov (United States)

    Grete, Philipp; O'Shea, Brian W.; Beckwith, Kris; Schmidt, Wolfram; Christlieb, Andrew

    2017-09-01

    Magnetic fields, compressibility, and turbulence are important factors in many terrestrial and astrophysical processes. While energy dynamics, i.e., how energy is transferred within and between kinetic and magnetic reservoirs, has been previously studied in the context of incompressible magnetohydrodynamic (MHD) turbulence, we extend shell-to-shell energy transfer analysis to the compressible regime. We derive four new transfer functions specifically capturing compressibility effects in the kinetic and magnetic cascade, and capturing energy exchange via magnetic pressure. To illustrate their viability, we perform and analyze four simulations of driven isothermal MHD turbulence in the sub- and supersonic regime with two different codes. On the one hand, our analysis reveals robust characteristics across regime and numerical method. For example, energy transfer between individual scales is local and forward for both cascades with the magnetic cascade being stronger than the kinetic one. Magnetic tension and magnetic pressure related transfers are less local and weaker than the cascades. We find no evidence for significant nonlocal transfer. On the other hand, we show that certain functions, e.g., the compressive component of the magnetic energy cascade, exhibit a more complex behavior that varies both with regime and numerical method. Having established a basis for the analysis in the compressible regime, the method can now be applied to study a broader parameter space.

  19. Temporal compressive sensing systems

    Science.gov (United States)

    Reed, Bryan W.

    2017-12-12

    Methods and systems for temporal compressive sensing are disclosed, where within each of one or more sensor array data acquisition periods, one or more sensor array measurement datasets comprising distinct linear combinations of time slice data are acquired, and where mathematical reconstruction allows for calculation of accurate representations of the individual time slice datasets.

  20. Compressive CFAR Radar Processing

    NARCIS (Netherlands)

    Anitori, L.; Rossum, W.L. van; Otten, M.P.G.; Maleki, A.; Baraniuk, R.

    2013-01-01

    In this paper we investigate the performance of a combined Compressive Sensing (CS) Constant False Alarm Rate (CFAR) radar processor under different interference scenarios using both the Cell Averaging (CA) and Order Statistic (OS) CFAR detectors. Using the properties of the Complex Approximate

  1. Compressive CFAR radar detection

    NARCIS (Netherlands)

    Anitori, L.; Otten, M.P.G.; Rossum, W.L. van; Maleki, A.; Baraniuk, R.

    2012-01-01

    In this paper we develop the first Compressive Sensing (CS) adaptive radar detector. We propose three novel architectures and demonstrate how a classical Constant False Alarm Rate (CFAR) detector can be combined with ℓ1-norm minimization. Using asymptotic arguments and the Complex Approximate

  2. Gas compression infrared generator

    International Nuclear Information System (INIS)

    Hug, W.F.

    1980-01-01

    A molecular gas is compressed in a quasi-adiabatic manner to produce pulsed radiation during each compressor cycle when the pressure and temperature are sufficiently high, and part of the energy is recovered during the expansion phase, as defined in U.S. Pat. No. 3,751,666; characterized by use of a cylinder with a reciprocating piston as a compressor

  3. Multiple snapshot compressive beamforming

    DEFF Research Database (Denmark)

    Gerstoft, Peter; Xenaki, Angeliki; Mecklenbrauker, Christoph F.

    2015-01-01

    For sound fields observed on an array, compressive sensing (CS) reconstructs the multiple source signals at unknown directions-of-arrival (DOAs) using a sparsity constraint. The DOA estimation is posed as an underdetermined problem expressing the field at each sensor as a phase-lagged superposition...

  4. Nonlinear Frequency Compression

    Science.gov (United States)

    Scollie, Susan; Glista, Danielle; Seelisch, Andreas

    2013-01-01

    Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an Audiologist in clinical NFC hearing aid fittings for achieving a balance between high frequency audibility and sound quality. PMID:23539261
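
    To make the two parameters discussed above concrete, the sketch below assumes a commonly cited illustrative form of nonlinear frequency compression, in which log-frequency distances above the cutoff are divided by the compression ratio; this form is an assumption for illustration and is not taken from the cited study, and commercial implementations differ in detail.

        def nfc_map(f_in_hz, cutoff_hz=2000.0, ratio=2.0):
            """Illustrative NFC curve: frequencies below the cutoff pass through;
            above it, the log-frequency distance from the cutoff is divided by the ratio."""
            if f_in_hz <= cutoff_hz:
                return f_in_hz
            return cutoff_hz * (f_in_hz / cutoff_hz) ** (1.0 / ratio)

        for f in (1000, 2000, 4000, 8000):
            print(f, "->", round(nfc_map(f)))   # e.g. 8000 Hz maps to 4000 Hz at ratio 2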

  5. Medical image compression using DCT-based subband decomposition and modified SPIHT data organization.

    Science.gov (United States)

    Chen, Yen-Yu

    2007-10-01

    This work proposes a novel bit-rate-reduction approach for reducing the memory required to store a remote-diagnosis image and for transmitting it rapidly. An 8x8 Discrete Cosine Transform (DCT) approach is adopted to perform subband decomposition, and a modified set partitioning in hierarchical trees (SPIHT) scheme is then employed for data organization and entropy coding. The translation function can store the detailed characteristics of an image. A simple transformation to obtain DCT spectrum data in a single frequency domain decomposes the original signal into various frequency domains that can be further compressed by a wavelet-based algorithm. In this scheme, insignificant DCT coefficients that correspond to a particular spatial location in the high-frequency subbands can be employed to reduce redundancy by applying a proposed combined function in association with the modified SPIHT. Simulation results showed that the embedded DCT-CSPIHT image compression reduced the computational complexity to only a quarter of that of the wavelet-based subband decomposition, and improved the quality of the reconstructed medical image, as measured by both the peak signal-to-noise ratio (PSNR) and perceptual results, over JPEG2000 and the original SPIHT at the same bit rate. Additionally, since 8x8 fast DCT hardware implementations are commercially available, the proposed DCT-CSPIHT can perform well in high-speed image coding and transmission.
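
    A minimal sketch of the first stage described above (an 8x8 block DCT whose coefficients are regrouped by frequency position into subband-like planes) is given below; the SPIHT-style coding itself is omitted, and the array layout and names are illustrative assumptions rather than the paper's implementation.

        import numpy as np
        from scipy.fftpack import dct

        def block_dct_to_subbands(img):
            """Apply an 8x8 2-D DCT per block and regroup coefficient positions (u, v)
            into 64 'subband' planes, as in DCT-based subband decomposition."""
            h, w = img.shape
            assert h % 8 == 0 and w % 8 == 0
            subbands = np.zeros((8, 8, h // 8, w // 8))
            for by in range(0, h, 8):
                for bx in range(0, w, 8):
                    block = img[by:by+8, bx:bx+8].astype(float)
                    coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
                    subbands[:, :, by//8, bx//8] = coeffs
            return subbands  # subbands[0, 0] is the DC (low-frequency) plane

        img = np.random.randint(0, 256, (64, 64))
        planes = block_dct_to_subbands(img)
        print(planes.shape, planes[0, 0].shape)  # (8, 8, 8, 8) (8, 8)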

  6. Visually improved image compression by combining a conventional wavelet-codec with texture modeling.

    Science.gov (United States)

    Nadenau, Marcus J; Reichel, Julien; Kunt, Murat

    2002-01-01

    Human observers are very sensitive to a loss of image texture in photo-realistic images. For example, a portrait image without the fine skin texture appears unnatural. Once the image is decomposed by a wavelet transformation, this texture is represented by many wavelet coefficients of low and medium amplitude. The conventional encoding of all these coefficients is very expensive in bitrate. Instead, such an unstructured or stochastic texture can be modeled by a noise process and be characterized with very few parameters. Thus, a hybrid scheme can be designed that encodes the structural image information with a conventional wavelet codec and the stochastic texture in a model-based manner. Such a scheme, called WITCH (Wavelet-based Image/Texture Coding Hybrid), is proposed. It implements such a hybrid coding approach while nevertheless preserving the features of progressive and lossless coding. Its low computational complexity and parameter coding costs of only 0.01 bpp make it a valuable extension of conventional codecs. A comparison with the JPEG2000 image compression standard showed that the WITCH scheme achieves the same subjective quality while increasing the compression ratio by more than a factor of two.

  7. Ultrahigh Pressure Dynamic Compression

    Science.gov (United States)

    Duffy, T. S.

    2017-12-01

    Laser-based dynamic compression provides a new opportunity to study the lattice structure and other properties of geological materials to ultrahigh pressure conditions ranging from 100 - 1000 GPa (1 TPa) and beyond. Such studies have fundamental applications to understanding the Earth's core as well as the interior structure of super-Earths and giant planets. This talk will review recent dynamic compression experiments using high-powered lasers on materials including Fe-Si, MgO, and SiC. Experiments were conducted at the Omega laser (University of Rochester) and the Linac Coherent Light Source (LCLS, Stanford). At Omega, laser drives as large as 2 kJ are applied over 10 ns to samples that are 50 microns thick. At peak compression, the sample is probed with quasi-monochromatic X-rays from a laser-plasma source and diffraction is recorded on image plates. At LCLS, shock waves are driven into the sample using a 40-J laser with a 10-ns pulse. The sample is probed with X-rays from the LCLS free-electron laser, providing 10^12 photons in a monochromatic pulse near 10 keV energy. Diffraction is recorded using pixel array detectors. By varying the delay between the laser and the X-ray beam, the sample can be probed at various times relative to the shock wave transiting the sample. By controlling the shape and duration of the incident laser pulse, either shock or ramp (shockless) loading can be produced. Ramp compression produces less heating than shock compression, allowing samples to be probed to ultrahigh pressures without melting. Results for iron alloys, oxides, and carbides provide new constraints on equations of state and phase transitions that are relevant to the interior structure of large, extrasolar terrestrial-type planets.

  8. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal

    2010-09-01

    The size of embedded software is increasing at a rapid pace. It is often challenging and time-consuming to fit the required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique that is based on Huffman coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique to improve the final compression ratio by taking advantage of both previous techniques. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of reencoding unused bits (we call them reencodable bits) in the instruction format for a specific application to improve the compression ratio. Reencoding those bits can reduce the size of decoding tables by up to 40%. Using this technique, we improve the final compression ratios in comparison to the first technique to 46% and 45% for ARM and MIPS, respectively (including all overhead incurred). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and we have applied each technique to two major embedded processor architectures
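
    The core idea above (splitting instruction words into smaller patterns and Huffman-coding the resulting, more repetitive symbol stream) can be sketched as follows; the instruction words and the byte-level split are hypothetical stand-ins, not the article's ISA-specific splitting or re-encoding rules.

        import heapq
        from collections import Counter

        def huffman_code(symbols):
            """Build a Huffman code (symbol -> bitstring) from a symbol stream."""
            freq = Counter(symbols)
            heap = [[w, i, {s: ''}] for i, (s, w) in enumerate(freq.items())]
            heapq.heapify(heap)
            tick = len(heap)
            while len(heap) > 1:
                w1, _, c1 = heapq.heappop(heap)
                w2, _, c2 = heapq.heappop(heap)
                merged = {s: '0' + code for s, code in c1.items()}
                merged.update({s: '1' + code for s, code in c2.items()})
                heapq.heappush(heap, [w1 + w2, tick, merged])
                tick += 1
            return heap[0][2]

        # Hypothetical 16-bit instruction words split into two 8-bit patterns before coding.
        instructions = [0x12E0, 0x12E4, 0x7FE0, 0x12E0, 0x12E8]
        patterns = [p for ins in instructions for p in ((ins >> 8) & 0xFF, ins & 0xFF)]
        code = huffman_code(patterns)
        bits = sum(len(code[p]) for p in patterns)
        print(f"{bits} bits vs {len(patterns) * 8} bits uncompressed")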

  9. Context Modeler for Wavelet Compression of Spectral Hyperspectral Images

    Science.gov (United States)

    Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.

  10. Learning random networks for compression of still and moving images

    Science.gov (United States)

    Gelenbe, Erol; Sungur, Mert; Cramer, Christopher

    1994-01-01

    Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.

  11. A checkpoint compression study for high-performance computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Ibtesham, Dewan [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science; Ferreira, Kurt B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Scalable System Software Dept.; Arnold, Dorian [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science

    2015-02-17

    As high-performance computing systems continue to increase in size and complexity, higher failure rates and increased overheads for checkpoint/restart (CR) protocols have raised concerns about the practical viability of CR protocols for future systems. Previously, compression has proven to be a viable approach for reducing checkpoint data volumes and, thereby, reducing CR protocol overhead and improving application performance. In this article, we further explore compression-based CR optimization by exploring its baseline performance and scaling properties, evaluating whether improved compression algorithms might lead to even better application performance, and comparing checkpoint compression against and alongside other software- and hardware-based optimizations. The highlights of our results are: (1) compression is a very viable CR optimization; (2) generic, text-based compression algorithms appear to perform near optimally for checkpoint data compression and faster compression algorithms will not lead to better application performance; (3) compression-based optimizations fare well against and alongside other software-based optimizations; and (4) while hardware-based optimizations outperform software-based ones, they are not as cost effective.
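
    A minimal sketch of the kind of measurement the article describes (compression ratio versus compression time for a generic text-based compressor applied to a checkpoint-like buffer) is shown below; zlib and the synthetic state buffer are stand-ins, not the compressors or applications evaluated in the study.

        import time
        import zlib
        import numpy as np

        def checkpoint_compression_stats(state: bytes, level: int = 6):
            """Return (compression ratio, seconds) for one checkpoint buffer."""
            t0 = time.perf_counter()
            compressed = zlib.compress(state, level)
            dt = time.perf_counter() - t0
            return len(state) / len(compressed), dt

        # Stand-in for application state: a float array with some structure (not pure noise).
        state = np.linspace(0.0, 1.0, 1_000_000).astype(np.float64).tobytes()
        for level in (1, 6, 9):
            ratio, secs = checkpoint_compression_stats(state, level)
            print(f"level {level}: ratio {ratio:.1f}x in {secs*1000:.1f} ms")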

  12. Detection of Modified Matrix Encoding Using Machine Learning and Compressed Sensing

    Energy Technology Data Exchange (ETDEWEB)

    Allen, Josef D [ORNL

    2011-01-01

    In recent years, driven by the development of steganalysis methods, steganographic algorithms have evolved rapidly with the ultimate goal of an unbreakable embedding procedure, resulting in recent steganographic algorithms with minimal distortion, exemplified by the recent family of Modified Matrix Encoding (MME) algorithms, which have proven most difficult to detect. In this paper we propose a compressed-sensing-based approach for intrinsic steganalysis to detect MME stego messages. Compressed sensing is a recently proposed mathematical framework to represent an image (in general, a signal) using a sparse representation relative to an overcomplete dictionary by minimizing the l1-norm of the resulting coefficients. Here we first learn a dictionary from a training set using the K-SVD algorithm so that performance is optimized; since JPEG images are processed in 8x8 blocks, the training examples are 8x8 patches rather than entire images, and this increases the generalization of compressed sensing. For each 8x8 block, we compute its sparse representation using the OMP (orthogonal matching pursuit) algorithm. Using the computed sparse representations, we train a support vector machine (SVM) to classify 8x8 blocks into stego and non-stego classes. Then, given an input image, we first divide it into 8x8 blocks. For each 8x8 block, we compute its sparse representation and classify it using the trained SVM. After all the 8x8 blocks are classified, the entire image is classified based on the majority rule over the 8x8 block classification results. This allows us to achieve a robust decision even when 8x8 blocks can be classified only with relatively low accuracy. We have tested the proposed algorithm on two datasets (the Corel-1000 dataset and a remote sensing image dataset) and have achieved 100% accuracy in classifying images, even though the accuracy of classifying 8x8 blocks is only 80.89%. Keywords: Compressed Sensing, Sparsity, Data Dictionary, Steganography
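
    A skeleton of the per-block pipeline described above (sparse-code 8x8 patches against a learned dictionary, classify the codes with an SVM, then take a majority vote per image) might look as follows; scikit-learn's MiniBatchDictionaryLearning stands in for K-SVD, and the random arrays are placeholders for real stego/cover patches.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        # Placeholder training patches (would be 8x8 JPEG blocks from stego / cover images).
        patches = rng.normal(size=(2000, 64))
        labels = rng.integers(0, 2, size=2000)          # 1 = stego block, 0 = cover block

        # Dictionary learning as a stand-in for K-SVD; OMP gives the sparse codes.
        dico = MiniBatchDictionaryLearning(n_components=128, transform_algorithm='omp',
                                           transform_n_nonzero_coefs=8, random_state=0)
        codes = dico.fit(patches).transform(patches)

        clf = SVC(kernel='rbf').fit(codes, labels)

        def classify_image(blocks):
            """Majority vote over per-block decisions, as in the final step described above."""
            votes = clf.predict(dico.transform(blocks))
            return int(votes.mean() >= 0.5)

        print(classify_image(rng.normal(size=(100, 64))))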

  13. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2013-01-01

    We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  14. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2015-01-01

    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  15. Fingerprints in compressed strings

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Cording, Patrick Hagge

    2017-01-01

    In this paper we show how to construct a data structure for a string S of size N compressed into a context-free grammar of size n that supports efficient Karp-Rabin fingerprint queries to any substring of S. That is, given indices i and j, the answer to a query is the fingerprint of the substring S[i,j]. We present the first O(n) space data structures that answer fingerprint queries without decompressing any characters. For Straight Line Programs (SLP) we get O(log N) query time, and for Linear SLPs (an SLP derivative that captures LZ78 compression and its variations) we get O(log log N) query time...
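
    The record concerns answering such queries on a grammar-compressed string without decompression; as background only, the sketch below shows the underlying Karp-Rabin fingerprint and how a substring fingerprint is composed from two prefix fingerprints on an uncompressed string (the modulus and base are arbitrary illustrative choices).

        P = (1 << 61) - 1          # a Mersenne prime modulus
        B = 256                    # base

        def prefix_fingerprints(s: bytes):
            """phi[k] = fingerprint of s[0:k] = sum_i s[i] * B^(k-1-i) mod P."""
            phi = [0]
            for c in s:
                phi.append((phi[-1] * B + c) % P)
            return phi

        def substring_fingerprint(phi, i, j):
            """Fingerprint of s[i:j], composed from two prefix fingerprints."""
            return (phi[j] - phi[i] * pow(B, j - i, P)) % P

        s = b"compressed strings"
        phi = prefix_fingerprints(s)
        # Same substring content -> same fingerprint:
        assert substring_fingerprint(phi, 0, 3) == substring_fingerprint(prefix_fingerprints(b"comb"), 0, 3)
        print(substring_fingerprint(phi, 11, 18))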

  16. Application of inversions to lossless image compression

    Science.gov (United States)

    Arnavut, Ziya

    1997-04-01

    Linear prediction schemes, such as that of the Joint Photographic Experts Group (JPEG), are simple and normally produce a residual sequence with lower zero-order entropy. Occasionally the entropy of the prediction error becomes greater than that of the original image. Such situations frequently occur when the image data have discrete gray levels located within certain intervals. To alleviate this problem, various authors have suggested different preprocessing methods. However, the techniques reported require two passes. We extend the definition of Lehmer-type inversions (Lehmer 1950 and 1964) from permutations to multiset permutations and present a one-pass algorithm based on inversions of a multiset permutation. We obtain comparable results when we apply JPEG, and even better results when we apply some other linear prediction schemes, on a preprocessed image, which is treated as a multiset permutation.
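
    The flavor of this preprocessing can be illustrated with a simple inversion table for a multiset permutation (for each position, the number of earlier elements that are strictly greater); this is an assumed, simplified variant for illustration, and the exact Lehmer-type definition used in the paper may differ.

        def inversion_table(seq):
            """For each position k, count earlier elements strictly greater than seq[k]."""
            return [sum(1 for x in seq[:k] if x > v) for k, v in enumerate(seq)]

        # A row of pixel values taking only a few distinct gray levels (a multiset permutation).
        row = [200, 200, 40, 200, 40, 90, 90, 200]
        print(inversion_table(row))   # [0, 0, 2, 0, 3, 3, 3, 0] -> many repeats, typically lower entropy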

  17. Pattern transitions in a compressible floating elastic sheet.

    Science.gov (United States)

    Oshri, Oz; Diamant, Haim

    2017-09-13

    Thin rigid sheets floating on a liquid substrate appear, for example, in coatings and surfactant monolayers. Upon uniaxial compression the sheet undergoes transitions from a compressed flat state to a periodic wrinkled pattern to a localized folded pattern. The stability of these states is determined by the in-plane elasticity of the sheet, its bending rigidity, and the hydrostatics of the underlying liquid. Wrinkles and folds, and the wrinkle-to-fold transition, were previously studied for incompressible sheets. In the present work we extend the theory to include finite compressibility. We analyze the details of the flat-to-wrinkle transition, the effects of compressibility on wrinkling and folding, and the compression field associated with pattern formation. The state diagram of the floating sheet including all three states is presented.

  18. Compressive CFAR Radar Processing

    OpenAIRE

    Anitori, Laura; Baraniuk, Richard; Maleki, Arian; Otten, Matern; van Rossum, Wim

    2013-01-01

    In this paper we investigate the performance of a combined Compressive Sensing (CS) Constant False Alarm Rate (CFAR) radar processor under different interference scenarios using both the Cell Averaging (CA) and Order Statistic (OS) CFAR detectors. Using the properties of the Complex Approximate Message Passing (CAMP) algorithm, we demonstrate that the behavior of the CFAR processor is independent of the combination with the non-linear recovery and therefore its performance can be predicted us...

  19. Universal Compressed Sensing

    OpenAIRE

    Jalali, Shirin; Poor, H. Vincent

    2014-01-01

    In this paper, the problem of developing universal algorithms for compressed sensing of stochastic processes is studied. First, Rényi's notion of information dimension (ID) is generalized to analog stationary processes. This provides a measure of complexity for such processes and is connected to the number of measurements required for their accurate recovery. Then a minimum entropy pursuit (MEP) optimization approach is proposed, and it is proven that it can reliably recover any stationary ...

  20. Kalman Filtered Compressed Sensing

    OpenAIRE

    Vaswani, Namrata

    2008-01-01

    We consider the problem of reconstructing time sequences of spatially sparse signals (with unknown and time-varying sparsity patterns) from a limited number of linear "incoherent" measurements, in real-time. The signals are sparse in some transform domain referred to as the sparsity basis. For a single spatial signal, the solution is provided by Compressed Sensing (CS). The question that we address is, for a sequence of sparse signals, can we do better than CS, if (a) the sparsity pattern of ...

  1. Scale adaptive compressive tracking.

    Science.gov (United States)

    Zhao, Pengpeng; Cui, Shaohui; Gao, Min; Fang, Dan

    2016-01-01

    Recently, the compressive tracking (CT) method (Zhang et al. in Proceedings of European conference on computer vision, pp 864-877, 2012) has attracted much attention due to its high efficiency, but it cannot deal well with scale-changing objects because of its constant tracking box. To address this issue, in this paper we propose a scale adaptive CT approach, which adaptively adjusts the scale of the tracking box with the size variation of the objects. Our method significantly improves CT in three aspects: Firstly, the scale of the tracking box is adaptively adjusted according to the size of the objects. Secondly, in the CT method, all the compressive features are assumed to be independent and to contribute equally to the classifier. In reality, different compressive features have different confidence coefficients. In our proposed method, the confidence coefficients of the features are computed and used to weight their contributions to the classifier. Finally, in the CT method, the learning parameter λ is constant, which can result in large tracking drift in cases of object occlusion or large-scale appearance variation. In our proposed method, a variable learning parameter λ is adopted, which can be adjusted according to the object appearance variation rate. Extensive experiments on the CVPR2013 tracking benchmark demonstrate the superior performance of the proposed method compared to state-of-the-art tracking algorithms.

  2. Compressed sensing electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Leary, Rowan, E-mail: rkl26@cam.ac.uk [Department of Materials Science and Metallurgy, University of Cambridge, Pembroke Street, Cambridge CB2 3QZ (United Kingdom); Saghi, Zineb; Midgley, Paul A. [Department of Materials Science and Metallurgy, University of Cambridge, Pembroke Street, Cambridge CB2 3QZ (United Kingdom); Holland, Daniel J. [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom)

    2013-08-15

    The recent mathematical concept of compressed sensing (CS) asserts that a small number of well-chosen measurements can suffice to reconstruct signals that are amenable to sparse or compressible representation. In addition to powerful theoretical results, the principles of CS are being exploited increasingly across a range of experiments to yield substantial performance gains relative to conventional approaches. In this work we describe the application of CS to electron tomography (ET) reconstruction and demonstrate the efficacy of CS–ET with several example studies. Artefacts present in conventional ET reconstructions such as streaking, blurring of object boundaries and elongation are markedly reduced, and robust reconstruction is shown to be possible from far fewer projections than are normally used. The CS–ET approach enables more reliable quantitative analysis of the reconstructions as well as novel 3D studies from extremely limited data. - Highlights: • Compressed sensing (CS) theory and its application to electron tomography (ET) is described. • The practical implementation of CS–ET is outlined and its efficacy demonstrated with examples. • High fidelity tomographic reconstruction is possible from a small number of images. • The CS–ET reconstructions can be more reliably segmented and analysed quantitatively. • CS–ET is applicable to different image content by choice of an appropriate sparsifying transform.

  3. On music genre classification via compressive sampling

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    Recent work [Chang2010] combines low-level acoustic features and random projection (referred to as "compressed sensing" in [Chang2010]) to create a music genre classification system showing an accuracy among the highest reported for a benchmark dataset. This not only contradicts previous findings that suggest low-level features are inadequate for addressing high-level musical problems, but also suggests that a random projection of features can improve classification. We reproduce this work and resolve these contradictions.

  4. An analysis of the efficacy of bag-valve-mask ventilation and chest compression during different compression-ventilation ratios in manikin-simulated paediatric resuscitation.

    Science.gov (United States)

    Kinney, S B; Tibballs, J

    2000-01-01

    The ideal chest compression and ventilation ratio for children during performance of cardiopulmonary resuscitation (CPR) has not been determined. The efficacy of chest compression and ventilation during compression-ventilation ratios of 5:1, 10:2 and 15:2 was examined. Eighteen nurses, working in pairs, were instructed to provide chest compression and bag-valve-mask ventilation for 1 min with each ratio in random order on a child-sized manikin. The subjects had been previously taught paediatric CPR within the last 3 or 5 months. The efficacy of ventilation was assessed by measurement of the expired tidal volume and the number of breaths provided. The rate of chest compression was guided by a metronome set at 100/min. The efficacy of chest compressions was assessed by measurement of the rate and depth of compression. There was no significant difference in the mean tidal volume or the percentage of effective chest compressions delivered for each compression-ventilation ratio. The number of breaths delivered was greatest with the ratio of 5:1. The percentage of effective chest compressions was equal with all three methods but the number of effective chest compressions was greatest with a ratio of 5:1. This study supports the use of a compression-ventilation ratio of 5:1 during two-rescuer paediatric cardiopulmonary resuscitation.

  5. Improved waste water vapor compression distillation technology. [for Spacelab

    Science.gov (United States)

    Johnson, K. L.; Nuccio, P. P.; Reveley, W. F.

    1977-01-01

    The vapor compression distillation process is a method of recovering potable water from crewman urine in a manned spacecraft or space station. A description is presented of the research and development approach to the solution of the various problems encountered with previous vapor compression distillation units. The design solutions considered are incorporated in the preliminary design of a vapor compression distillation subsystem. The new design concepts are available for integration in the next generation of support systems and, particularly, the regenerative life support evaluation intended for project Spacelab.

  6. Mammographic compression in Asian women.

    Directory of Open Access Journals (Sweden)

    Susie Lau

    Full Text Available To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women based on phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurement obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area, while reducing compression force caused no significant effects on image quality (p>0.05). Force-standardized protocol led to widely variable compression parameters in Asian women. Based on phantom study, it is feasible to reduce compression force up to 32.5% with minimal effects on image quality and MGD.

  7. Mammographic compression in Asian women

    Science.gov (United States)

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    Objectives To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women based on phantom study. Methods We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35–80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurement obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Results Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area, while reducing compression force had no significant effect on image quality (p>0.05). Conclusions Force-standardized protocol led to widely variable compression parameters in Asian women. Based on phantom study, it is feasible to reduce compression force up to 32.5% with minimal effects on image quality and MGD. PMID:28419125

  8. A compression-based model of musical learning

    DEFF Research Database (Denmark)

    Meredith, David

    interpretation of any given piece of music that depends not only on what music the listener has previously heard but also the order in which this previously heard music was presented. The model therefore suggests a pure compression-based explanation for musical memory, musical learning and individual differences...

  9. Compressive Transient Imaging

    KAUST Repository

    Sun, Qilin

    2017-04-01

    High-resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Currently, transient imaging methods suffer from low resolution or time-consuming mechanical scanning. We proposed a new method based on TCSPC and compressive sensing to achieve high-resolution transient imaging with a capture process of only a few seconds. A picosecond laser sends a series of equally spaced pulses while the synchronized SPAD camera's detection gate window has a precise phase delay at each cycle. After capturing enough points, we are able to assemble a whole signal. By inserting a DMD device into the system, we modulate all the frames of data using binary random patterns so that a super-resolution transient/3D image can be reconstructed later. Because the low fill factor of the SPAD sensor makes the compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We proposed a new CS reconstruction algorithm that is able to denoise at the same time for measurements suffering from Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time. Furthermore, it is not easy to reconstruct a high-resolution image with only a single sensor, whereas an array only needs to reconstruct small patches from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how the integration over the layers influences the image quality and that our algorithm works well when the measurements suffer from non-trivial Poisson noise. It is a breakthrough in the areas of both transient imaging and compressive sensing.

  10. Analysis by compression

    DEFF Research Database (Denmark)

    Meredith, David

    MEL is a geometric music encoding language designed to allow for musical objects to be encoded parsimoniously as sets of points in pitch-time space, generated by performing geometric transformations on component patterns. MEL has been implemented in Java and coupled with the SIATEC pattern discovery algorithm to allow for compact encodings to be generated automatically from in extenso note lists. The MEL-SIATEC system is founded on the belief that music analysis and music perception can be modelled as the compression of in extenso descriptions of musical objects.

  11. Compressive full waveform lidar

    Science.gov (United States)

    Yang, Weiyi; Ke, Jun

    2017-05-01

    To avoid a high-bandwidth detector, a fast A/D converter, and a large memory disk, a compressive full-waveform LIDAR system, which uses a temporally modulated laser instead of a pulsed laser, is studied in this paper. Full waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns are used to modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed to make measurements. The SPIRAL algorithm with a canonical basis is employed when Poisson noise is considered in the low-illumination condition.

  12. Compression test apparatus

    Science.gov (United States)

    Shanks, G. C. (Inventor)

    1981-01-01

    An apparatus for compressive testing of a test specimen may comprise vertically spaced upper and lower platen members between which a test specimen may be placed. The platen members are supported by a fixed support assembly. A load indicator is interposed between the upper platen member and the support assembly for supporting the total weight of the upper platen member and any additional weight which may be placed on it. Operating means are provided for moving the lower platen member upwardly toward the upper platen member whereby an increasing portion of the total weight is transferred from the load indicator to the test specimen.

  13. Compressive Fatigue in Wood

    DEFF Research Database (Denmark)

    Clorius, Christian Odin; Pedersen, Martin Bo Uhre; Hoffmeyer, Preben

    1999-01-01

    An investigation of fatigue failure in wood subjected to load cycles in compression parallel to grain is presented. Small clear specimens of spruce are taken to failure in square wave formed fatigue loading at a stress excitation level corresponding to 80% of the short term strength. Four frequencies ranging from 0.01 Hz to 10 Hz are used. The number of cycles to failure is found to be a poor measure of the fatigue performance of wood. Creep, maximum strain, stiffness and work are monitored throughout the fatigue tests. Accumulated creep is suggested identified with damage and a correlation...

  14. Metal Hydride Compression

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Terry A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bowman, Robert [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Barton [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Anovitz, Lawrence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jensen, Craig [Hawaii Hydrogen Carriers LLC, Honolulu, HI (United States)

    2017-07-01

    Conventional hydrogen compressors often contribute over half of the cost of hydrogen stations, have poor reliability, and have insufficient flow rates for a mature FCEV market. Fatigue associated with their moving parts including cracking of diaphragms and failure of seal leads to failure in conventional compressors, which is exacerbated by the repeated starts and stops expected at fueling stations. Furthermore, the conventional lubrication of these compressors with oil is generally unacceptable at fueling stations due to potential fuel contamination. Metal hydride (MH) technology offers a very good alternative to both conventional (mechanical) and newly developed (electrochemical, ionic liquid pistons) methods of hydrogen compression. Advantages of MH compression include simplicity in design and operation, absence of moving parts, compactness, safety and reliability, and the possibility to utilize waste industrial heat to power the compressor. Beyond conventional H2 supplies of pipelines or tanker trucks, another attractive scenario is the on-site generating, pressuring and delivering pure H2 at pressure (≥ 875 bar) for refueling vehicles at electrolysis, wind, or solar generating production facilities in distributed locations that are too remote or widely distributed for cost effective bulk transport. MH hydrogen compression utilizes a reversible heat-driven interaction of a hydride-forming metal alloy with hydrogen gas to form the MH phase and is a promising process for hydrogen energy applications [1,2]. To deliver hydrogen continuously, each stage of the compressor must consist of multiple MH beds with synchronized hydrogenation & dehydrogenation cycles. Multistage pressurization allows achievement of greater compression ratios using reduced temperature swings compared to single stage compressors. The objectives of this project are to investigate and demonstrate on a laboratory scale a two-stage MH hydrogen (H2) gas compressor with a

  15. The vertebral biomechanic previous and after kyphoplasty.

    Science.gov (United States)

    Pesce, V; Piazzolla, Andrea; Moretti, L; Carlucci, S; Parato, C; Maxy, P; Moretti, B

    2013-10-01

    The biomechanical understanding of increasing anterior column load with progressing kyphosis, leading to subsequent vertebral compression fracture (VCF), established the basic rationale for kyphoplasty. The lumbar spine can support a load of 500 kg along the axis of the vertebral body, and a bending moment of 20 Nm in flexion. Consequently, if this load is deviated forward by only 10 cm, the acceptable load is reduced to about 20 kg, so it is important to restore the anterior vertebral wall after a VCF: the authors describe the biomechanical modifications in the spine after kyphoplasty.
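
    Reading the figures quoted above as a simple moment balance (an illustrative calculation, not taken from the article):

        % 20 Nm flexion tolerance divided by a 0.10 m anterior lever arm:
        F_{\max} \approx \frac{M_{\max}}{d} = \frac{20\ \mathrm{N\,m}}{0.10\ \mathrm{m}} = 200\ \mathrm{N} \approx 20\ \mathrm{kg}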

  16. Revisiting the Effects of Compressibility on the Rayleigh-Taylor Instability

    International Nuclear Information System (INIS)

    Zhou Qianhong; Li Ding

    2007-01-01

    The effects of compressibility on the Rayleigh-Taylor instability (RTI) are investigated. It is shown that the controversy over compressibility effects in the previous studies is due to improper comparison, in which the density varying effect obscures the real role of compressibility. After eliminating the density varying effect, it is found that the compressibility destabilizes RTI in both the cases of constant density and exponentially varying density when M T or greater values of gravity g, and the increment in the growth rate produced by compressibility depends inversely on the pressure p or the ratio of specific heat Γ

  17. Energy transfer in compressible turbulence

    Science.gov (United States)

    Bataille, Francoise; Zhou, YE; Bertoglio, Jean-Pierre

    1995-01-01

    This letter investigates the compressible energy transfer process. We extend a methodology developed originally for incompressible turbulence and use databases from numerical simulations of a weak compressible turbulence based on Eddy-Damped-Quasi-Normal-Markovian (EDQNM) closure. In order to analyze the compressible mode directly, the well known Helmholtz decomposition is used. While the compressible component has very little influence on the solenoidal part, we found that almost all of the compressible turbulence energy is received from its solenoidal counterpart. We focus on the most fundamental building block of the energy transfer process, the triadic interactions. This analysis leads us to conclude that, at low turbulent Mach number, the compressible energy transfer process is dominated by a local radiative transfer (absorption) in both inertial and energy containing ranges.
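
    The Helmholtz decomposition referred to above splits the velocity field into a solenoidal (divergence-free) part and a compressible (curl-free) part:

        \mathbf{u} = \mathbf{u}^{s} + \mathbf{u}^{c},
        \qquad \nabla\cdot\mathbf{u}^{s} = 0,
        \qquad \nabla\times\mathbf{u}^{c} = \mathbf{0}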

  18. Compressive sensing in medical imaging.

    Science.gov (United States)

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

  19. Fingerprints in Compressed Strings

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2013-01-01

    The Karp-Rabin fingerprint of a string is a type of hash value that, due to its strong properties, has been used in many string algorithms. In this paper we show how to construct a data structure for a string S of size N compressed by a context-free grammar of size n that answers fingerprint queries. That is, given indices i and j, the answer to a query is the fingerprint of the substring S[i,j]. We present the first O(n) space data structures that answer fingerprint queries without decompressing any characters. For Straight Line Programs (SLP) we get O(log N) query time, and for Linear SLPs (an SLP derivative that captures LZ78 compression and its variations) we get O(log log N) query time. Hence, our data structures have the same time and space complexity as for random access in SLPs. We utilize the fingerprint data structures to solve the longest common extension problem in query time O(log N log ℓ) and O...

  20. Respiratory sounds compression.

    Science.gov (United States)

    Yadollahi, Azadeh; Moussavi, Zahra

    2008-04-01

    Recently, with the advances in digital signal processing, compression of biomedical signals has received great attention for telemedicine applications. In this paper, an adaptive transform coding-based method for compression of respiratory and swallowing sounds is proposed. Using special characteristics of respiratory sounds, the recorded signals are divided into stationary and nonstationary portions, and two different bit allocation methods (BAMs) are designed for each portion. The method was applied to the data of 12 subjects and its performance in terms of overall signal-to-noise ratio (SNR) values was calculated at different bit rates. The performance of different quantizers was also considered and the sensitivity of the quantizers to initial conditions has been alleviated. In addition, the fuzzy clustering method was examined for classifying the signal into different numbers of clusters and investigating the performance of the adaptive BAM with increasing the number of classes. Furthermore, the effects of assigning different numbers of bits for encoding stationary and nonstationary portions of the signal were studied. The adaptive BAM with variable number of bits was found to improve the SNR values of the fixed BAM by 5 dB. Last, the possibility of removing the training part for finding the parameters of adaptive BAMs for each individual was investigated. The results indicate that it is possible to use a predefined set of BAMs for all subjects and remove the training part completely. Moreover, the method is fast enough to be implemented for real-time application.
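
    As a generic illustration of transform-coding bit allocation of the kind the record refers to (not the paper's adaptive BAM; the variances and bit budget below are invented), the classical log-variance rule assigns more bits to higher-variance subbands:

        import numpy as np

        def allocate_bits(subband_vars, total_bits):
            """Classic log-variance bit allocation for transform coding:
            b_i = B/N + 0.5 * log2(var_i / geometric_mean(var)), rounded and clipped at 0."""
            n = len(subband_vars)
            gm = np.exp(np.mean(np.log(subband_vars)))
            raw = total_bits / n + 0.5 * np.log2(np.asarray(subband_vars) / gm)
            return np.clip(np.round(raw), 0, None).astype(int)

        # Hypothetical per-subband variances for a 'stationary' frame vs. a 'nonstationary' one.
        print(allocate_bits([4.0, 1.0, 0.25, 0.0625], total_bits=16))
        print(allocate_bits([2.0, 2.0, 1.5, 1.0], total_bits=16))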

  1. Free compression tube. Applications

    Science.gov (United States)

    Rusu, Ioan

    2012-11-01

    During flight, a vehicle's propulsion energy must overcome gravity, displace air masses along the vehicle trajectory, and cover both the energy lost to friction between the solid surface and the air and the kinetic energy imparted to air masses reflected by the impact with the flying vehicle. Flight optimization by increasing speed and reducing fuel consumption has directed research toward aerodynamics. Flying vehicle shapes obtained through wind tunnel studies optimize the impact with air masses and the airflow along the vehicle. Through energy balance studies for vehicles in flight, the author, Ioan Rusu, directed his research toward reducing the energy lost when the vehicle impacts air masses. In this respect, as compared to classical solutions of shaping flight vehicle aerodynamic surfaces to reduce the impact and friction with air masses, Ioan Rusu has invented a device which he named the free compression tube for rockets, registered with the State Office for Inventions and Trademarks of Romania, OSIM, deposit f 2011 0352. Mounted in front of flight vehicles, it significantly eliminates the impact and friction of air masses with the vehicle body: the air masses come into contact with the air inside the free compression tube, and air-solid friction is eliminated and replaced by air-to-air friction.

  2. Compressive Sensing DNA Microarrays

    Directory of Open Access Journals (Sweden)

    Sheikh Mona A

    2009-01-01

    Full Text Available Compressive sensing microarrays (CSMs) are DNA-based sensors that operate using group testing and compressive sensing (CS) principles. In contrast to conventional DNA microarrays, in which each genetic sensor is designed to respond to a single target, in a CSM, each sensor responds to a set of targets. We study the problem of designing CSMs that simultaneously account for both the constraints from CS theory and the biochemistry of probe-target DNA hybridization. An appropriate cross-hybridization model is proposed for CSMs, and several methods are developed for probe design and CS signal recovery based on the new model. Lab experiments suggest that in order to achieve accurate hybridization profiling, consensus probe sequences are required to have sequence homology of at least 80% with all targets to be detected. Furthermore, out-of-equilibrium datasets are usually as accurate as those obtained from equilibrium conditions. Consequently, one can use CSMs in applications in which only short hybridization times are allowed.

  3. Introduction to compressible fluid flow

    CERN Document Server

    Oosthuizen, Patrick H

    2013-01-01

    Introduction; The Equations of Steady One-Dimensional Compressible Flow; Some Fundamental Aspects of Compressible Flow; One-Dimensional Isentropic Flow; Normal Shock Waves; Oblique Shock Waves; Expansion Waves - Prandtl-Meyer Flow; Variable Area Flows; Adiabatic Flow with Friction; Flow with Heat Transfer; Linearized Analysis of Two-Dimensional Compressible Flows; Hypersonic and High-Temperature Flows; High-Temperature Gas Effects; Low-Density Flows; Bibliography; Appendices

  4. Mammographic compression in Asian women.

    Science.gov (United States)

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area in these Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.

  5. Adiabatic compression and radiative compression of magnetic fields

    International Nuclear Information System (INIS)

    Woods, C.H.

    1980-01-01

    Flux is conserved during mechanical compression of magnetic fields for both nonrelativistic and relativistic compressors. However, the relativistic compressor generates radiation, which can carry up to twice the energy content of the magnetic field compressed adiabatically. The radiation may be either confined or allowed to escape

  6. Placental complications after a previous cesarean section

    OpenAIRE

    Milošević Jelena; Lilić Vekoslav; Tasić Marija; Radović-Janošević Dragana; Stefanović Milan; Antić Vladimir

    2009-01-01

    Introduction The incidence of cesarean section has been rising in the past 50 years. With the increased number of cesarean sections, the number of pregnancies with a previous cesarean section rises as well. The aim of this study was to establish the influence of the previous cesarean section on the development of placental complications: placenta previa, placental abruption and placenta accreta, as well as to determine the influence of the number of previous cesarean sections on the development of these complications.

  7. TPC data compression

    Energy Technology Data Exchange (ETDEWEB)

    Berger, Jens; Frankenfeld, Ulrich; Lindenstruth, Volker; Plamper, Patrick; Roehrich, Dieter; Schaefer, Erich; W. Schulz, Markus; M. Steinbeck, Timm; Stock, Reinhard; Sulimma, Kolja; Vestboe, Anders; Wiebalck, Arne E-mail: wiebalck@kip.uni-heidelberg.de

    2002-08-21

    In the collisions of ultra-relativistic heavy ions in fixed-target and collider experiments, multiplicities of several tens of thousands of charged particles are generated. The main devices for tracking and particle identification are large-volume tracking detectors (TPCs) producing raw event sizes in excess of 100 Mbytes per event. With increasing data rates, storage becomes the main limiting factor in such experiments and, therefore, it is essential to represent the data in a way that is as concise as possible. In this paper, we present several compression schemes, such as entropy encoding, modified vector quantization, and data modeling techniques applied on real data from the CERN SPS experiment NA49 and on simulated data from the future CERN LHC experiment ALICE.

  8. Waves and compressible flow

    CERN Document Server

    Ockendon, Hilary

    2016-01-01

    Now in its second edition, this book continues to give readers a broad mathematical basis for modelling and understanding the wide range of wave phenomena encountered in modern applications.  New and expanded material includes topics such as elastoplastic waves and waves in plasmas, as well as new exercises.  Comprehensive collections of models are used to illustrate the underpinning mathematical methodologies, which include the basic ideas of the relevant partial differential equations, characteristics, ray theory, asymptotic analysis, dispersion, shock waves, and weak solutions. Although the main focus is on compressible fluid flow, the authors show how intimately gasdynamic waves are related to wave phenomena in many other areas of physical science.   Special emphasis is placed on the development of physical intuition to supplement and reinforce analytical thinking. Each chapter includes a complete set of carefully prepared exercises, making this a suitable textbook for students in applied mathematics, ...

  9. Tight bounds for top tree compression

    DEFF Research Database (Denmark)

    Bille, Philip; Fernstrøm, Finn; Gørtz, Inge Li

    2017-01-01

    We consider compressing labeled, ordered and rooted trees using DAG compression and top tree compression. We show that there exists a family of trees such that the size of the DAG compression is always a logarithmic factor smaller than the size of the top tree compression (even for an alphabet...

  10. Compressed Baryonic Matter of Astrophysics

    OpenAIRE

    Guo, Yanjun; Xu, Renxin

    2013-01-01

    Baryonic matter in the core of a massive and evolved star is compressed significantly to form a supra-nuclear object, and compressed baryonic matter (CBM) is then produced after a supernova. The state of cold matter at a few times the nuclear density is pedagogically reviewed, with significant attention paid to a possible quark-cluster state conjectured from an astrophysical point of view.

  11. Application specific compression : final report.

    Energy Technology Data Exchange (ETDEWEB)

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data gathering sensors, comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high frequency, low amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
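
    As a rough illustration of the wavelet-thresholding idea described above (zeroing the small, noise-like coefficients while keeping the large, low-frequency target signatures), the hypothetical Python sketch below uses the PyWavelets package; it is not the code developed in the report, and the wavelet, level and keep fraction are arbitrary choices.

        import numpy as np
        import pywt

        def wavelet_threshold_compress(signal, keep_fraction=0.2, wavelet="db4", level=4):
            # Decompose, zero all but the largest-magnitude coefficients, and reconstruct.
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            flat, slices = pywt.coeffs_to_array(coeffs)
            threshold = np.quantile(np.abs(flat), 1.0 - keep_fraction)
            flat = np.where(np.abs(flat) >= threshold, flat, 0.0)
            coeffs = pywt.array_to_coeffs(flat, slices, output_format="wavedec")
            return pywt.waverec(coeffs, wavelet), flat

        t = np.linspace(0.0, 1.0, 1024)
        target = np.exp(-((t - 0.5) ** 2) / 0.01)                      # broad, low-frequency "target"
        noisy = target + 0.05 * np.random.default_rng(1).standard_normal(t.size)
        reconstructed, kept = wavelet_threshold_compress(noisy)
        print(np.count_nonzero(kept) / kept.size)                      # roughly 0.2 of coefficients kept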

  12. Generalized massive optimal data compression

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin

    2018-02-01

    Data compression has become one of the cornerstones of modern astronomical data analysis, with the vast majority of analyses compressing large raw datasets down to a manageable number of informative summaries. In this paper we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
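
    A minimal worked example of the score-function compression described above, for the Gaussian case in which only the mean depends on the parameters: the N data values are compressed to n statistics t_a = (dμ/dθ_a)^T C^{-1} (d − μ), evaluated at fiducial parameter values. The linear model and all numbers below are illustrative assumptions, not taken from the paper.

        import numpy as np

        def score_compress(data, mu_fid, dmu_dtheta, cov_inv):
            # Compress N data points to n summaries: t = (dmu/dtheta) C^{-1} (data - mu).
            return dmu_dtheta @ (cov_inv @ (data - mu_fid))

        rng = np.random.default_rng(2)
        x = np.linspace(0.0, 1.0, 1000)
        a_true, b_true, sigma = 1.0, 2.0, 0.3
        data = a_true + b_true * x + sigma * rng.standard_normal(x.size)   # N = 1000 noisy samples

        mu_fid = 1.0 + 2.0 * x                          # mean evaluated at fiducial parameters (a, b)
        dmu_dtheta = np.vstack([np.ones_like(x), x])    # derivatives of the mean w.r.t. (a, b)
        cov_inv = np.eye(x.size) / sigma ** 2           # inverse covariance (parameter independent)
        print(score_compress(data, mu_fid, dmu_dtheta, cov_inv))   # n = 2 compressed statistics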

  13. Compressibility effect in vortex identification

    Czech Academy of Sciences Publication Activity Database

    Kolář, Václav

    2009-01-01

    Roč. 47, č. 2 (2009), s. 473-475 ISSN 0001-1452 R&D Projects: GA AV ČR IAA200600801 Institutional research plan: CEZ:AV0Z20600510 Keywords : vortex * vortex identification * compressible flows * compressibility effect Subject RIV: BK - Fluid Dynamics Impact factor: 0.990, year: 2009

  14. Images compression in nuclear medicine

    International Nuclear Information System (INIS)

    Rebelo, M.S.; Furuie, S.S.; Moura, L.

    1992-01-01

    The performance of two methods for image compression in nuclear medicine was evaluated: the exact LZW method and the approximate cosine-transform method. The results show that the approximate method produced images of agreeable quality for visual analysis, with compression rates considerably higher than those of the exact method. (C.G.C.)
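
    The hypothetical sketch below mirrors the comparison described above, contrasting an exact (lossless) byte-oriented compressor with an approximate cosine-transform scheme that discards small coefficients. zlib stands in for LZW, since both are dictionary-based lossless coders, the synthetic blob stands in for a nuclear medicine image, and the threshold is arbitrary.

        import zlib
        import numpy as np
        from scipy.fft import dctn, idctn

        x, y = np.meshgrid(np.linspace(-1.0, 1.0, 64), np.linspace(-1.0, 1.0, 64))
        image = (200.0 * np.exp(-(x ** 2 + y ** 2) / 0.3)).astype(np.uint8)   # blob-like synthetic "organ"

        # Exact path: every byte is recoverable exactly.
        lossless_ratio = image.nbytes / len(zlib.compress(image.tobytes()))

        # Approximate path: drop small DCT coefficients and accept a small reconstruction error.
        coefficients = dctn(image.astype(float), norm="ortho")
        coefficients[np.abs(coefficients) < 50.0] = 0.0
        lossy_ratio = coefficients.size / max(np.count_nonzero(coefficients), 1)
        reconstruction = idctn(coefficients, norm="ortho")

        print(round(lossless_ratio, 2), round(lossy_ratio, 2),
              round(float(np.mean(np.abs(reconstruction - image))), 2))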

  15. METHOD AND APPARATUS FOR INSPECTION OF COMPRESSED DATA PACKAGES

    DEFF Research Database (Denmark)

    2008-01-01

    A method for inspection of compressed data packages, which are transported over a data network, is provided. The data packages comprise a data package header containing control data for securing the correct delivery and interpretation of the package and a payload part containing data to be transferred. Among its steps, the method includes d) applying the determined compression scheme to at least one search pattern, which has previously been stored in a search key register, and e) comparing the compressed search pattern to the stream of data. The method can be carried out by dedicated hardware.

  16. Hugoniot and refractive indices of bromoform under shock compression

    Directory of Open Access Journals (Sweden)

    Q. C. Liu

    2018-01-01

    Full Text Available We investigate physical properties of bromoform (liquid CHBr3 including compressibility and refractive index under dynamic extreme conditions of shock compression. Planar shock experiments are conducted along with high-speed laser interferometry. Our experiments and previous results establish a linear shock velocity−particle velocity relation for particle velocities below 1.77 km/s, as well as the Hugoniot and isentropic compression curves up to ∼21 GPa. Shock-state refractive indices of CHBr3 up to 2.3 GPa or ∼26% compression, as a function of density, can be described with a linear relation and follows the Gladstone-Dale relation. The velocity corrections for laser interferometry measurements at 1550 nm are also obtained.
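
    For readers unfamiliar with the relations named in the abstract, the hypothetical sketch below evaluates a linear Hugoniot Us = c0 + s·up together with the Rankine-Hugoniot pressure jump P = ρ0·Us·up, and a Gladstone-Dale refractive index n = 1 + k·ρ. The coefficients c0, s and k are placeholders, not the values fitted in the paper; only the ambient bromoform density is a standard handbook figure.

        AMBIENT_DENSITY = 2.89          # approximate ambient density of bromoform, g/cm^3

        def shock_velocity(up, c0=1.9, s=1.6):
            # Linear Hugoniot: shock velocity as a function of particle velocity (km/s).
            return c0 + s * up

        def shock_pressure(up, rho0=AMBIENT_DENSITY, c0=1.9, s=1.6):
            # Rankine-Hugoniot jump condition: P = rho0 * Us * up (GPa for g/cm^3 and km/s).
            return rho0 * shock_velocity(up, c0, s) * up

        def gladstone_dale_index(rho, k=0.1):
            # Gladstone-Dale relation: refractive index grows linearly with density.
            return 1.0 + k * rho

        print(shock_pressure(1.0), gladstone_dale_index(AMBIENT_DENSITY))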

  17. Compressive Sensing in Communication Systems

    DEFF Research Database (Denmark)

    Fyhn, Karsten

    2013-01-01

    . The need for cheaper, smarter and more energy efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal...... acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what...... with using compressive sensing in communication systems. The main contribution of this thesis is two-fold: 1) a new compressive sensing hardware structure for spread spectrum signals, which is simpler than the current state-of-the-art, and 2) a range of algorithms for parameter estimation for the class...

  18. Preoperative screening: value of previous tests.

    Science.gov (United States)

    Macpherson, D S; Snow, R; Lofgren, R P

    1990-12-15

    To determine the frequency of tests done in the year before elective surgery that might substitute for preoperative screening tests, and to determine the frequency of test results that change from a normal value to a value likely to alter perioperative management. Retrospective cohort analysis of computerized laboratory data (complete blood count, sodium, potassium, and creatinine levels, prothrombin time, and partial thromboplastin time). Urban tertiary care Veterans Affairs Hospital. Consecutive sample of 1109 patients who had elective surgery in 1988. At admission, 7549 preoperative tests were done, 47% of which duplicated tests performed in the previous year. Of 3096 previous results that were normal as defined by the hospital reference range and done closest to, but before, admission (median interval, 2 months), 13 (0.4%; 95% CI, 0.2% to 0.7%) had repeat values outside a range considered acceptable for surgery. Most of the abnormalities were predictable from the patient's history, and most were not noted in the medical record. Of 461 previous tests that were abnormal, 78 (17%; CI, 13% to 20%) had repeat values at admission outside a range considered acceptable for surgery (P less than 0.001 for the comparison of the frequency of clinically important abnormalities between patients with normal and with abnormal previous results). Physicians evaluating patients preoperatively could safely substitute the previous test results analyzed in this study for preoperative screening tests if the previous tests are normal and no obvious indication for retesting is present.

  19. An efficient and extensible approach for compressing phylogenetic trees

    KAUST Repository

    Matthews, Suzanne J

    2011-01-01

    Background: Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm by handling trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. Results: On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92% (weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases. Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease with which set operations can be performed on TRZ files, at speeds quicker than those performed on Newick or 7zip compressed Newick files, and without loss of space savings. Conclusions: TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allows it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community. © 2011 Matthews and Williams; licensee BioMed Central Ltd.

  20. The response of nucleus pulposus cell senescence to static and dynamic compressions in a disc organ culture.

    Science.gov (United States)

    Shi, Jianmin; Pang, Lianglong; Jiao, Shouguo

    2018-04-27

    Mechanical stimuli clearly affect disc nucleus pulposus (NP) biology. Previous studies have indicated that static compression has detrimental effects on disc biology compared with dynamic compression. To study disc NP cell senescence under static and dynamic compression in a disc organ culture, porcine discs were cultured and subjected to compression (static compression: 0.4 MPa for 4 h once per day; dynamic compression: 0.4 MPa at a frequency of 1.0 Hz for 4 h once per day) for 7 days using a self-developed mechanically active bioreactor. Non-compressed discs were used as controls. Compared with dynamic compression, static compression significantly promoted disc NP cell senescence, reflected by increased senescence-associated β-galactosidase (SA-β-Gal) activity, senescence-associated heterochromatic foci (SAHF) formation and senescence marker expression, and by decreased telomerase (TE) activity and NP matrix biosynthesis. Static compression accelerates disc NP cell senescence compared with dynamic compression in a disc organ culture. The present study suggests that acceleration of NP cell senescence may be involved in the previously reported static compression-mediated degenerative changes of the disc NP. © 2018 The Author(s).

  1. Magnetic field compression using pinch-plasma

    International Nuclear Information System (INIS)

    Koyama, K.; Tanimoto, M.; Matsumoto, Y.; Veno, I.

    1987-01-01

    In a previous report, the method for ultra-high magnetic field compression by using the pinch plasma was discussed. It is summarized as follows. The experiment is performed with a Mather-type plasma focus device (τ1/4 = 2 μs, I = 880 kA at V = 20 kV). An initial DC magnetic field is fed by an electromagnet embedded in the inner electrode. The axial component of the magnetic field diverges from the maximum field of 1 kG on the surface of the inner electrode. The density profile deduced from a Mach-Zehnder interferogram with a 2-ns N2 laser shows a density dip lasting for 30 ns along the axis. Using the measured density of 8 × 10¹⁸ cm⁻³, the temperature of 1.5 keV and the pressure balance relation, the magnitude of the trapped magnetic field is estimated to be 1.0 MG. The magnitude of the compressed magnetic field is also measured by Faraday rotation in a single-mode quartz fiber and by a magnetic pickup coil. A protective polyethylene tube (3-mm o.d.) is used along the central axis through the inner electrode and the discharge chamber. The peak value of the compressed field ranges from 150 to 190 kG. No signal of the magnetic field appears up to the instant of the maximum pinch.

  2. Zero-knowledge universal lossless data compression

    Directory of Open Access Journals (Sweden)

    Fiorini Rodolfo A.

    2017-01-01

    Full Text Available Advanced instrumentation, dealing with nanoscale technology at the current edge of human scientific enquiry, like X-ray CT, generates an enormous quantity of data from a single experiment. The very best modern lossless data compression algorithms use standard approaches and are unable to match high-end requirements for mission-critical applications with full information conservation (a few pixels may vary in the compression/decompression processing). In previous papers published elsewhere, we have already shown that traditional Q Arithmetic can be regarded as a highly sophisticated open logic, a powerful and flexible bidirectional formal language of languages, according to the “Computational Information Conservation Theory” (CICT). This new awareness can offer a competitive approach to guide more convenient algorithm development and application for combinatorial lossless compression. To achieve true lossless compression/decompression and to overcome traditional constraints, the universal modular arithmetic approach, based on the CICT Solid Number (SN) concept, is presented. To check practical implementation performance and effectiveness, an example on computational imaging is benchmarked by key performance indexes and compared to standard well-known lossless compression techniques. Results are critically discussed.

  3. New Regenerative Cycle for Vapor Compression Refrigeration

    Energy Technology Data Exchange (ETDEWEB)

    Mark J. Bergander

    2005-08-29

    The main objective of this project is to confirm on a well-instrumented prototype the theoretically derived claims of higher efficiency and coefficient of performance for geothermal heat pumps based on a new regenerative thermodynamic cycle as compared to existing technology. In order to demonstrate the improved performance of the prototype, it will be compared to published parameters of commercially available geothermal heat pumps manufactured by US and foreign companies. Other objectives are to optimize the design parameters and to determine the economic viability of the new technology. Background (as stated in the proposal): The proposed technology closely relates to the EERE mission by improving energy efficiency, bringing clean, reliable and affordable heating and cooling to residential and commercial buildings and reducing greenhouse gas emissions. It can provide the same amount of heating and cooling with considerably less use of electrical energy and consequently has a potential of reducing our nation's dependence on foreign oil. The theoretical basis for the proposed thermodynamic cycle was previously developed and was originally called a dynamic equilibrium method. This theory considers the dynamic equations of state of the working fluid and proposes methods for modification of T-S trajectories of adiabatic transformation by changing dynamic properties of the gas, such as flow rate, speed and acceleration. The substance of this proposal is a thermodynamic cycle characterized by the regenerative use of the potential energy of two-phase flow expansion, which in traditional systems is lost in expansion valves. The essential new features of the process are: (1) The application of two-step throttling of the working fluid and two-step compression of its vapor phase. (2) Use of a compressor as the initial step compression and a jet device as a second step, where throttling and compression are combined. (3) Controlled ratio of a working fluid at the first and

  4. Automatic electromagnetic valve for previous vacuum

    International Nuclear Information System (INIS)

    Granados, C. E.; Martin, F.

    1959-01-01

    A valve which permits the maintenance of an installation's vacuum when the electric current fails is described. It also lets air into the fore-vacuum (backing) pump to prevent the oil from ascending into the vacuum tubes. (Author)

  5. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

    Image compression plays an important role in many applications like medical imaging, televideo conferencing, remote sensing, document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray scale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image size or image stream size is too large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless data compression. The wavelet method used in this project is a lossless compression method. In this method, the exact original mammography image data can be recovered. In this project, mammography images are digitized by using a Vider Sierra Plus digitizer. The digitized images are compressed by using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software is used to perform all of the calculations and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)

  6. Advances in compressible turbulent mixing

    Energy Technology Data Exchange (ETDEWEB)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E. [eds.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  7. Compressive sensing for urban radar

    CERN Document Server

    Amin, Moeness

    2014-01-01

    With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space, and to effectively address logistic difficulties in data acquisition. Traditionally, these challenges have hindered high resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates.Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracki

  8. Concomitant and previous osteoporotic vertebral fractures.

    Science.gov (United States)

    Lenski, Markus; Büser, Natalie; Scherer, Michael

    2017-04-01

    Background and purpose - Patients with osteoporosis who present with an acute onset of back pain often have multiple fractures on plain radiographs. Differentiation of an acute osteoporotic vertebral fracture (AOVF) from previous fractures is difficult. The aim of this study was to investigate the incidence of concomitant AOVFs and previous OVFs in patients with symptomatic AOVFs, and to identify risk factors for concomitant AOVFs. Patients and methods - This was a prospective epidemiological study based on the Registry of Pathological Osteoporotic Vertebral Fractures (REPAPORA) with 1,005 patients and 2,874 osteoporotic vertebral fractures, which has been running since February 1, 2006. Concomitant fractures are defined as at least 2 acute short-tau inversion recovery (STIR-) positive vertebral fractures that happen concomitantly. A previous fracture is a STIR-negative fracture at the time of initial diagnostics. Logistic regression was used to examine the influence of various variables on the incidence of concomitant fractures. Results - More than 99% of osteoporotic vertebral fractures occurred in the thoracic and lumbar spine. The incidence of concomitant fractures at the time of first patient contact was 26% and that of previous fractures was 60%. The odds ratio (OR) for concomitant fractures decreased with a higher number of previous fractures (OR =0.86; p = 0.03) and higher dual-energy X-ray absorptiometry T-score (OR =0.72; p = 0.003). Interpretation - Concomitant and previous osteoporotic vertebral fractures are common. Risk factors for concomitant fractures are a low T-score and a low number of previous vertebral fractures in cases of osteoporotic vertebral fracture. An MRI scan of the complete thoracic and lumbar spine with STIR sequence reduces the risk of under-diagnosis and under-treatment.

  9. Analytical modeling of wet compression of gas turbine systems

    International Nuclear Information System (INIS)

    Kim, Kyoung Hoon; Ko, Hyung-Jong; Perez-Blanco, Horacio

    2011-01-01

    Evaporative gas turbine cycles (EvGT) are of importance to the power generation industry because of the potential of enhanced cycle efficiencies with moderate incremental cost. Humidification of the working fluid to result in evaporative cooling during compression is a key operation in these cycles. Previous simulations of this operation were carried out via numerical integration. The present work is aimed at modeling the wet-compression process with approximate analytical solutions instead. A thermodynamic analysis of the simultaneous heat and mass transfer processes that occur during evaporation is presented. The transient behavior of important variables in wet compression such as droplet diameter, droplet mass, gas and droplet temperature, and evaporation rate is investigated. The effects of system parameters on variables such as droplet evaporation time, compressor outlet temperature and input work are also considered. Results from this work exhibit good agreement with those of previous numerical work.

  10. An Intelligent Grey Wolf Optimizer Algorithm for Distributed Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Haiqiang Liu

    2018-01-01

    Full Text Available Distributed Compressed Sensing (DCS) is an important research area of compressed sensing (CS). This paper aims at solving the DCS problem based on a mixed support model. In solving this problem, previously proposed greedy pursuit algorithms easily fall into suboptimal solutions. In this paper, an intelligent grey wolf optimizer (GWO) algorithm called DCS-GWO is proposed by combining GWO and the q-thresholding algorithm. In DCS-GWO, the grey wolves’ positions are initialized by using the q-thresholding algorithm and updated by using the idea of GWO. Inheriting the global search ability of GWO, DCS-GWO is efficient in finding the global optimum solution. The simulation results illustrate that DCS-GWO has better recovery performance than previously proposed greedy pursuit algorithms at the expense of computational complexity.
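
    The sketch below shows only the canonical grey-wolf position update that DCS-GWO builds on (each candidate solution is pulled toward the three best solutions found so far); the q-thresholding initialization and the DCS-specific recovery used in the paper are not reproduced here, and all names and sizes are illustrative.

        import numpy as np

        def gwo_step(wolves, fitness, a, rng):
            # One grey wolf optimizer iteration: rank wolves, then pull each one toward
            # the three current leaders (alpha, beta, delta) with random coefficients.
            order = np.argsort([fitness(w) for w in wolves])
            leaders = wolves[order[:3]]
            updated = np.empty_like(wolves)
            for i, w in enumerate(wolves):
                moves = []
                for leader in leaders:
                    r1, r2 = rng.random(w.shape), rng.random(w.shape)
                    A, C = 2.0 * a * r1 - a, 2.0 * r2
                    moves.append(leader - A * np.abs(C * leader - w))
                updated[i] = np.mean(moves, axis=0)
            return updated

        # Toy fitness: residual of a compressive-sensing measurement, ||y - Phi x||.
        rng = np.random.default_rng(3)
        Phi, x_true = rng.standard_normal((20, 50)), np.zeros(50)
        x_true[:3] = 1.0
        y = Phi @ x_true
        wolves = rng.standard_normal((30, 50))
        for it in range(200):
            a = 2.0 * (1.0 - it / 200.0)                 # 'a' decreases linearly from 2 to 0
            wolves = gwo_step(wolves, lambda w: np.linalg.norm(y - Phi @ w), a, rng)
        best = min(wolves, key=lambda w: np.linalg.norm(y - Phi @ w))
        print(np.linalg.norm(y - Phi @ best))            # residual of the best wolf found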

  11. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition without any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining a low computational complexity.
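
    A highly simplified, hypothetical sketch of the two ingredients mentioned above: random compressive acquisition of a sparse scene, followed by a uniform quantization of the measurements that needs no prior knowledge of the captured image. The scene length, measurement count and step size are arbitrary choices for illustration, not the paper's design.

        import numpy as np

        rng = np.random.default_rng(4)
        n, m = 256, 64                                      # scene length and number of CS measurements
        x = np.zeros(n)
        x[rng.choice(n, size=8, replace=False)] = rng.standard_normal(8)   # sparse scene

        Phi = rng.standard_normal((m, n)) / np.sqrt(m)      # CS acquisition (measurement) matrix
        y = Phi @ x                                         # measurements produced by the imager

        step = 0.05                                         # quantizer step fixed without seeing x
        y_quantized = step * np.round(y / step)             # uniform quantization of the measurements
        print(float(np.max(np.abs(y - y_quantized))) <= step / 2 + 1e-12)  # bounded quantization error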

  12. Uterine rupture without previous caesarean delivery

    DEFF Research Database (Denmark)

    Thisted, Dorthe L. A.; H. Mortensen, Laust; Krebs, Lone

    2015-01-01

    OBJECTIVE: To determine incidence and patient characteristics of women with uterine rupture during singleton births at term without a previous caesarean delivery. STUDY DESIGN: Population based cohort study. Women with term singleton birth, no record of previous caesarean delivery and planned...... vaginal delivery (n=611,803) were identified in the Danish Medical Birth Registry (1997-2008). Medical records from women recorded with uterine rupture during labour were reviewed to ascertain events of complete uterine rupture. Relative Risk (RR) and adjusted Relative Risk Ratio (aRR) of complete uterine...... rupture with 95% confidence intervals (95% CI) were ascertained according to characteristics of the women and of the delivery. RESULTS: We identified 20 cases with complete uterine rupture. The incidence of complete uterine rupture among women without previous caesarean delivery was about 3...

  13. Compressed gas fuel storage system

    Science.gov (United States)

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  14. Compressed sensing for distributed systems

    CERN Document Server

    Coluccia, Giulio; Magli, Enrico

    2015-01-01

    This book presents a survey of the state-of-the art in the exciting and timely topic of compressed sensing for distributed systems. It has to be noted that, while compressed sensing has been studied for some time now, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...

  15. Composition-Structure-Property Relations of Compressed Borosilicate Glasses

    Science.gov (United States)

    Svenson, Mouritz N.; Bechgaard, Tobias K.; Fuglsang, Søren D.; Pedersen, Rune H.; Tjell, Anders Ø.; Østergaard, Martin B.; Youngman, Randall E.; Mauro, John C.; Rzoska, Sylwester J.; Bockowski, Michal; Smedskjaer, Morten M.

    2014-08-01

    Hot isostatic compression is an interesting method for modifying the structure and properties of bulk inorganic glasses. However, the structural and topological origins of the pressure-induced changes in macroscopic properties are not yet well understood. In this study, we report on the pressure and composition dependences of density and micromechanical properties (hardness, crack resistance, and brittleness) of five soda-lime borosilicate glasses with constant modifier content, covering the extremes from Na-Ca borate to Na-Ca silicate end members. Compression experiments are performed at pressures ≤1.0 GPa at the glass transition temperature in order to allow processing of large samples with relevance for industrial applications. In line with previous reports, we find an increasing fraction of tetrahedral boron, density, and hardness but a decreasing crack resistance and brittleness upon isostatic compression. Interestingly, a strong linear correlation between plastic (irreversible) compressibility and initial trigonal boron content is demonstrated, as the trigonal boron units are the ones most disposed for structural and topological rearrangements upon network compaction. A linear correlation is also found between plastic compressibility and the relative change in hardness with pressure, which could indicate that the overall network densification is responsible for the increase in hardness. Finally, we find that the micromechanical properties exhibit significantly different composition dependences before and after pressurization. The findings have important implications for tailoring microscopic and macroscopic structures of glassy materials and thus their properties through the hot isostatic compression method.

  16. Geometric Results for Compressible Magnetohydrodynamics

    OpenAIRE

    Arter, Wayne

    2013-01-01

    Recently, compressible magnetohydrodynamics (MHD) has been elegantly formulated in terms of Lie derivatives. This paper exploits the geometrical properties of the Lie bracket to give new insights into the properties of compressible MHD behaviour, both with and without feedback of the magnetic field on the flow. These results are expected to be useful for the solution of MHD equations in both tokamak fusion experiments and space plasmas.

  17. Compressive spectroscopy by spectral modulation

    Science.gov (United States)

    Oiknine, Yaniv; August, Isaac; Stern, Adrian

    2017-05-01

    We review two compressive spectroscopy techniques based on modulation in the spectral domain that we have recently proposed. Both techniques achieve a compression ratio of approximately 10:1, however each with a different sensing mechanism. The first technique uses a liquid crystal cell as a tunable filter to modulate the spectral signal, and the second technique uses a Fabry-Perot etalon as a resonator. We overview the specific properties of each of the techniques.

  18. INTRODUCTION Previous reports have documented a high ...

    African Journals Online (AJOL)

    pregnancy if they were married, educated, had dental insurance, previously used dental services when not pregnant, or had knowledge about the possible connection between oral health and pregnancy outcome [8]. The purpose of this study was to explore the factors determining good oral hygiene among pregnant women ...

  19. Empowerment perceptions of educational managers from previously ...

    African Journals Online (AJOL)

    The perceptions of educational managers from previously disadvantaged primary and high schools in the Nelson Mandela Metropole regarding the issue of empowerment are outlined, and the perceptions of educational managers in terms of various aspects of empowerment at different levels are reflected. A literature study ...

  20. Management of choledocholithiasis after previous gastrectomy.

    Science.gov (United States)

    Anwer, S; Egan, R; Cross, N; Guru Naidu, S; Somasekar, K

    2017-09-01

    Common bile duct stones in patients with a previous gastrectomy can be a technical challenge because of the altered anatomy. This paper presents the successful management of two such patients using non-traditional techniques as conventional endoscopic retrograde cholangiopancreatography was not possible.

  1. Laboratory Grouping Based on Previous Courses.

    Science.gov (United States)

    Doemling, Donald B.; Bowman, Douglas C.

    1981-01-01

    In a five-year study, second-year human physiology students were grouped for laboratory according to previous physiology and laboratory experience. No significant differences in course or board examination performance were found, though correlations were found between predental grade-point averages and grouping. (MSE)

  2. Compression of fingerprint data using the wavelet vector quantization image compression algorithm. 1992 progress report

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1992-04-11

    This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.

  3. 29 CFR 1917.154 - Compressed air.

    Science.gov (United States)

    2010-07-01

    29 CFR 1917.154, Labor Regulations, Marine Terminals, Related Terminal Operations and Equipment - Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  4. Previously unknown organomagnesium compounds in astrochemical context

    OpenAIRE

    Ruf, Alexander

    2018-01-01

    We describe the detection of dihydroxymagnesium carboxylates (CHOMg) in astrochemical context. CHOMg was detected in meteorites via ultrahigh-resolving chemical analytics and represents a novel, previously unreported chemical class. Thus, chemical stability was probed via quantum chemical computations, in combination with experimental fragmentation techniques. Results propose the putative formation of green-chemical OH-Grignard-type molecules and triggered fundamental questions within chemica...

  5. [Placental complications after a previous cesarean section].

    Science.gov (United States)

    Milosević, Jelena; Lilić, Vekoslav; Tasić, Marija; Radović-Janosević, Dragana; Stefanović, Milan; Antić, Vladimir

    2009-01-01

    The incidence of cesarean section has been rising in the past 50 years. With the increased number of cesarean sections, the number of pregnancies with a previous cesarean section rises as well. The aim of this study was to establish the influence of the previous cesarean section on the development of placental complications: placenta previa, placental abruption and placenta accreta, as well as to determine the influence of the number of previous cesarean sections on the complication development. The research was conducted at the Clinic of Gynecology and Obstetrics in Nis covering a 10-year period (from 1995 to 2005) with 32358 deliveries, 1280 deliveries after a previous cesarean section, 131 cases of placenta previa and 118 cases of placental abruption. The experimental group comprised the cases of placenta previa or placental abruption with a prior cesarean section in the obstetric history, as opposed to the control group with the same conditions but without a cesarean section in the medical history. The incidence of placenta previa in the control group was 0.33%, as opposed to an incidence of 1.86% after one cesarean section, rising further with the number of previous cesarean sections and reaching 14.28% after three cesarean sections in the obstetric history. Placental abruption was recorded as a placental complication in 0.33% of pregnancies in the control group, while its incidence was 1.02% after one cesarean section and increased with further cesarean sections. The difference in the incidence of intrapartal hysterectomy between the group with a prior cesarean section (0.86%) and without it (0.006%) is highly statistically significant. A previous cesarean section is an important risk factor for the development of placental complications.

  6. Image Quality Assessment for Different Wavelet Compression Techniques in a Visual Communication Framework

    Directory of Open Access Journals (Sweden)

    Nuha A. S. Alwan

    2013-01-01

    Full Text Available Images with subband coding and threshold wavelet compression are transmitted over a Rayleigh communication channel with additive white Gaussian noise (AWGN, after quantization and 16-QAM modulation. A comparison is made between these two types of compression using both mean square error (MSE and structural similarity (SSIM image quality assessment (IQA criteria applied to the reconstructed image at the receiver. The two methods yielded comparable SSIM but different MSE measures. In this work, we justify our results which support previous findings in the literature that the MSE between two images is not indicative of structural similarity or the visibility of errors. It is found that it is difficult to reduce the pointwise errors in subband-compressed images (higher MSE. However, the compressed images provide comparable SSIM or perceived quality for both types of compression provided that the retained energy after compression is the same.
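
    The contrast drawn above between MSE and SSIM can be reproduced with a small hypothetical experiment: a pointwise-noise distortion and a global intensity change, tuned so that their MSE values are roughly comparable on a standard test image while SSIM rates them very differently. The sketch assumes NumPy and scikit-image are available; it is not the simulation framework used in the paper.

        import numpy as np
        from skimage import data
        from skimage.metrics import mean_squared_error, structural_similarity

        image = data.camera().astype(float)
        rng = np.random.default_rng(5)

        noisy = image + rng.normal(0.0, 6.3, image.shape)     # pointwise (structure-destroying) errors
        rescaled = 0.9 * image + 13.0                         # global intensity change, structure kept

        for name, candidate in (("noisy", noisy), ("rescaled", rescaled)):
            mse = mean_squared_error(image, candidate)
            ssim = structural_similarity(image, candidate, data_range=image.max() - image.min())
            print(name, round(mse, 1), round(ssim, 3))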

  7. 76 FR 4338 - Research and Development Strategies for Compressed & Cryo-Compressed Hydrogen Storage Workshops

    Science.gov (United States)

    2011-01-25

    DEPARTMENT OF ENERGY - Research and Development Strategies for Compressed & Cryo-Compressed Hydrogen Storage Workshops. The Fuel Cell Technologies Program will be hosting two days of workshops on compressed and cryo-compressed hydrogen storage, covering... perspectives, and overviews of carbon fiber development and recent cost analyses. The cryo-compressed hydrogen...

  8. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Science.gov (United States)

    2010-07-01

    30 CFR 75.1730, Mineral Resources - Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  9. A higher chest compression rate may be necessary for metronome-guided cardiopulmonary resuscitation.

    Science.gov (United States)

    Chung, Tae Nyoung; Kim, Sun Wook; You, Je Sung; Cho, Young Soon; Chung, Sung Phil; Park, Incheol

    2012-01-01

    Metronome guidance is a simple and economical feedback system for guiding cardiopulmonary resuscitation (CPR). However, a recent study showed that metronome guidance reduced the depth of chest compression. The results of previous studies suggest that a higher chest compression rate is associated with a better CPR outcome than a lower chest compression rate, irrespective of metronome use. Based on this finding, we hypothesized that it was the lower chest compression rate, rather than metronome use itself, that promoted the reduction in chest compression depth in the recent study. One minute of chest compression-only CPR was performed following a metronome sound played at 1 of 4 different rates: 80, 100, 120, and 140 ticks/min. Average compression depths (ACDs) and duty cycles were compared using repeated measures analysis of variance, and the values in the absence and presence of metronome guidance were compared. Both the ACD and the duty cycle increased as the metronome rate increased (P = .017). The ACDs for CPR performed at metronome rates of 80 and 100 ticks/min were significantly lower than those for the procedures without metronome guidance. The ACD and duty cycle for chest compression increase as the metronome rate increases during metronome-guided CPR. A higher rate of chest compression is necessary for metronome-guided CPR to prevent suboptimal quality of chest compression. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Delivery of Compression Therapy for Venous Leg Ulcers

    DEFF Research Database (Denmark)

    Zarchi, Kian; Jemec, Gregor B E

    2014-01-01

    IMPORTANCE: Despite the documented effect of compression therapy in clinical studies and its widespread prescription, treatment of venous leg ulcers is often prolonged and recurrence rates high. Data on provided compression therapy are limited. OBJECTIVE: To assess whether home care nurses achieve......; and a multilayer, 2-component bandage, as well as the association between achievement of optimal pressure and years in the profession, attendance at wound care educational programs, previous work experience, and confidence in bandaging ability. RESULTS: A substantial variation in the exerted pressure was found...

  11. Building indifferentiable compression functions from the PGV compression functions

    DEFF Research Database (Denmark)

    Gauravaram, P.; Bagheri, Nasour; Knudsen, Lars Ramkilde

    2016-01-01

    Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block cipher based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black......, Rogaway and Shrimpton formally proved this result in the ideal cipher model. However, in the indifferentiability security framework introduced by Maurer, Renner and Holenstein, all these 12 schemes are easily differentiable from a fixed input-length random oracle (FIL-RO) even when their underlying block...

  12. Plasma heating by adiabatic compression

    International Nuclear Information System (INIS)

    Ellis, R.A. Jr.

    1972-01-01

    These two lectures will cover the following three topics: (i) The application of adiabatic compression to toroidal devices is reviewed. The special case of adiabatic compression in tokamaks is considered in more detail, including a discussion of the equilibrium, scaling laws, and heating effects. (ii) The ATC (Adiabatic Toroidal Compressor) device which was completed in May 1972, is described in detail. Compression of a tokamak plasma across a static toroidal field is studied in this device. The device is designed to produce a pre-compression plasma with a major radius of 17 cm, toroidal field of 20 kG, and current of 90 kA. The compression leads to a plasma with major radius of 38 cm and minor radius of 10 cm. Scaling laws imply a density increase of a factor 6, temperature increase of a factor 3, and current increase of a factor 2.4. An additional feature of ATC is that it is a large tokamak which operates without a copper shell. (iii) Data which show that the expected MHD behavior is largely observed is presented and discussed. (U.S.)

  13. Compressibility effects in planar wakes

    Science.gov (United States)

    Hickey, Jean-Pierre; Hussain, Fazle; Wu, Xiaohua

    2010-11-01

    Far-field, temporally evolving planar wakes are studied by DNS to evaluate the effect of compressibility on the flow. A high-order predictor-corrector code was developed and fully validated against canonical compressible test cases. In this study, wake simulations are performed at constant Reynolds number for three different Mach numbers: Ma = 0.2, 0.8 and 1.2. The domain is doubly periodic with a non-reflecting boundary in the cross-flow and is initialized by a randomly perturbed laminar profile. The compressibility of the flow modifies the observed structures, which show greater three-dimensionality. A self-similar period develops in which the square of the wake half-width increases linearly with time and the Reynolds stress statistics at various times collapse using proper scaling parameters. The growth rate increases with increasing compressibility of the flow: an observation which is substantiated by experimental results but is in stark contrast with the high-speed mixing layer. As the growth rate is related to the mixing ability of the flow, the impact of compressibility is of fundamental importance. Therefore, we seek an explanation of the modified growth rate by investigating the turbulent kinetic energy equation. From the analysis, it can be conjectured that the pressure-strain term might play a role in the modified growth rate.

  14. Radiologic image compression -- A review

    International Nuclear Information System (INIS)

    Wong, S.; Huang, H.K.; Zaremba, L.; Gooden, D.

    1995-01-01

    The objective of radiologic image compression is to reduce the data volume of and to achieve a low bit rate in the digital representation of radiologic images without perceived loss of image quality. However, the demand for transmission bandwidth and storage space in the digital radiology environment, especially picture archiving and communication systems (PACS) and teleradiology, and the proliferating use of various imaging modalities, such as magnetic resonance imaging, computed tomography, ultrasonography, nuclear medicine, computed radiography, and digital subtraction angiography, continue to outstrip the capabilities of existing technologies. The availability of lossy coding techniques for clinical diagnoses further raises many complex legal and regulatory issues. This paper reviews the recent progress of lossless and lossy radiologic image compression and presents the legal challenges of using lossy compression of medical records. To do so, the authors first describe the fundamental concepts of radiologic imaging and digitization. Then, the authors examine current compression technology in the field of medical imaging and discuss important regulatory policies and legal questions facing the use of compression in this field. The authors conclude with a summary of future challenges and research directions. 170 refs

  15. Correlations between quality indexes of chest compression.

    Science.gov (United States)

    Zhang, Feng-Ling; Yan, Li; Huang, Su-Fang; Bai, Xiang-Jun

    2013-01-01

    Cardiopulmonary resuscitation (CPR) is a kind of emergency treatment for cardiopulmonary arrest, and chest compression is the most important and necessary part of CPR. The American Heart Association published the new Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care in 2010 and demanded better performance of chest compression practice, especially in compression depth and rate. The current study was designed to explore the relationships among the quality indexes of chest compression and to identify the key points in chest compression training and practice. A total of 219 healthcare workers received chest compression training using the Laerdal ACLS advanced life support resuscitation model. The quality indexes of chest compression, including compression hands placement, compression rate, compression depth, and chest wall recoil, as well as self-reported fatigue time, were monitored by the Laerdal Computer Skills and Reporting System. The quality of chest compression was related to the gender of the compressor. The indexes in males, including self-reported fatigue time, the accuracy of compression depth, the compression rate, and the accuracy of compression rate, were higher than those in females. However, the accuracy of chest recoil was higher in females than in males. The quality indexes of chest compression were correlated with each other. The self-reported fatigue time was related to all the indexes except the compression rate. It is necessary to offer CPR training courses regularly. In clinical practice, it might be better to change the practitioner before fatigue, especially for females or weak practitioners. In training projects, more attention should be paid to the control of compression rate, in order to delay fatigue, guarantee enough compression depth and improve the quality of chest compression.

  16. Premixed autoignition in compressible turbulence

    Science.gov (United States)

    Konduri, Aditya; Kolla, Hemanth; Krisman, Alexander; Chen, Jacqueline

    2016-11-01

    Prediction of chemical ignition delay in an autoignition process is critical in combustion systems like compression ignition engines and gas turbines. Often, ignition delay times measured in simple homogeneous experiments or homogeneous calculations are not representative of actual autoignition processes in complex turbulent flows. This is due to the presence of turbulent mixing which results in fluctuations in thermodynamic properties as well as chemical composition. In the present study the effect of fluctuations of thermodynamic variables on the ignition delay is quantified with direct numerical simulations of compressible isotropic turbulence. A premixed syngas-air mixture is used to remove the effects of inhomogeneity in the chemical composition. Preliminary results show a significant spatial variation in the ignition delay time. We analyze the topology of autoignition kernels and identify the influence of extreme events resulting from compressibility and intermittency. The dependence of ignition delay time on Reynolds and turbulent Mach numbers is also quantified. Supported by Basic Energy Sciences, Dept of Energy, United States.

  17. Compressive behavior of fine sand.

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Bradley E. (Air Force Research Laboratory, Eglin, FL); Kabir, Md. E. (Purdue University, West Lafayette, IN); Song, Bo; Chen, Wayne (Purdue University, West Lafayette, IN)

    2010-04-01

    The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and constant strain-rates. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but significantly dependent on the moisture content, initial density and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic pressure loading and smaller still after dynamic axial loading.

  18. Compressive sensing and hyperspectral imaging

    Science.gov (United States)

    Barducci, A.; Guzzi, D.; Lastri, C.; Marcoionni, P.; Nardino, V.; Pippi, I.

    2017-11-01

    Compressive sensing (sampling) is a novel technology and science domain that exploits the option to sample radiometric and spectroscopic signals at a lower sampling rate than the one dictated by the traditional theory of ideal sampling. In the paper some general concepts and characteristics regarding the use of compressive sampling in instruments devoted to Earth observation are discussed. The remotely sensed data are assumed to consist of sampled images collected by a passive device in the optical spectral range from the visible up to the thermal infrared, with possible spectral discrimination ability, e.g. hyperspectral imaging. According to recent investigations, compressive sensing necessarily employs a signal multiplexing architecture, which, contrary to traditional expectations, introduces a significant SNR disadvantage.

  19. Rectal perforation by compressed air.

    Science.gov (United States)

    Park, Young Jin

    2017-07-01

    As the use of compressed air in industrial work has increased, so has the risk of associated pneumatic injury from its improper use. However, damage to the large intestine caused by compressed air is uncommon. Herein, a case of pneumatic rupture of the rectum is described. The patient was admitted to the Emergency Room complaining of abdominal pain and distension. His colleague had triggered a compressed air nozzle over his buttock. On arrival, vital signs were stable but physical examination revealed peritoneal irritation and marked distension of the abdomen. Computed tomography showed a large volume of air in the peritoneal cavity and subcutaneous emphysema at the perineum. A rectal perforation was found at laparotomy and the Hartmann procedure was performed.

  20. Compressibility of rotating black holes

    International Nuclear Information System (INIS)

    Dolan, Brian P.

    2011-01-01

    Interpreting the cosmological constant as a pressure, whose thermodynamically conjugate variable is a volume, modifies the first law of black hole thermodynamics. Properties of the resulting thermodynamic volume are investigated: the compressibility and the speed of sound of the black hole are derived in the case of nonpositive cosmological constant. The adiabatic compressibility vanishes for a nonrotating black hole and is maximal in the extremal case, comparable with, but still less than, that of a cold neutron star. A speed of sound v_s is associated with the adiabatic compressibility, which is equal to c for a nonrotating black hole and decreases as the angular momentum is increased. An extremal black hole has v_s^2 = 0.9 c^2 when the cosmological constant vanishes, and more generally v_s is bounded below by c/√2.

  1. Modeling DPOAE input/output function compression: comparisons with hearing thresholds.

    Science.gov (United States)

    Bhagat, Shaum P

    2014-09-01

    Basilar membrane input/output (I/O) functions in mammalian animal models are characterized by linear and compressed segments when measured near the location corresponding to the characteristic frequency. A method of studying basilar membrane compression indirectly in humans involves measuring distortion-product otoacoustic emission (DPOAE) I/O functions. Previous research has linked compression estimates from behavioral growth-of-masking functions to hearing thresholds. The aim of this study was to compare compression estimates from DPOAE I/O functions and hearing thresholds at 1 and 2 kHz. A prospective correlational research design was performed. The relationship between DPOAE I/O function compression estimates and hearing thresholds was evaluated with Pearson product-moment correlations. Normal-hearing adults (n = 16) aged 22-42 yr were recruited. DPOAE I/O functions (L₂ = 45-70 dB SPL) and two-interval forced-choice hearing thresholds were measured in normal-hearing adults. A three-segment linear regression model applied to DPOAE I/O functions supplied estimates of compression thresholds, defined as breakpoints between linear and compressed segments and the slopes of the compressed segments. Pearson product-moment correlations between DPOAE compression estimates and hearing thresholds were evaluated. A high correlation between DPOAE compression thresholds and hearing thresholds was observed at 2 kHz, but not at 1 kHz. Compression slopes also correlated highly with hearing thresholds only at 2 kHz. The derivation of cochlear compression estimates from DPOAE I/O functions provides a means to characterize basilar membrane mechanics in humans and elucidates the role of compression in tone detection in the 1-2 kHz frequency range. American Academy of Audiology.

  2. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
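
    As a rough illustration of the calculation described above, the sketch below computes the annual economic misery index, its trailing moving average over roughly the previous decade, and the Pearson correlation with a literary misery series; the series names, the NumPy implementation, and the window handling are illustrative assumptions, not the authors' code.

        import numpy as np

        def trailing_mean(x, window):
            # Moving average over the previous `window` years (including the current year).
            return np.convolve(x, np.ones(window) / window, mode="valid")

        def misery_correlation(inflation, unemployment, literary_misery, window=11):
            economic_misery = inflation + unemployment          # sum of inflation and unemployment rates
            smoothed = trailing_mean(economic_misery, window)   # decade-scale moving average
            aligned_literary = literary_misery[window - 1:]     # align the two series after smoothing
            return np.corrcoef(smoothed, aligned_literary)[0, 1]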

  3. Dynamic compressibility of air in porous structures at audible frequencies

    DEFF Research Database (Denmark)

    Lafarge, Denis; Lemarinier, Pavel; Allard, Jean F.

    1997-01-01

    Measurements of dynamic compressibility of air-filled porous sound-absorbing materials are compared with predictions involving two parameters, the static thermal permeability k'_0 and the thermal characteristic dimension GAMMA'. Emphasis on the notion of dynamic and static thermal permeability...... of the viscous forces. Using both parameters, a simple model is constructed for the dynamic thermal permeability k', which is completely analogous to the Johnson et al. [J. Fluid Mech. vol. 176, 379 (1987)] model of dynamic viscous permeability k. The resultant modeling of dynamic compressibility provides...... predictions which are closer to the experimental results than the previously used simpler model where the compressibility is the same as in identical circular cross-sectional shaped pores, or distributions of slits, related to a given GAMMA'.

  4. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest

    NARCIS (Netherlands)

    Monsieurs, Koenraad G.; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F.; Calle, Paul A.

    2012-01-01

    Background and goal of study: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with decreased compression depth.

  5. Induced vaginal birth after previous caesarean section

    Directory of Open Access Journals (Sweden)

    Akylbek Tussupkaliyev

    2016-11-01

    Full Text Available Introduction The rate of operative birth by Caesarean section is constantly rising. In Kazakhstan, it reaches 27 per cent. Research data confirm that the percentage of successful vaginal births after previous Caesarean section is 50–70 per cent. How safe the induction of vaginal birth after Caesarean (VBAC) remains unclear. Methodology The studied techniques of labour induction were amniotomy of the foetal bladder with the vulsellum ramus, intravaginal administration of E1 prostaglandin (Misoprostol), and intravenous infusion of Oxytocin-Richter. The assessment of readiness of parturient canals was conducted by Bishop’s score; the labour course was assessed by a partogram. The effectiveness of labour induction techniques was assessed by the number of administered doses, the time of onset of regular labour, the course of labour and the postpartum period and the presence of complications, and the course of the early neonatal period, which implied the assessment of the child’s condition, described in the newborn development record. The foetus was assessed by medical ultrasound and antenatal and intranatal cardiotocography (CTG). Obtained results were analysed with SAS statistical processing software. Results The overall percentage of successful births with intravaginal administration of Misoprostol was 93 per cent (83 of cases). This percentage was higher than in the amniotomy group (relative risk (RR) 11.7) and was similar to the oxytocin group (RR 0.83). Amniotomy was effective in 54 per cent (39 of cases), when it induced regular labour. Intravenous oxytocin infusion was effective in 94 per cent (89 of cases). This percentage was higher than that with amniotomy (RR 12.5). Conclusions The success of vaginal delivery after previous Caesarean section can be achieved in almost 70 per cent of cases. At that, labour induction does not decrease this indicator and remains within population boundaries.

  6. Ammonium azide under hydrostatic compression

    Science.gov (United States)

    Landerville, A. C.; Steele, B. A.; Oleynik, I. I.

    2014-05-01

    The properties of ammonium azide NH4N3 upon compression were investigated using first-principles density functional theory. The equation of state was calculated and the mechanism of a phase transition experimentally observed at 3.3 GPa is elucidated. Novel polymorphs of NH4N3 were found using a simple structure search algorithm employing random atomic displacements upon static compression. The structures of three new polymorphs, labelled as B, C, and D, are similar to those of other metal azides.

  7. Lossless Compression of Digital Images

    DEFF Research Database (Denmark)

    Martins, Bo

    are constructed by this principle. A multi-pass free tree coding scheme produces excellent compression results for all test images. A multi-pass fast free template coding scheme produces much better results than JBIG for difficult images, such as halftonings. Rissanen's algorithm `Context' is presented in a new...... version that is substantially faster than its precursors and brings it close to the multi-pass coders in compression performance. Handprinted characters are of unequal complexity; recent work by Singer and Tishby demonstrates that utilizing the physiological process of writing one can synthesize cursive...

  8. Methods for Distributed Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Dennis Sundman

    2013-12-01

    Full Text Available Compressed sensing is a thriving research field covering a class of problems where a large sparse signal is reconstructed from a few random measurements. In the presence of several sensor nodes measuring correlated sparse signals, improvements in recovery quality or a reduction in the number of local measurements can be expected if the nodes cooperate. In this paper, we provide an overview of the current literature regarding distributed compressed sensing; in particular, we discuss aspects of network topologies, signal models and recovery algorithms.

  9. The impact of chest compression rates on quality of chest compressions : a manikin study

    OpenAIRE

    Field, Richard A.; Soar, Jasmeet; Davies, Robin P.; Akhtar, Naheed; Perkins, Gavin D.

    2012-01-01

    Purpose: Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Methods: Twenty healthcare professionals performed two minutes of co...

  10. Moving image compression and generalization capability of constructive neural networks

    Science.gov (United States)

    Ma, Liying; Khorasani, Khashayar

    2001-03-01

    To date numerous techniques have been proposed to compress digital images to ease their storage and transmission over communication channels. Recently, a number of image compression algorithms using Neural Networks (NNs) have been developed. Particularly, several constructive feed-forward neural networks (FNNs) have been proposed by researchers for image compression, and promising results have been reported. At the previous SPIE AeroSense conference 2000, we proposed to use a constructive One-Hidden-Layer Feedforward Neural Network (OHL-FNN) for compressing digital images. In this paper, we first investigate the generalization capability of the proposed OHL-FNN in the presence of additive noise for network training and/or generalization. Extensive experimental results for different scenarios are presented. It is revealed that the constructive OHL-FNN is not as robust to additive noise in the input image as expected. Next, the constructive OHL-FNN is applied to moving images (video sequences). The first, or other specified frame in a moving image sequence is used to train the network. The remaining moving images that follow are then generalized/compressed by this trained network. Three types of correlation-like criteria measuring the similarity of any two images are introduced. The relationship between the generalization capability of the constructed net and the similarity of images is investigated in some detail. It is shown that the constructive OHL-FNN is promising even for changing images such as those extracted from a football game.
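
    The abstract mentions three correlation-like criteria for measuring the similarity of two images without defining them; a minimal sketch of one standard such criterion, the zero-mean normalized cross-correlation, is shown below (the function name and the NumPy formulation are assumptions, not the criteria actually used in the paper).

        import numpy as np

        def normalized_cross_correlation(a, b):
            # Zero-mean normalized cross-correlation between two equally sized images.
            # Returns a value in [-1, 1]; values near 1 indicate the images are very similar.
            a = a.astype(float) - a.mean()
            b = b.astype(float) - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom > 0 else 0.0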

  11. Compressing Data Cube in Parallel OLAP Systems

    Directory of Open Access Journals (Sweden)

    Frank Dehne

    2007-03-01

    Full Text Available This paper proposes an efficient algorithm to compress the cubes in the process of the parallel data cube generation. This low overhead compression mechanism provides block-by-block and record-by-record compression by using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for the Hilbert Space Filling Curve, a mechanism widely used in multi-dimensional indexing.
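
    As a simplified illustration of tuple difference coding, the sketch below delta-encodes a block of sorted integer tuples so that only small component-wise differences need to be stored; the actual record layout and bit-level encoding used in the paper are not specified there, so this is only an assumed, minimal form.

        def delta_encode_block(tuples):
            # tuples: equal-length integer tuples, assumed sorted (e.g. in Hilbert-curve order).
            # The first tuple is kept verbatim; each later tuple is stored as the difference
            # from its predecessor, which is typically a much smaller number.
            encoded = [tuple(tuples[0])]
            for prev, cur in zip(tuples, tuples[1:]):
                encoded.append(tuple(c - p for p, c in zip(prev, cur)))
            return encoded

        def delta_decode_block(encoded):
            decoded = [tuple(encoded[0])]
            for diff in encoded[1:]:
                decoded.append(tuple(p + d for p, d in zip(decoded[-1], diff)))
            return decoded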

  12. Compression of Probabilistic XML documents

    NARCIS (Netherlands)

    Veldman, Irma

    2009-01-01

    Probabilistic XML (PXML) files resulting from data integration can become extremely large, which is undesired. For XML there are several techniques available to compress the document and since probabilistic XML is in fact (a special form of) XML, it might benefit from these methods even more. In

  13. Temporal compressive imaging for video

    Science.gov (United States)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, such as gunpowder blasting analysis and observing high-speed biological phenomena. However, measuring high-speed video is a challenge to camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames will be obtained from a single compressive frame using a reconstruction algorithm. Equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/threshold (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
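
    A minimal NumPy sketch of the forward model described above: T = 8 high-speed frames are modulated by per-frame coded masks and integrated into a single compressive snapshot. The random binary masks and array shapes are illustrative assumptions, and the TwIST/GMM reconstruction step is not reproduced here.

        import numpy as np

        def temporal_compressive_measurement(frames, masks):
            # frames: (T, H, W) block of high-speed frames; masks: (T, H, W) binary coded apertures.
            # The detector integrates the masked frames into one compressive frame y.
            return np.sum(frames * masks, axis=0)

        rng = np.random.default_rng(0)
        T, H, W = 8, 256, 256
        frames = rng.random((T, H, W))            # stand-in for the true high-speed frames
        masks = rng.integers(0, 2, (T, H, W))     # one random binary mask per temporal frame
        snapshot = temporal_compressive_measurement(frames, masks)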

  14. Force balancing in mammographic compression

    NARCIS (Netherlands)

    Branderhorst, W.; de Groot, J. E.; Neeter, L. M. F. H.; van Lier, M. G. J. T. B.; Neeleman, C.; den Heeten, G. J.; Grimbergen, C. A.

    2016-01-01

    In mammography, the height of the image receptor is adjusted to the patient before compressing the breast. An inadequate height setting can result in an imbalance between the forces applied by the image receptor and the paddle, causing the clamped breast to be pushed up or down relative to the body

  15. Compressing spatio-temporal trajectories

    DEFF Research Database (Denmark)

    Gudmundsson, Joachim; Katajainen, Jyrki; Merrick, Damian

    2009-01-01

    A trajectory is a sequence of locations, each associated with a timestamp, describing the movement of a point. Trajectory data is becoming increasingly available and the size of recorded trajectories is getting larger. In this paper we study the problem of compressing planar trajectories such tha...

  16. Nonlinear compression of optical solitons

    Indian Academy of Sciences (India)

    pulse area can be conserved by the inclusion of gain (or loss) and phase modulation effects. Keywords. Optical solitons; bright and dark solitons; nonlinear compression; phase modulation; fibre amplification; loss. PACS Nos 42.81. Dp; 02.30 Jr; 04.30 Nk. 1. Introduction. The term soliton refers to special kinds of waves that ...

  17. Grid-free compressive beamforming

    DEFF Research Database (Denmark)

    Xenaki, Angeliki; Gerstoft, Peter

    2015-01-01

    The direction-of-arrival (DOA) estimation problem involves the localization of a few sources from a limited number of observations on an array of sensors, thus it can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high...

  18. Entropy, Coding and Data Compression

    Indian Academy of Sciences (India)

    Entropy, Coding and Data Compression. S Natarajan. General Article, Volume 6, Issue 9, September 2001, pp. 35-45. Permanent link: https://www.ias.ac.in/article/fulltext/reso/006/09/0035-0045

  19. Incremental data compression -extended abstract-

    NARCIS (Netherlands)

    Jeuring, J.T.

    1992-01-01

    Data may be compressed using textual substitution. Textual substitution identifies repeated substrings and replaces some or all substrings by pointers to another copy. We construct an incremental algorithm for a specific textual substitution method: coding a text with respect to a dictionary. With

  20. Relationship between the Compressive and Tensile Strength of Recycled Concrete

    International Nuclear Information System (INIS)

    El Dalati, R.; Haddad, S.; Matar, P.; Chehade, F.H

    2011-01-01

    Concrete recycling consists of crushing the concrete provided by demolishing the old constructions, and of using the resulting small pieces as aggregates in the new concrete compositions. The resulting aggregates are called recycled aggregates and the new mix of concrete containing a percentage of recycled aggregates is called recycled concrete. Our previous research has indicated the optimal percentages of recycled aggregates to be used for different cases of recycled concrete related to the original aggregates nature. All results have shown that the concrete compressive strength is significantly reduced when using recycled aggregates. In order to obtain realistic values of compressive strength, some tests have been carried out by adding water-reducer plasticizer and a specified additional quantity of cement. The results have shown that for a limited range of plasticizer percentage, and a fixed value of additional cement, the compressive strength has reached a reasonable value. This paper treats the effect of using recycled aggregates on the tensile strength of concrete, where concrete results from the special composition defined by our previous work. The aim is to determine the relationship between the compressive and tensile strength of recycled concrete. (author)

  1. Computable performance guarantees for compressed sensing matrices.

    Science.gov (United States)

    Cho, Myung; Vijay Mishra, Kumar; Xu, Weiyu

    2018-01-01

    The null space condition for ℓ1 minimization in compressed sensing is a necessary and sufficient condition on the sensing matrices under which a sparse signal can be uniquely recovered from the observation data via ℓ1 minimization. However, verifying the null space condition is known to be computationally challenging. Most of the existing methods can provide only upper and lower bounds on the proportion parameter that characterizes the null space condition. In this paper, we propose new polynomial-time algorithms to establish upper bounds of the proportion parameter. We leverage these techniques to find upper bounds and further develop a new procedure, a tree search algorithm, that is able to precisely and quickly verify the null space condition. Numerical experiments show that the execution speed and accuracy of the results obtained from our methods far exceed those of the previous methods which rely on linear programming (LP) relaxation and semidefinite programming (SDP).
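
    To make the role of ℓ1 minimization concrete, the sketch below recovers a sparse vector from compressed measurements by solving the basis-pursuit linear program min ||x||_1 subject to Ax = y with SciPy; this is a generic illustration of the recovery problem, not the null space verification algorithm proposed in the paper.

        import numpy as np
        from scipy.optimize import linprog

        def basis_pursuit(A, y):
            # Solve min ||x||_1 s.t. Ax = y by introducing t >= |x| and minimizing sum(t).
            m, n = A.shape
            c = np.concatenate([np.zeros(n), np.ones(n)])        # objective: sum of t
            A_eq = np.hstack([A, np.zeros((m, n))])              # equality constraint Ax = y
            I = np.eye(n)
            A_ub = np.vstack([np.hstack([I, -I]),                # x - t <= 0
                              np.hstack([-I, -I])])              # -x - t <= 0
            b_ub = np.zeros(2 * n)
            bounds = [(None, None)] * n + [(0, None)] * n
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)
            return res.x[:n]

        rng = np.random.default_rng(1)
        n, m, k = 60, 30, 4
        A = rng.standard_normal((m, n))
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        x_hat = basis_pursuit(A, A @ x_true)      # should closely match x_true for small k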

  2. Compressive Failure Mechanisms in Layered Materials

    DEFF Research Database (Denmark)

    Sørensen, Kim Dalsten

    Two important failure modes in fiber reinforced composite materials including layers and laminates occur under loading conditions dominated by compression in the layer direction. These two distinctly different failure modes are 1. buckling driven delamination 2. failure by strain localization...... into kink bands. The present thesis falls into two parts dealing with the two failure modes. In the first part of the thesis the effects of system geometry on buckling driven delamination is investigated. Previous work has focused on buckling driven delamination of surface layers on flat substrates...... of parameters for which the interface crack remains open and as a consequence a study of the effects of crack closure has been carried out. The other part of the thesis analyzes failure by kink band formation. More specifically a constitutive model developed to study kink band formation has been implemented...

  3. Serratia liquefaciens Infection of a Previously Excluded Popliteal Artery Aneurysm

    Directory of Open Access Journals (Sweden)

    A. Coelho

    Full Text Available Introduction: Popliteal artery aneurysms (PAAs) are rare in the general population, but they account for nearly 70% of peripheral arterial aneurysms. There are several possible surgical approaches including exclusion of the aneurysm and bypass grafting, or endoaneurysmorrhaphy and interposition of a prosthetic conduit. The outcomes following the first approach are favorable, but persistent blood flow in the aneurysm sac has been documented in up to one third of patients in the early post-operative setting. Complications from incompletely excluded aneurysms include aneurysm enlargement, local compression symptoms, and sac rupture. Notably, infection of a previously excluded and bypassed PAA is rare. This is the third reported case of PAA infection after exclusion and bypass grafting and the first due to Serratia liquefaciens. Methods: Relevant medical data were collected from the hospital database. Results: This case report describes a 54-year-old male patient, diagnosed with acute limb ischaemia due to a thrombosed PAA, submitted to emergency surgery with exclusion and venous bypass. A below the knee amputation was necessary 3 months later. Patient follow-up was lost until 7 years following surgical repair, when he was diagnosed with aneurysm sac infection with skin fistulisation. He had recently been diagnosed with alcoholic hepatic cirrhosis Child–Pugh Class B. The patient was successfully treated by aneurysm resection, soft tissue debridement and systemic antibiotics. Conclusion: PAA infection is a rare complication after exclusion and bypass procedures but should be considered in any patient with evidence of local or systemic infection. When a PAA infection is diagnosed, aneurysmectomy, local debridement, and intravenous antibiotic therapy are recommended. The “gold standard” method of PAA repair remains controversial. PAA excision or endoaneurysmorrhaphy avoids complications from incompletely excluded aneurysms, but is associated with

  4. Semantic Source Coding for Flexible Lossy Image Compression

    National Research Council Canada - National Science Library

    Phoha, Shashi; Schmiedekamp, Mendel

    2007-01-01

    Semantic Source Coding for Lossy Video Compression investigates methods for Mission-oriented lossy image compression, by developing methods to use different compression levels for different portions...

  5. The task of control digital image compression

    OpenAIRE

    TASHMANOV E.B.; MAMATOV M.S.

    2014-01-01

    In this paper we consider the relationship between control tasks and image compression losses. The main idea of this approach is to extract the structural lines of a simplified image and further compress the selected data

  6. Chest compression pauses during defibrillation attempts

    NARCIS (Netherlands)

    Deakin, Charles D.; Koster, Rudolph W.

    2016-01-01

    Purpose of review This article summarizes current knowledge of the causes and consequences of interruption of chest compressions during cardiopulmonary resuscitation. Recent findings Pauses in chest compressions occur during analysis of the heart rhythm, delivery of ventilation, interventions such

  7. Flux compression generators as plasma compression power sources

    International Nuclear Information System (INIS)

    Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.; Thomson, D.B.; Garn, W.B.

    1979-01-01

    A survey is made of applications where explosive-driven magnetic flux compression generators have been or can be used to directly power devices that produce dense plasmas. Representative examples are discussed that are specific to the theta pinch, the plasma gun, the dense plasma focus and the Z pinch. These examples are used to illustrate the high energy and power capabilities of explosive generators. An application employing a rocket-borne, generator-powered plasma gun emphasizes the size and weight potential of flux compression power supplies. Recent results from a local effort to drive a dense plasma focus are provided. Imploding liners are discussed in the context of both the theta and Z pinches.

  8. ADVANCED RECIPROCATING COMPRESSION TECHNOLOGY (ARCT)

    Energy Technology Data Exchange (ETDEWEB)

    Danny M. Deffenbaugh; Klaus Brun; Ralph E. Harris; J. Pete Harrell; Robert J. Mckee; J. Jeffrey Moore; Steven J. Svedeman; Anthony J. Smalley; Eugene L. Broerman; Robert A Hart; Marybeth G. Nored; Ryan S. Gernentz; Shane P. Siebenaler

    2005-12-01

    The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines. Most new reciprocating compression is and will be large, high-speed separable units. The major challenges with the fleet of slow-speed integral machines are: limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, which reduces reliability and integrity. While the best performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers are running down to 50% with the mean at about 80%. The major cause of this large disparity is installation losses in the pulsation control system. In the better performers, the losses are about evenly split between installation losses and valve losses. The major challenges for high-speed machines are: cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsation to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than for slow speeds and can be on the order of a few months. The thermal efficiency is 10% to 15% lower than slow-speed equipment with the best performance in the 75% to 80% range. The goal of this advanced reciprocating compression program is to develop the technology for both high speed and low speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity.

  9. The Effect of Al on the Compressibility of Silicate Perovskite

    Science.gov (United States)

    Walter, M. J.; Kubo, A.; Yoshino, T.; Koga, K. T.; Ohishi, Y.

    2003-12-01

    Experimental data on compressibility of aluminous silicate perovskite show widely disparate results. Several studies show that Al causes a dramatic increase in compressibility [1-3], while another study indicates a mild decrease in compressibility [4]. Here we report new results for the effect of Al on the room-temperature compressibility of perovskite using in situ X-ray diffraction in the diamond anvil cell from 30 to 100 GPa. We studied compressibility of perovskite in the system MgSiO3-Al2O3 in compositions with 0 to 25 mol% Al. Perovskite was synthesized from starting glasses using laser-heating in the DAC, with KBr as a pressure medium. Diffraction patterns were obtained using monochromatic radiation and an imaging plate detector at beamline BL10XU, SPring8, Japan. Addition of Al into the perovskite structure causes systematic increases in orthorhombic distortion and unit cell volume at ambient conditions (V_0). Compression of the perovskite unit cell is anisotropic, with the a axis about 25% and 3% more compressive than the b and c axes, respectively. The magnitude of orthorhombic distortion increases with pressure, but aluminous perovskite remains stable to at least 100 GPa. Our results show that Al causes only a mild increase in compressibility, with the bulk modulus (K_0) decreasing at a rate of 0.7 GPa per 0.01 X_Al. This increase in compressibility is consistent with recent ab initio calculations if Al mixes into both the 6- and 8-coordinated sites by coupled substitution [5], where 2 Al^3+ = Mg^2+ + Si^4+. Our results together with those of [4] indicate that this substitution mechanism predominates throughout the lower mantle. Previous mineralogic models indicating the upper and lower mantle are compositionally similar in terms of major elements remain effectively unchanged because solution of 5 mol% Al into perovskite has a minor effect on density. 1. Zhang & Weidner (1999). Science 284, 782-784. 2. Kubo et al. (2000) Proc. Jap. Acad. 76B, 103-107. 3. Daniel et al
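
    For context, the bulk modulus K_0 quoted above is conventionally obtained by fitting the measured pressure-volume data to an equation of state; the abstract does not state which form was used, but a common choice for diamond anvil cell data is the third-order Birch-Murnaghan equation, sketched below in LaTeX with standard definitions.

        K_0 = -V \left( \frac{\partial P}{\partial V} \right)_{T,\, V = V_0}

        P(V) = \frac{3}{2} K_0 \left[ \left( \frac{V_0}{V} \right)^{7/3} - \left( \frac{V_0}{V} \right)^{5/3} \right]
               \left\{ 1 + \frac{3}{4} \left( K_0' - 4 \right) \left[ \left( \frac{V_0}{V} \right)^{2/3} - 1 \right] \right\}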

  10. Crypto-Compression Using TEA's Algorithm and a RLC Compression

    OpenAIRE

    Borie , Jean-Claude; Puech , William; Dumas , Michel

    2004-01-01

    International audience; In this paper, we discuss the secure transfer of medical images. We propose two cryptosystems: the first is a very fast block algorithm, the TEA (Tiny Encryption Algorithm), and the second is a stream cipher based on the Vigenère cipher. We show the differences between them, especially concerning the combination of image encryption and compression. Results on medical images are given to illustrate the two methods.
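
    For reference, a minimal sketch of the standard published TEA block encryption routine in Python is given below; the exact variant, mode of operation, and key handling used in the paper are not specified there, so this is only the textbook form.

        def tea_encrypt_block(v, key, rounds=32):
            # v: pair of 32-bit words (one 64-bit block); key: four 32-bit words (128-bit key).
            v0, v1 = v
            k0, k1, k2, k3 = key
            delta, mask, s = 0x9E3779B9, 0xFFFFFFFF, 0
            for _ in range(rounds):
                s = (s + delta) & mask
                v0 = (v0 + ((((v1 << 4) & mask) + k0) ^ ((v1 + s) & mask) ^ ((v1 >> 5) + k1))) & mask
                v1 = (v1 + ((((v0 << 4) & mask) + k2) ^ ((v0 + s) & mask) ^ ((v0 >> 5) + k3))) & mask
            return v0, v1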

  11. On Normalized Compression Distance and Large Malware

    OpenAIRE

    Borbely, Rebecca Schuller

    2015-01-01

    Normalized Compression Distance (NCD) is a popular tool that uses compression algorithms to cluster and classify data in a wide range of applications. Existing discussions of NCD's theoretical merit rely on certain theoretical properties of compression algorithms. However, we demonstrate that many popular compression algorithms don't seem to satisfy these theoretical properties. We explore the relationship between some of these properties and file size, demonstrating that this theoretical pro...
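
    The NCD itself has a simple closed form, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the length of the compressed input; a minimal sketch using zlib as the compressor is shown below (any real compressor only approximates the theoretical properties discussed above, which is exactly the paper's point).

        import zlib

        def compressed_length(data: bytes) -> int:
            return len(zlib.compress(data, 9))

        def ncd(x: bytes, y: bytes) -> float:
            # Close to 0 for near-identical inputs, close to 1 for unrelated inputs.
            cx, cy, cxy = compressed_length(x), compressed_length(y), compressed_length(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)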

  12. Size Adaptive Region Based Huffman Compression Technique

    OpenAIRE

    Nandi, Utpal; Mandal, Jyotsna Kumar

    2014-01-01

    A lossless compression technique is proposed which uses a variable-length region formation technique to divide the input file into a number of variable-length regions. Huffman codes are obtained for the entire file after formation of the regions. Symbols of each region are compressed one by one. Comparisons are made among the proposed technique, the Region Based Huffman compression technique and the classical Huffman technique. The proposed technique offers a better compression ratio for some files than the other two.
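
    For contrast with the region-based variant, a minimal sketch of classical Huffman code construction over whole-file statistics is shown below; the region formation step of the proposed technique is not reproduced, and the dictionary-based representation of the code tree is an implementation convenience.

        import heapq
        from collections import Counter

        def huffman_codes(data: bytes) -> dict:
            # Returns a map from byte value to its prefix-free Huffman code string.
            heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(Counter(data).items())]
            heapq.heapify(heap)
            next_id = len(heap)
            while len(heap) > 1:
                f1, _, left = heapq.heappop(heap)
                f2, _, right = heapq.heappop(heap)
                merged = {s: "0" + code for s, code in left.items()}
                merged.update({s: "1" + code for s, code in right.items()})
                heapq.heappush(heap, (f1 + f2, next_id, merged))
                next_id += 1
            return heap[0][2]

        codes = huffman_codes(b"abracadabra")
        encoded = "".join(codes[b] for b in b"abracadabra")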

  13. Cascaded quadratic soliton compression at 800 nm

    DEFF Research Database (Denmark)

    Bache, Morten; Bang, Ole; Moses, Jeffrey

    2007-01-01

    We study soliton compression in quadratic nonlinear materials at 800 nm, where group-velocity mismatch dominates. We develop a nonlocal theory showing that efficient compression depends strongly on characteristic nonlocal time scales related to pulse dispersion.

  14. Still image and video compression with MATLAB

    CERN Document Server

    Thyagarajan, K

    2010-01-01

    This book describes the principles of image and video compression techniques and introduces current and popular compression standards, such as the MPEG series. Derivations of relevant compression algorithms are developed in an easy-to-follow fashion. Numerous examples are provided in each chapter to illustrate the concepts. The book includes complementary software written in MATLAB SIMULINK to give readers hands-on experience in using and applying various video compression methods. Readers can enhance the software by including their own algorithms.

  15. Relationship between the edgewise compression strength of ...

    African Journals Online (AJOL)

    The results of this study were used to determine the linear regression constants in the Maltenfort model by correlating the measured board edgewise compression strength (ECT) with the predicted strength, using the paper components' compression strengths, measured with the short-span compression test (SCT) and the ...

  16. Subjective evaluation of compressed image quality

    Science.gov (United States)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error on the reconstructed image and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and lossy data compression methods, we have evaluated subjectively the quality of medical images compressed with two different methods, intraframe and interframe coding algorithms. The evaluated raw data were analyzed statistically to measure interrater reliability and reliability of an individual reader. Also, the analysis of variance was used to identify which compression method is better statistically, and from what compression ratio onward the quality of a compressed image is judged poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different compression ratios: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm yields significantly better quality than the 2-D block DCT at the 0.05 significance level. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  17. H.264/AVC Video Compression on Smartphones

    Science.gov (United States)

    Sharabayko, M. P.; Markov, N. G.

    2017-01-01

    In this paper, we studied the usage of H.264/AVC video compression tools by the flagship smartphones. The results show that only a subset of tools is used, meaning that there is still a potential to achieve higher compression efficiency within the H.264/AVC standard, but the most advanced smartphones are already reaching the compression efficiency limit of H.264/AVC.

  18. Performance of a Discrete Wavelet Transform for Compressing Plasma Count Data and its Application to the Fast Plasma Investigation on NASA's Magnetospheric Multiscale Mission

    Science.gov (United States)

    Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.

    2015-01-01

    Plasma measurements in space are becoming increasingly faster, higher resolution, and distributed over multiple instruments. As raw data generation rates can exceed available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1; however, future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used effectively to compress count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources are examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error; the latter indicates that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
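
    The flight implementation uses a CCSDS DWT/BPE ASIC; purely as an illustration of the transform-and-threshold idea (without the bit plane encoder), a sketch using the PyWavelets package might look like the following, where the wavelet, decomposition level, and kept-coefficient fraction are arbitrary assumptions.

        import numpy as np
        import pywt  # PyWavelets

        def dwt_threshold_compress(image, wavelet="bior4.4", level=3, keep_fraction=0.05):
            # Transform, keep only the largest coefficients, reconstruct, and report the
            # worst-case error. A real DWT/BPE chain would bit-plane encode the kept
            # coefficients instead of simply zeroing the rest.
            coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
            arr, slices = pywt.coeffs_to_array(coeffs)
            threshold = np.quantile(np.abs(arr), 1.0 - keep_fraction)
            arr_kept = np.where(np.abs(arr) >= threshold, arr, 0.0)
            coeffs_kept = pywt.array_to_coeffs(arr_kept, slices, output_format="wavedec2")
            recon = pywt.waverec2(coeffs_kept, wavelet)[:image.shape[0], :image.shape[1]]
            return arr_kept, float(np.max(np.abs(recon - image)))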

  19. The VELOCE pulsed power generator for isentropic compression experiments

    Energy Technology Data Exchange (ETDEWEB)

    Ao, Tommy [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Dynamic Material Properties; Asay, James Russell [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Dynamic Material Properties; Chantrenne, Sophie J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Dynamic Material Properties; Hickman, Randall John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Dynamic Material Properties; Willis, Michael David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Dynamic Material Properties; Shay, Andrew W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Dynamic Material Properties; Grine-Jones, Suzi A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Dynamic Material Properties; Hall, Clint Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Dynamic Material Properties; Baer, Melvin R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Engineering Sciences Center

    2007-12-01

    Veloce is a medium-voltage, high-current, compact pulsed power generator developed for isentropic and shock compression experiments. Because of its increased availability and ease of operation, Veloce is well suited for studying isentropic compression experiments (ICE) in much greater detail than previously allowed with larger pulsed power machines such as the Z accelerator. Since the compact pulsed power technology used for dynamic material experiments has not been previously used, it is necessary to examine several key issues to ensure that accurate results are obtained. In the present experiments, issues such as panel and sample preparation, uniformity of loading, and edge effects were extensively examined. In addition, magnetohydrodynamic (MHD) simulations using the ALEGRA code were performed to interpret the experimental results and to design improved sample/panel configurations. Examples of recent ICE studies on aluminum are presented.

  20. Using autoencoders for mammogram compression.

    Science.gov (United States)

    Tan, Chun Chet; Eswaran, Chikkannan

    2011-02-01

    This paper presents the results obtained for medical image compression using autoencoder neural networks. Since mammograms (medical images) are usually of big sizes, training of autoencoders becomes extremely tedious and difficult if the whole image is used for training. We show in this paper that the autoencoders can be trained successfully by using image patches instead of the whole image. The compression performances of different types of autoencoders are compared based on two parameters, namely mean square error and structural similarity index. It is found from the experimental results that the autoencoder which does not use Restricted Boltzmann Machine pre-training yields better results than those which use this pre-training method.
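
    A minimal sketch of the patch-based training idea described above, written with PyTorch; the patch size, network sizes, and training loop are illustrative assumptions, and the Restricted Boltzmann Machine pre-training variant compared in the paper is not reproduced.

        import torch
        import torch.nn as nn

        PATCH = 16   # assumed patch side length: 16x16 patches -> 256 inputs
        CODE = 32    # bottleneck width; 256/32 gives a nominal 8:1 compression of the patch

        class PatchAutoencoder(nn.Module):
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(PATCH * PATCH, 128), nn.ReLU(),
                                             nn.Linear(128, CODE))
                self.decoder = nn.Sequential(nn.Linear(CODE, 128), nn.ReLU(),
                                             nn.Linear(128, PATCH * PATCH), nn.Sigmoid())

            def forward(self, x):
                return self.decoder(self.encoder(x))

        def train(patches, epochs=200, lr=1e-3):
            # patches: float tensor of shape (N, PATCH*PATCH), scaled to [0, 1].
            model = PatchAutoencoder()
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            loss_fn = nn.MSELoss()
            for _ in range(epochs):
                opt.zero_grad()
                loss = loss_fn(model(patches), patches)
                loss.backward()
                opt.step()
            return model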

  1. Instability of ties in compression

    DEFF Research Database (Denmark)

    Buch-Hansen, Thomas Cornelius

    2013-01-01

    exact instability solutions are complex to derive, not to mention the extra complexity introducing dimensional instability from the temperature gradients. Using an inverse variable substitution and comparing an exact theory with an analytical instability solution a method to design tie...... the temperature gradient between the outer and the inner wall, which results in critical increase of the bending moments in the ties. Since the ties are loaded by combined compression and moment forces, the loadbearing capacity is derived from instability equilibrium equations. Most of them are iterative, since......-connectors in cavity walls was developed. The method takes into account constraint conditions limiting the free length of the wall tie, and the instability in case of pure compression which gives an optimal load bearing capacity. The model is illustrated with examples from praxis....

  2. Bronchoscopic guidance of endovascular stenting limits airway compression.

    Science.gov (United States)

    Ebrahim, Mohammad; Hagood, James; Moore, John; El-Said, Howaida

    2015-04-01

    Bronchial compression as a result of pulmonary artery and aortic arch stenting may cause significant respiratory distress. We set out to limit airway narrowing by endovascular stenting, by using simultaneous flexible bronchoscopy and graduated balloon stent dilatation, or balloon angioplasty to determine maximum safe stent diameter. Between August 2010 and August 2013, patients with suspected airway compression by adjacent vascular structures, underwent CT or a 3D rotational angiogram to evaluate the relationship between the airway and the blood vessels. If these studies showed close proximity of the stenosed vessel and the airway, simultaneous bronchoscopy and graduated stent re-dilation or graduated balloon angioplasty were performed. Five simultaneous bronchoscopy and interventional catheterization procedures were performed in four patients. Median age/weight was 33 (range 9-49) months and 14 (range 7.6-24) kg, respectively. Three had hypoplastic left heart syndrome, and one had coarctation of the aorta (CoA). All had confirmed or suspected left main stem bronchial compression. In three procedures, serial balloon dilatation of a previously placed stent in the CoA was performed and bronchoscopy was used to determine the safest largest diameter. In the other two procedures, balloon testing with simultaneous bronchoscopy was performed to determine the stent size that would limit compression of the adjacent airway. In all cases, simultaneous bronchoscopy allowed selection of an ideal caliber of the stent that optimized vessel diameter while minimizing compression of the adjacent airway. In cases at risk for airway compromise, flexible bronchoscopy is a useful tool to guide endovascular stenting. Maximum safe stent diameter can be determined without risking catastrophic airway compression. © 2014 Wiley Periodicals, Inc.

  3. Modelling and simulation of the compressible turbulence in supersonic shear flows

    International Nuclear Information System (INIS)

    Guezengar, Dominique

    1997-02-01

    This research thesis addresses the modelling of some specific physical problems of fluid mechanics: compressibility (issue of mixing layers), large variations of volumetric mass (boundary layers), and anisotropy (compression ramps). After a presentation of the chosen physical modelling and numerical approximation, the author pays attention to flows in the vicinity of a wall, and to boundary conditions. The next part addresses existing compressibility models and their application to the calculation of supersonic mixing layers. A critical assessment is also performed through calculations of boundary layers and of compression ramps. The next part addresses problems related to large variations of volumetric mass which are not taken into account by compressibility models. A modification is thus proposed for the diffusion term, and is tested for the case of supersonic boundary layers and of mixing layers with high density rates. Finally, anisotropy effects are addressed through the implementation of Explicit Algebraic Stress k-omega Turbulence models (EARSM), and their tests on previously studied cases [fr]

  4. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Full Text Available Abstract Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.

  5. Studies on diametral compressive strength

    International Nuclear Information System (INIS)

    Awaji, Hideo; Sato, Sennosuke.

    1978-01-01

    A new approach to the diametral compressive tests using circular anvils is proposed on the basis of the analytical study given in the preceding paper. In this approach, the collapse at the contact edges can be avoided. The experimental results obtained by this method for several kinds of graphite and Italian Ondagata light marble are compared with those of the uniaxial tensile strength, and the discrepancy is discussed for a wide range of brittle materials. (auth.)

  6. [Compression treatment for burned skin].

    Science.gov (United States)

    Jaafar, Fadhel; Lassoued, Mohamed A; Sahnoun, Mahdi; Sfar, Souad; Cheikhrouhou, Morched

    2012-02-01

    The regularity of a compressive knit is defined as its ability to perform its function on burned skin. This property is essential to avoid rejection of the material or toxicity problems. Aim: to make knits biocompatible with severely burned human skin. We fabricate knits from elastic material. To ensure good adhesion to the skin, the elastic material is typically knitted with a tight loop. The length of yarn absorbed per stitch and the raw material are changed with each sample. The physical properties of each sample are measured and compared. Surface modifications are made to these samples by impregnation with microcapsules based on jojoba oil. The knits are compressive, elastic in all directions, light, thin, comfortable, and washable for hygiene reasons. In addition, they recover their compressive properties after washing. The jojoba oil microcapsules hydrate the burned human skin. This moisturizing contributes to the firmness of the wound and gives flexibility to the skin. Compressive knits are biocompatible with burned skin. The mixture of natural and synthetic fibers is irreplaceable in terms of comfort and regularity.

  7. Splanchnic Compression Improves the Efficacy of Compression Stockings to Prevent Orthostatic Intolerance

    Science.gov (United States)

    Platts, Steven H.; Brown, A. K.; Lee, S. M.; Stenger, M. B.

    2009-01-01

    Purpose: Post-spaceflight orthostatic intolerance (OI) is observed in 20-30% of astronauts. Previous data from our laboratory suggest that this is largely a result of decreased venous return. Currently, NASA astronauts wear an anti-gravity suit (AGS) which consists of inflatable air bladders over the calves, thighs and abdomen, typically pressurized from 26 to 78 mmHg. We recently determined that thigh-high graded compression stockings (JOBST®, 55 mmHg at ankle, 6 mmHg at top of thigh) were effective, though to a lesser degree than the AGS. The purpose of this study was to evaluate the addition of splanchnic compression to prevent orthostatic intolerance. Methods: Ten healthy volunteers (6M, 4F) participated in three 80° head-up tilts on separate days while (1) normovolemic (2) hypovolemic w/ breast-high compression stockings (BS) (JOBST®, 55 mmHg at the ankle, 6 mmHg at top of thigh, 12 mmHg over abdomen) (3) hypovolemic w/o stockings. Hypovolemia was induced by IV infusion of furosemide (0.5 mg/kg) and 48 hrs of a low salt diet to simulate plasma volume loss following space flight. Hypovolemic testing occurred 24 and 48 hrs after furosemide. One-way repeated measures ANOVA, with Bonferroni corrections, was used to test for differences in blood pressure and heart rate responses to head-up tilt; stand times were compared using a Kaplan-Meier survival analysis. Results: BS were effective in preventing OI and presyncope in hypovolemic test subjects (p = 0.015). BS prevented the decrease in systolic blood pressure seen during tilt in normovolemia (p < 0.001) and hypovolemia w/o countermeasure (p = 0.005). BS also prevented the decrease in diastolic blood pressure seen during tilt in normovolemia (p = 0.006) and hypovolemia w/o countermeasure (p = 0.041). Hypovolemia w/o countermeasure showed a higher tilt-induced heart rate increase (p = 0.022) than seen in normovolemia; heart rate while wearing BS was not different than normovolemia (p = 0.353). Conclusion: BS may

  8. Rates of induced abortion in Denmark according to age, previous births and previous abortions

    Directory of Open Access Journals (Sweden)

    Marie-Louise H. Hansen

    2009-11-01

    Full Text Available Background: Whereas the effects of various socio-demographic determinants on a woman's risk of having an abortion are relatively well-documented, less attention has been given to the effect of previous abortions and births. Objective: To study the effect of previous abortions and births on Danish women's risk of an abortion, in addition to a number of demographic and personal characteristics. Data and methods: From the Fertility of Women and Couples Dataset we obtained data on the number of live births and induced abortions by year (1981-2001), age (16-39), county of residence and marital status. Logistic regression analysis was used to estimate the influence of the explanatory variables on the probability of having an abortion in a relevant year. Main findings and conclusion: A woman's risk of having an abortion increases with the number of previous births and previous abortions. Some interactions were found in the way a woman's risk of abortion varies with calendar year, age and parity. The risk of an abortion for women with no children decreases while the risk of an abortion for women with children increases over time. Furthermore, the risk of an abortion decreases with age, but relatively more so for women with children compared to childless women. Trends for teenagers are discussed in a separate section.

  9. A Deterministic Construction of Projection matrix for Adaptive Trajectory Compression

    OpenAIRE

    Rana, Rajib; Yang, Mingrui; Wark, Tim; Chou, Chun Tung; Hu, Wen

    2013-01-01

Compressive Sensing, which offers exact reconstruction of a sparse signal from a small number of measurements, has tremendous potential for trajectory compression. In order to optimize the compression, trajectory compression algorithms need to adapt the compression ratio subject to the compressibility of the trajectory. Intuitively, the trajectory of an object moving on a straight road is more compressible than the trajectory of an object moving on winding roads; therefore, higher compression ...

  10. Compression for preventing recurrence of venous ulcers.

    Science.gov (United States)

    Nelson, E Andrea; Bell-Syer, Sally E M

    2014-09-09

Up to 1% of adults will have a leg ulcer at some time. The majority of leg ulcers are venous in origin and are caused by high pressure in the veins due to blockage or weakness of the valves in the veins of the leg. Prevention and treatment of venous ulcers is aimed at reducing the pressure either by removing/repairing the veins, or by applying compression bandages/stockings to reduce the pressure in the veins. The majority of venous ulcers heal with compression bandages; however, ulcers frequently recur. Clinical guidelines therefore recommend that people continue to wear compression, usually in the form of hosiery (tights, stockings, socks) after their ulcer heals, to prevent recurrence. To assess the effects of compression (socks, stockings, tights, bandages) in preventing the recurrence of venous ulcers. If compression does prevent ulceration compared with no compression, then to identify whether there is evidence to recommend particular levels of compression (high, medium or low, for example), types of compression, or brands of compression to prevent ulcer recurrence after healing. For this second update we searched The Cochrane Wounds Group Specialised Register (searched 4 September 2014), which includes the results of regular searches of MEDLINE, EMBASE and CINAHL; The Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2014, Issue 8). Randomised controlled trials (RCTs) evaluating compression bandages or hosiery for preventing the recurrence of venous ulcers. Two review authors undertook data extraction and risk of bias assessment independently. Four trials (979 participants) were eligible for inclusion in this review. One trial in patients with recently healed venous ulcers (n = 153) compared recurrence rates with and without compression and found that compression significantly reduced ulcer recurrence at six months (Risk ratio (RR) 0.46, 95% CI 0.27 to 0.76). Two trials compared high-compression hosiery (equivalent to UK class 3) with

  11. Vertebral compression exacerbates osteoporotic pain in an ovariectomy-induced osteoporosis rat model.

    Science.gov (United States)

    Suzuki, Miyako; Orita, Sumihisa; Miyagi, Masayuki; Ishikawa, Tetsuhiro; Kamoda, Hiroto; Eguchi, Yawara; Arai, Gen; Yamauchi, Kazuyo; Sakuma, Yoshihiro; Oikawa, Yasuhiro; Kubota, Go; Inage, Kazuhide; Sainoh, Takeshi; Kawarai, Yuya; Yoshino, Kensuke; Ozawa, Tomoyuki; Aoki, Yasuchika; Toyone, Tomoaki; Takahashi, Kazuhisa; Kawakami, Mamoru; Ohtori, Seiji; Inoue, Gen

    2013-11-15

    Basic pain study using osteoporotic rodent models. To examine alterations in distribution of pain-related neuropeptides after compressive force on osteoporotic vertebrae and their chronic pain-related properties. We previously reported significantly increased production of calcitonin gene-related peptide (CGRP), a marker of inflammatory pain, in the dorsal root ganglia (DRG) of vertebrae in osteoporosis-model ovariectomized (OVX) rats. Here, we hypothesized that longitudinal compressive force on vertebrae can affect osteoporotic pain properties, which has not been examined yet. OVX rats were used as the osteoporosis model. Female Sprague-Dawley rats were prepared and Fluoro-Gold (FG) neurotracer was applied to the periosteal surface of the Co5 vertebra. After FG labeling, the animals were divided into 4 groups: Control, Control + compression, OVX, and OVX + compression. The Control groups were not ovariectomized. In the compression groups, K-wires were stabbed transversely through Co4 and Co6 with Co5 compressed longitudinally by rubber bands bridged between the 2. One, 2, 4, and 8 weeks after surgery, bilateral S1 to S3 DRGs were excised for immunofluorescence assays. Expression of CGRP and activating transcription factor 3, a marker of neuronal injury, were compared among the 4 groups. Sustained upregulation of CGRP in DRG neurons was observed after compression of the Co5 vertebra, and Co5 compression caused significant increase in CGRP production in DRG neurons, whereas a greater level of activating transcription factor 3 upregulation was observed in DRGs in OVX rats after dynamic vertebral compression 8 weeks after surgery, implying potential neuropathic pain. There was sustained upregulation of CGRP and activating transcription factor 3 in DRGs in osteoporotic model rats compared with controls, and levels were further enhanced by dynamic vertebral compression. These findings imply that dynamic compression stress on vertebrae can exacerbate osteoporotic pain by

  12. Congruency sequence effects are driven by previous-trial congruency, not previous-trial response conflict

    OpenAIRE

    Weissman, Daniel H.; Carp, Joshua

    2013-01-01

    Congruency effects in distracter interference tasks are often smaller after incongruent trials than after congruent trials. However, the sources of such congruency sequence effects (CSEs) are controversial. The conflict monitoring model of cognitive control links CSEs to the detection and resolution of response conflict. In contrast, competing theories attribute CSEs to attentional or affective processes that vary with previous-trial congruency (incongruent vs. congruent). The present study s...

  13. Simulation of deep coalbed methane permeability and production assuming variable pore volume compressibility

    Energy Technology Data Exchange (ETDEWEB)

    Tonnsen, R.R.; Miskimins, J.L. [Colorado School of Mines, Golden, CO (United States)

    2010-07-01

    This paper presented an alternative view of deep coalbed methane (CBM) permeability, questioning the assumption of using constant pore volume compressibility for modelling permeability changes that would be part of deep CBM production. The sensitivity of coal permeability to changes in stress has led to the assumption that deep coals have limited permeability, but this belief takes for granted that a coal's porosity/cleat system maintains a constant pore volume compressibility during changing stress conditions. Exponentially declining (variable) pore volume compressibility was employed to model changes in permeability related to changing stress conditions within the coalbed. The assumption of constant or variable pore volume compressibility affects modelled permeability changes. Deep coals may maintain higher values during production than previously indicated. The modelled compressibility and permeability results were applied to the simulation of deep CBM reservoirs to determine the practical difference the compressibility assumption has on a coal's simulated production. With the assumption of variable pore volume compressibility, the modelled permeability decreases less than previously indicated, resulting in greater production. The simulation results may justify exploration for deeper CBM reservoirs, although economic production rates are shown to be possible only when the coal is modelled with zero water saturation in the cleat system. 21 refs., 4 tabs., 9 figs.

  14. New insights on compressible turbulent mixing in spectral space

    Science.gov (United States)

    Panickacheril John, John; Donzis, Diego; Sreenivasan, Katepalli

    2017-11-01

Previous studies have shown that dilatational forcing has an effect on the dynamics of the velocity field in compressible turbulence. However, there have been virtually no studies of these effects on scalar mixing, of the specific mechanisms responsible for compressibility effects, or of the scaling with governing parameters. Using a large DNS database, generated with different ratios of solenoidal to dilatational forcing, we find that the commonly used turbulent Mach number (Mt) fails to parametrize mixing efficiency. Instead, the dilatational Mach number (Mtd) is a better scaling parameter for observing non-monotonic trends. We observe an accumulation of energy at large scales when compressibility is high; this has an effect on the energy and scalar cascades. We analyze both budgets to assess changes in global and inter-scale statistics for each mode and their interactions. For moderate compressibility levels, the normalized spectra of both modes do not collapse even when their own dissipation rates are used. Furthermore, a dilatational cascade is formed at high compressibility levels, with advection terms scaling with χ, the ratio of dilatational to total kinetic energy. Results on scalar dissipation and their relation to thermodynamic variables are also presented. Support from NSF is gratefully acknowledged.

  15. Adaptation to time-compressed speech: phonological determinants.

    Science.gov (United States)

    Sebastián-Gallés, N; Dupoux, E; Costa, A; Mehler, J

    2000-05-01

    Perceptual adaptation to time-compressed speech was analyzed in two experiments. Previous research has suggested that this adaptation phenomenon is language specific and takes place at the phonological level. Moreover, it has been proposed that adaptation should only be observed for languages that are rhythmically similar. This assumption was explored by studying adaptation to different time-compressed languages in Spanish speakers. In Experiment 1, the performances of Spanish-speaking subjects who adapted to Spanish, Italian, French, English, and Japanese were compared. In Experiment 2, subjects from the same population were tested with Greek sentences compressed to two different rates. The results showed adaptation for Spanish, Italian, and Greek and no adaptation for English and Japanese, with French being an intermediate case. To account for the data, we propose that variables other than just the rhythmic properties of the languages, such as the vowel system and/or the lexical stress pattern, must be considered. The Greek data also support the view that phonological, rather than lexical, information is a determining factor in adaptation to compressed speech.

  16. Splanchnic Compression Improves the Efficacy of Compression Stockings to Prevent Orthostatic Intolerance

    Science.gov (United States)

    Platts, Steven H.; Brown, A. K.; Lee, S. M.; Stenger, M. B.

    2009-01-01

Purpose: Post-spaceflight orthostatic intolerance (OI) is observed in 20-30% of astronauts. Previous data from our laboratory suggest that this is largely a result of decreased venous return. Currently, NASA astronauts wear an anti-gravity suit (AGS) which consists of inflatable air bladders over the calves, thighs and abdomen, typically pressurized from 26 to 78 mmHg. We recently determined that thigh-high graded compression stockings (JOBST, 55 mmHg at ankle, 6 mmHg at top of thigh) were effective, though to a lesser degree than the AGS. The purpose of this study was to evaluate the addition of splanchnic compression to prevent orthostatic intolerance. Methods: Ten healthy volunteers (6M, 4F) participated in three 80° head-up tilts on separate days while (1) normovolemic, (2) hypovolemic w/ breast-high compression stockings (BS) (JOBST(R), 55 mmHg at the ankle, 6 mmHg at top of thigh, 12 mmHg over abdomen), and (3) hypovolemic w/o stockings. Hypovolemia was induced by IV infusion of furosemide (0.5 mg/kg) and 48 hrs of a low-salt diet to simulate plasma volume loss following space flight. Hypovolemic testing occurred 24 and 48 hrs after furosemide. One-way repeated measures ANOVA, with Bonferroni corrections, was used to test for differences in blood pressure and heart rate responses to head-up tilt; stand times were compared using a Kaplan-Meier survival analysis. Results: BS were effective in preventing OI and presyncope in hypovolemic test subjects (p = 0.015). BS prevented the decrease in systolic blood pressure seen during tilt in normovolemia (p < 0.001) and hypovolemia w/o countermeasure (p = 0.005). BS also prevented the decrease in diastolic blood pressure seen during tilt in normovolemia (p = 0.006) and hypovolemia w/o countermeasure (p = 0.041). Hypovolemia w/o countermeasure showed a higher tilt-induced heart rate increase (p = 0.022) than seen in normovolemia; heart rate while wearing BS was not different from normovolemia (p = 0.353). Conclusion: BS may … high garments. These stockings are readily available, inexpensive, and can be worn for days following landing as astronauts re-adapt to Earth gravity.

  17. Dependence of compressive strength of green compacts on pressure, density and contact area of powder particles

    International Nuclear Information System (INIS)

    Salam, A.; Akram, M.; Shahid, K.A.; Javed, M.; Zaidi, S.M.

    1994-08-01

The relationship between green compressive strength and compacting pressure as well as green density has been investigated for uniaxially pressed aluminium powder compacts in the range 0 - 520 MPa. Two linear relationships occurred between compacting pressure and green compressive strength, corresponding to powder compaction stages II and III respectively, the increase in strength with increasing pressure being large during stage II and quite small in stage III. On the basis of both the experimental results and a previous model of cold compaction of powder particles, relationships between green compressive strength and the green density and interparticle contact area of the compacts have been established. (author) 9 figs

  18. Experimental investigation on yield behavior of PMMA under combined shear–compression loading

    Directory of Open Access Journals (Sweden)

    Jianjun Zhang

Full Text Available The work experimentally studies the yielding behavior of polymethyl methacrylate (PMMA) at three different loading rates through a developed combined shear–compression test technique, which comprises a universal materials testing machine, metal blocks with double-beveled ends (the combined shear–compression loading setup) and a column sleeve made of Teflon. The results show that the failure loci agree well with theoretical predictions involving the strain rate dependence, which indicates the validity of this test method. Additionally, the experimental data enrich the previous experimental work on polymer yield surfaces in the principal stress space. Keywords: PMMA, Mechanical properties, Engineering plastic, Combined shear–compression, Yield surface

  19. [A brief history of resuscitation - the influence of previous experience on modern techniques and methods].

    Science.gov (United States)

    Kucmin, Tomasz; Płowaś-Goral, Małgorzata; Nogalski, Adam

    2015-02-01

Cardiopulmonary resuscitation (CPR) is a relatively novel branch of medical science; however, the first descriptions of mouth-to-mouth ventilation are to be found in the Bible, and the literature is full of descriptions of different resuscitation methods - from flagellation and ventilation with bellows, through hanging the victims upside down and compressing the chest in order to stimulate ventilation, to rectal fumigation with tobacco smoke. The modern history of CPR starts with Kouwenhoven et al., who in 1960 published a paper regarding heart massage through chest compressions. Shortly after that, in 1961, Peter Safar presented a paradigm promoting opening the airway, performing rescue breaths and chest compressions. The first CPR guidelines were published in 1966. Since that time the guidelines have been modified and improved numerous times by the two leading world expert organizations, the ERC (European Resuscitation Council) and the AHA (American Heart Association), and published in a new version every 5 years; currently, the 2010 guidelines are in force. In this paper the authors attempt to present the history of the development of resuscitation techniques and methods and to assess the influence of previous lifesaving methods on today's technologies, equipment and guidelines, which make it possible to help those women and men whose lives are in danger due to sudden cardiac arrest. © 2015 MEDPRESS.

  20. Effective radiation attenuation calibration for breast density: compression thickness influences and correction

    OpenAIRE

    Thomas Jerry A; Cao Ke; Heine John J

    2010-01-01

    Abstract Background Calibrating mammograms to produce a standardized breast density measurement for breast cancer risk analysis requires an accurate spatial measure of the compressed breast thickness. Thickness inaccuracies due to the nominal system readout value and compression paddle orientation induce unacceptable errors in the calibration. Method A thickness correction was developed and evaluated using a fully specified two-component surrogate breast model. A previously developed calibrat...

  1. Episodic cauda equina compression from an intradural lumbar herniated disc: a case of 'floppy disc'.

    Science.gov (United States)

    Nagaria, J; Chan, Cc; Kamel, Mh; McEvoy, L; Bolger, C

    2011-09-01

Intradural disc herniation (IDDH) is a rare complication of intervertebral disc disease and comprises 0.26-0.30% of all herniated discs, with 92% of them located in the lumbar region (1). We present a case of IDDH that presented with intermittent symptoms and signs of cauda equina compression. We were unable to find in the literature any previously described cases of intermittent cauda equina compression from a herniated intradural disc fragment leading to a "floppy disc syndrome". © JSCR.

  2. Improved Approximate String Matching and Regular Expression Matching on Ziv-Lempel Compressed Texts

    DEFF Research Database (Denmark)

    Bille, Philip; Fagerberg, Rolf; Gørtz, Inge Li

    2007-01-01

We study the approximate string matching and regular expression matching problem for the case when the text to be searched is compressed with the Ziv-Lempel adaptive dictionary compression schemes. We present a time-space trade-off that leads to algorithms improving the previously known complexities for both problems. In particular, we significantly improve the space bounds. In practical applications the space is likely to be a bottleneck and therefore this is of crucial importance.

  3. Improved Approximate String Matching and Regular Expression Matching on Ziv-Lempel Compressed Texts

    DEFF Research Database (Denmark)

    Bille, Philip; Fagerberg, Rolf; Gørtz, Inge Li

    2009-01-01

We study the approximate string matching and regular expression matching problem for the case when the text to be searched is compressed with the Ziv-Lempel adaptive dictionary compression schemes. We present a time-space trade-off that leads to algorithms improving the previously known complexities for both problems. In particular, we significantly improve the space bounds, which in practical applications are likely to be a bottleneck.

  4. Compressible turbulence in one dimension

    Science.gov (United States)

    Fleischer, Jason Wolf

    1999-11-01

The Burgers' model of compressible fluid dynamics in one dimension is extended to include the effects of pressure back-reaction. The new system consists of two coupled equations: Burgers' equation with a pressure gradient (essentially the 1-D Navier-Stokes equation) and an advection-diffusion equation for the pressure field. It presents a minimal model of both adiabatic gas dynamics and compressible magnetohydrodynamics. From the magnetic perspective, it is the simplest possible system which allows for Alfvenization, i.e. energy transfer between the fluid and the magnetic field. For the special case of equal fluid viscosity and (magnetic) diffusivity, the system is completely integrable, reducing to two decoupled Burgers' equations in the characteristic variables v ± v_sound (v ± v_Alfven). For arbitrary diffusivities, renormalized perturbation theory is used to calculate the effective transport coefficients for forced Burgerlence. It is shown that energy equi-dissipation, not equipartition, is fundamental to the turbulent state. Both energy and dissipation are localized to shock-like structures, in which wave steepening is inhibited by small-scale forcing and by pressure back-reaction. The spectral forms predicted by theory are confirmed by numerical simulations. It is shown that the velocity structures lead to an asymmetric velocity PDF, as in Burgers' turbulence. Pressure fluctuations, however, are symmetrically distributed. A Fokker-Planck calculation of these distributions is compared and contrasted with a path integral approach. The latter instanton solution suggests that the system maintains its characteristic directions in steady-state turbulence, supporting the results from perturbation theory. Implications for the spectra of turbulence and self-organization phenomena in compressible fluids and plasmas are also discussed.

  5. Image Quality Meter Using Compression

    Directory of Open Access Journals (Sweden)

    Muhammad Ibrar-Ul-Haque

    2016-01-01

Full Text Available This paper proposes a new technique for measuring compressed-image blockiness/blurriness in the frequency domain through an edge detection method based on the Fourier transform. In image processing, boundaries are characterized by edges, and thus edges are a problem of fundamental importance. The edges have to be identified and computed thoroughly in order to retrieve a complete representation of the image. Our novel edge detection scheme for blockiness and blurriness shows an improvement of 60 and 100 blocks for high-frequency components, respectively, over other detection techniques.

  6. Krylov methods for compressible flows

    Science.gov (United States)

    Tidriri, M. D.

    1995-01-01

    We investigate the application of Krylov methods to compressible flows, and the effect of implicit boundary conditions on the implicit solution of nonlinear problems. Two defect-correction procedures, namely, approximate factorization (AF) for structured grids and ILU/GMRES for general grids, are considered. Also considered here are Newton-Krylov matrix-free methods that we combined with the use of mixed discretization schemes in the implicitly defined Jacobian and its preconditioner. Numerical experiments that show the performance of our approaches are then presented.

  7. Protostellar Collapse Induced by Compression

    OpenAIRE

    Hennebelle, P.; Whitworth, A. P.; Gladwin, P. P.; Andre, Ph.

    2002-01-01

We present numerical simulations of the evolution of low-mass, isothermal, molecular cores which are subjected to an increase in external pressure $P_{\rm ext}$. If $P_{\rm ext}$ increases very slowly, the core approaches instability quite quasistatically. However, for larger (but still quite modest) $dP_{\rm ext}/dt$ a compression wave is driven into the core, thereby triggering collapse from the outside in. If collapse of a core is induced by increasing $P_{\rm ext}$, this has a number of interesting consequences. (i)...

  8. Less is More: Bigger Data from Compressive Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, Andrew; Browning, Nigel D.

    2017-07-01

Compressive sensing approaches are beginning to take hold in (scanning) transmission electron microscopy (S/TEM) [1,2,3]. Compressive sensing is a mathematical theory about acquiring signals in a compressed form (measurements) and the probability of recovering the original signal by solving an inverse problem [4]. The inverse problem is underdetermined (more unknowns than measurements), so it is not obvious that recovery is possible. Compression is achieved by taking inner products of the signal with measurement weight vectors. Both Gaussian random weights and Bernoulli (0,1) random weights form a large class of measurement vectors for which recovery is possible. The measurements can also be designed through an optimization process. The key insight for electron microscopists is that compressive sensing can be used to increase acquisition speed and reduce dose. Building on work initially developed for optical cameras, this new paradigm will allow electron microscopists to solve more problems in the engineering and life sciences. We will be collecting orders of magnitude more data than previously possible. The reason we will have more data is that we will have increased temporal/spatial/spectral sampling rates, and we will have the ability to interrogate larger classes of samples that were previously too beam sensitive to survive the experiment. For example, consider an in-situ experiment that takes 1 minute. With traditional sensing, we might collect 5 images per second for a total of 300 images. With compressive sensing, each of those 300 images can be expanded into 10 more images, making the collection rate 50 images per second, and the decompressed data a total of 3000 images [3]. But, what are the implications, in terms of data, for this new methodology? Acquisition of compressed data will require downstream reconstruction to be useful. The reconstructed data will be much larger than traditional data, we will need space to store the reconstructions during
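The measurement step described above (inner products of the signal with Gaussian or Bernoulli(0,1) weight vectors) is easy to sketch; the sizes, the sparse test signal and the 8x compression factor below are arbitrary choices, and the recovery step is only indicated.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4096, 512                         # signal length and number of measurements (m << n)

# Sparse test signal: a handful of non-zero entries
x = np.zeros(n)
x[rng.choice(n, size=20, replace=False)] = rng.standard_normal(20)

# Each row of a measurement matrix is one weight vector
phi_gauss = rng.standard_normal((m, n)) / np.sqrt(m)        # Gaussian random weights
phi_bern = rng.integers(0, 2, size=(m, n)).astype(float)    # Bernoulli (0,1) random weights

# Compressed measurements are the inner products of the signal with those weight vectors
y_gauss = phi_gauss @ x
y_bern = phi_bern @ x
print(y_gauss.shape)   # (512,): 8x fewer numbers than the original signal

# Recovery solves the underdetermined inverse problem, e.g. basis pursuit:
#   minimize ||x||_1  subject to  phi @ x = y
```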

  9. Industrial Compressed Air System Energy Efficiency Guidebook.

    Energy Technology Data Exchange (ETDEWEB)

    United States. Bonneville Power Administration.

    1993-12-01

    Energy efficient design, operation and maintenance of compressed air systems in industrial plants can provide substantial reductions in electric power and other operational costs. This guidebook will help identify cost effective, energy efficiency opportunities in compressed air system design, re-design, operation and maintenance. The guidebook provides: (1) a broad overview of industrial compressed air systems, (2) methods for estimating compressed air consumption and projected air savings, (3) a description of applicable, generic energy conservation measures, and, (4) a review of some compressed air system demonstration projects that have taken place over the last two years. The primary audience for this guidebook includes plant maintenance supervisors, plant engineers, plant managers and others interested in energy management of industrial compressed air systems.

  10. Compressed sensing for STEM tomography.

    Science.gov (United States)

    Donati, Laurène; Nilchian, Masih; Trépout, Sylvain; Messaoudi, Cédric; Marco, Sergio; Unser, Michael

    2017-08-01

    A central challenge in scanning transmission electron microscopy (STEM) is to reduce the electron radiation dosage required for accurate imaging of 3D biological nano-structures. Methods that permit tomographic reconstruction from a reduced number of STEM acquisitions without introducing significant degradation in the final volume are thus of particular importance. In random-beam STEM (RB-STEM), the projection measurements are acquired by randomly scanning a subset of pixels at every tilt view. In this work, we present a tailored RB-STEM acquisition-reconstruction framework that fully exploits the compressed sensing principles. We first demonstrate that RB-STEM acquisition fulfills the "incoherence" condition when the image is expressed in terms of wavelets. We then propose a regularized tomographic reconstruction framework to recover volumes from RB-STEM measurements. We demonstrate through simulations on synthetic and real projection measurements that the proposed framework reconstructs high-quality volumes from strongly downsampled RB-STEM data and outperforms existing techniques at doing so. This application of compressed sensing principles to STEM paves the way for a practical implementation of RB-STEM and opens new perspectives for high-quality reconstructions in STEM tomography. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  11. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest.

    Science.gov (United States)

    Monsieurs, Koenraad G; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F; Calle, Paul A

    2012-11-01

BACKGROUND AND GOAL OF STUDY: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with decreased depth. In patients undergoing prehospital cardiopulmonary resuscitation by health care professionals, chest compression rate and depth were recorded using an accelerometer (E-series monitor-defibrillator, Zoll, U.S.A.). Compression depth was compared across rate categories (below 80/min, 80-120/min, and above 120/min). A difference in compression depth ≥0.5 cm was considered clinically significant. Mixed models with repeated measurements of chest compression depth and rate (level 1) nested within patients (level 2) were used with compression rate as a continuous and as a categorical predictor of depth. Results are reported as means and standard error (SE). One hundred and thirty-three consecutive patients were analysed (213,409 compressions). Of all compressions, 2% were below 80/min, … were above 120/min, and 36% were below 5 cm deep. In 77 out of 133 (58%) patients a statistically significant lower depth was observed for rates >120/min compared to rates 80-120/min; in 40 out of 133 (30%) this difference was also clinically significant. The mixed models predicted that the deepest compression (4.5 cm) occurred at a rate of 86/min, with progressively lower compression depths at higher rates. Rates >145/min would result in a depth below … cm. Compression depth for rates 80-120/min was on average 4.5 cm (SE 0.06) compared to 4.1 cm (SE 0.06) for compressions >120/min (mean difference 0.4 cm, P …). Higher compression rates were thus associated with lower compression depths. Avoiding excessive compression rates may lead to more compressions of sufficient depth. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
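For readers unfamiliar with the analysis, here is a hedged sketch of a comparable two-level mixed model (compressions nested within patients, rate predicting depth) using statsmodels; the synthetic data, patient count and effect sizes are invented and only meant to show the model structure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for patient in range(30):                         # far fewer patients than the study above
    offset = rng.normal(0.0, 0.3)                 # per-patient random intercept
    for _ in range(200):                          # repeated compressions (level 1)
        rate = rng.uniform(60, 160)               # compressions per minute
        depth = 4.5 - 0.01 * max(rate - 86.0, 0.0) + offset + rng.normal(0.0, 0.2)
        rows.append({"patient": patient, "rate": rate, "depth": depth})
df = pd.DataFrame(rows)

# Random intercept per patient (level 2); rate enters here as a continuous predictor
model = smf.mixedlm("depth ~ rate", data=df, groups=df["patient"]).fit()
print(model.summary())
```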

  12. Dual compression is not an uncommon type of iliac vein compression syndrome.

    Science.gov (United States)

    Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu

    2017-09-01

Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We describe an underestimated type of IVCS with simultaneous dual compression by the right and left common iliac arteries (LCIA). Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used for evaluating the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the maximum compression point, pressure gradient across the stenosis, and the percentage of compression level. On CT and venography, sole compression commonly presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented in two forms: one had a lengthy stenosis along the upper side of the LCIV and the other was manifested by a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression appeared to be significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can thus present with dual compression. This type of compression has typical manifestations on late venography and CT.

  13. How Wage Compression Affects Job Turnover

    OpenAIRE

    Heyman, Fredrik

    2008-01-01

    I use Swedish establishment-level panel data to test Bertola and Rogerson’s (1997) hypothesis of a positive relation between the degree of wage compression and job reallocation. Results indicate that the effect of wage compression on job turnover is positive and significant in the manufacturing sector. The wage compression effect is stronger on job destruction than on job creation, consistent with downward wage rigidity. Further results include a strong positive relationship between the fract...

  14. Artificial Neural Network Model for Predicting Compressive

    OpenAIRE

    Salim T. Yousif; Salwa M. Abdullah

    2013-01-01

      Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at early time is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum...

  15. Compressed Air Production Using Vehicle Suspension

    OpenAIRE

    Ninad Arun Malpure; Sanket Nandlal Bhansali

    2015-01-01

Abstract Generally, compressed air is produced using different types of air compressors, which consume a lot of electric energy and are noisy. In this paper an innovative idea is put forth for the production of compressed air using the movement of the vehicle suspension, which is normally wasted. The conversion of this force energy into compressed air is carried out by a mechanism which consists of the vehicle suspension system, hydraulic cylinder, non-return valve, air compressor and air receiver. We are co...

  16. Cosmological Particle Data Compression in Practice

    Science.gov (United States)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless, depending on the technique. For both cases, this study aims to evaluate and compare the state of the art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data were identified.
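The kind of quantitative comparison described above can be prototyped with nothing but the standard library; the sketch below measures compression ratio and throughput for zlib and LZMA on synthetic float32 particle coordinates. Blosc, FPZIP and ZFP would be benchmarked the same way through their own Python bindings, which are not shown here.

```python
import lzma
import time
import zlib
import numpy as np

rng = np.random.default_rng(3)
particles = rng.standard_normal((250_000, 3)).astype(np.float32)   # synthetic x, y, z
raw = particles.tobytes()

codecs = [
    ("zlib", lambda buf: zlib.compress(buf, 6)),
    ("lzma", lambda buf: lzma.compress(buf, preset=1)),
]
for name, compress in codecs:
    t0 = time.perf_counter()
    packed = compress(raw)
    elapsed = time.perf_counter() - t0
    print(f"{name}: ratio {len(raw) / len(packed):.2f}, "
          f"throughput {len(raw) / elapsed / 1e6:.1f} MB/s")
```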

  17. Eccentric crank variable compression ratio mechanism

    Science.gov (United States)

    Lawrence, Keith Edward [Kobe, JP; Moser, William Elliott [Peoria, IL; Roozenboom, Stephan Donald [Washington, IL; Knox, Kevin Jay [Peoria, IL

    2008-05-13

    A variable compression ratio mechanism for an internal combustion engine that has an engine block and a crankshaft is disclosed. The variable compression ratio mechanism has a plurality of eccentric disks configured to support the crankshaft. Each of the plurality of eccentric disks has at least one cylindrical portion annularly surrounded by the engine block. The variable compression ratio mechanism also has at least one actuator configured to rotate the plurality of eccentric disks.

  18. Study of CSR longitudinal bunch compression cavity

    International Nuclear Information System (INIS)

    Yin Dayu; Li Peng; Liu Yong; Xie Qingchun

    2009-01-01

The scheme of the longitudinal bunch compression cavity for the Cooling Storage Ring (CSR) is an important issue. Plasma physics experiments require high-density heavy-ion beams and short pulsed bunches, which can be produced by non-adiabatic compression of the bunch, implemented by a fast compression with a 90-degree rotation in the longitudinal phase space. The phase space rotation in fast compression is initiated by a fast jump of the RF-voltage amplitude. For this purpose, the CSR longitudinal bunch compression cavity, loaded with FINEMET-FT-1M, is studied and simulated with the MAFIA code. In this paper, the CSR longitudinal bunch compression cavity is simulated, and the initial bunch length of 238U72+ at 250 MeV/u will be compressed from 200 ns to 50 ns. The construction and RF properties of the CSR longitudinal bunch compression cavity are also simulated and calculated with the MAFIA code. The operation frequency of the cavity is 1.15 MHz with a peak voltage of 80 kV, and the cavity can be used to compress heavy ions in the CSR. (authors)

  19. Compressed Sensing with Rank Deficient Dictionaries

    DEFF Research Database (Denmark)

    Hansen, Thomas Lundgaard; Johansen, Daniel Højrup; Jørgensen, Peter Bjørn

    2012-01-01

In compressed sensing it is generally assumed that the dictionary matrix constitutes a (possibly overcomplete) basis of the signal space. In this paper we consider dictionaries that do not span the signal space, i.e. rank deficient dictionaries. We show that in this case the signal-to-noise ratio (SNR) in the compressed samples can be increased by selecting the rows of the measurement matrix from the column space of the dictionary. As an example application of compressed sensing with a rank deficient dictionary, we present a case study of compressed sensing applied to the Coarse Acquisition (C...
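A small numpy experiment illustrates the effect described above. The construction below (projecting Gaussian rows onto an SVD basis of the dictionary's column space) is one simple way to obtain measurement rows lying in col(D); it is an illustrative stand-in, not necessarily the construction used in the paper, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
n, k, m = 256, 64, 32                    # signal length, dictionary atoms, measurements

D = rng.standard_normal((n, k))          # rank-deficient dictionary: spans only a k-dim subspace
U, _, _ = np.linalg.svd(D, full_matrices=False)   # orthonormal basis of col(D), shape (n, k)

x = D @ rng.standard_normal(k)           # signal living in the dictionary's column space
noise = 0.1 * rng.standard_normal(n)

phi_generic = rng.standard_normal((m, n))          # generic Gaussian measurement rows
phi_aligned = rng.standard_normal((m, k)) @ U.T    # rows drawn from the column space of D

for name, phi in [("generic rows", phi_generic), ("rows in col(D)", phi_aligned)]:
    snr = 10 * np.log10(np.sum((phi @ x) ** 2) / np.sum((phi @ noise) ** 2))
    print(f"{name}: measurement SNR {snr:.1f} dB")
```

With these sizes the aligned rows reject the out-of-subspace part of the noise, so the measurement SNR improves by roughly n/k (about 6 dB here).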

  20. Energy Conservation In Compressed Air Systems

    International Nuclear Information System (INIS)

    Yusuf, I.Y.; Dewu, B.B.M.

    2004-01-01

Compressed air is an essential utility that accounts for a substantial part of the electricity consumption (bill) in most industrial plants. Although the general saying that air is free of charge is not true for compressed air, the utility's cost is not accorded the rightful importance by most industries. The paper will show that the cost of 1 unit of energy in the form of compressed air is at least 5 times the cost of the electricity (energy input) required to produce it. The paper will also provide energy conservation tips for compressed air systems.
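A back-of-the-envelope check of the "at least 5 times" claim; the efficiency and loss figures below are common-order assumptions, not numbers from the paper.

```python
# Why delivered compressed-air energy costs several times the electricity that produced it.
electricity_price = 0.10        # currency units per kWh of electricity (assumed)
wire_to_air_efficiency = 0.15   # fraction of electrical input recoverable as useful air power (assumed)
delivery_efficiency = 0.75      # fraction of generated air energy left after leaks/pressure drops (assumed)

cost_per_kwh_air = electricity_price / (wire_to_air_efficiency * delivery_efficiency)
print(f"compressed air costs ~{cost_per_kwh_air / electricity_price:.1f}x the input electricity per kWh")
# With these assumptions the multiplier is about 8.9x, consistent with "at least 5 times".
```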

  1. Adaptive Multi-rate Compression Effects on Vowel Analysis

    Directory of Open Access Journals (Sweden)

    David eIreland

    2015-08-01

Full Text Available Signal processing on digitally sampled vowel sounds for the detection of pathological voices has been firmly established. This work examines compression artefacts on vowel speech samples that have been compressed using the adaptive multi-rate codec at various bit-rates. Whereas previous work has used the sensitivity of a machine learning algorithm to test for accuracy, this work examines the changes in the extracted speech features themselves and thus reports new findings on the usefulness of a particular feature. We believe this work will have potential impact for future research on remote monitoring, as the identification and exclusion of an ill-defined speech feature that has hitherto been used will ultimately increase the robustness of the system.

  2. Adaptive Multi-Rate Compression Effects on Vowel Analysis.

    Science.gov (United States)

    Ireland, David; Knuepffer, Christina; McBride, Simon J

    2015-01-01

Signal processing on digitally sampled vowel sounds for the detection of pathological voices has been firmly established. This work examines compression artifacts on vowel speech samples that have been compressed using the adaptive multi-rate codec at various bit-rates. Whereas previous work has used the sensitivity of a machine learning algorithm to test for accuracy, this work examines the changes in the extracted speech features themselves and thus reports new findings on the usefulness of a particular feature. We believe this work will have potential impact for future research on remote monitoring, as the identification and exclusion of an ill-defined speech feature that has hitherto been used will ultimately increase the robustness of the system.

  3. Soil Compressibility Models for a Wide Stress Range

    KAUST Repository

    Chong, Song-Hun

    2016-03-03

Soil compressibility models with physically correct asymptotic void ratios are required to analyze situations that involve a wide stress range. Previously suggested models and other functions are adapted to satisfy asymptotic void ratios at low and high stress levels; all updated models involve four parameters. Compiled consolidation data for remolded and natural clays are used to test the models and to develop correlations between model parameters and index properties. Models can adequately fit soil compression data for a wide range of stresses and soil types; in particular, models that involve a power of the effective stress, (σ')^β, display higher flexibility to capture the brittle response of some natural soils. The use of a single continuous function avoids numerical discontinuities or the need for ad hoc procedures to determine the yield stress. The tangent stiffness, readily computed for all models, should not be mistaken for the small-strain constant-fabric stiffness. © 2016 American Society of Civil Engineers.
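To make the idea concrete, the sketch below fits one illustrative four-parameter compressibility curve with asymptotic void ratios at zero and infinite stress and a (σ/σ_mid)^β transition term. The functional form, parameter names and oedometer data are assumptions for demonstration only; they are not taken from the paper or its compiled datasets.

```python
import numpy as np
from scipy.optimize import curve_fit

def void_ratio(stress, e0, e_inf, stress_mid, beta):
    """Illustrative model: e -> e0 as stress -> 0 and e -> e_inf as stress -> infinity."""
    return e_inf + (e0 - e_inf) / (1.0 + (stress / stress_mid) ** beta)

# Hypothetical oedometer data: vertical effective stress in kPa vs. void ratio
stress = np.array([1, 10, 50, 100, 500, 1000, 5000, 20000, 100000], dtype=float)
e_obs = np.array([2.10, 2.05, 1.85, 1.70, 1.30, 1.15, 0.85, 0.60, 0.45])

params, _ = curve_fit(void_ratio, stress, e_obs, p0=[2.2, 0.3, 500.0, 0.8])
print(dict(zip(["e0", "e_inf", "stress_mid", "beta"], np.round(params, 3))))
```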

  4. Effect of Compression Ratio on Perception of Time Compressed Phonemically Balanced Words in Kannada and Monosyllables.

    Science.gov (United States)

    Prabhu, Prashanth; Sujan, Mirale Jagadish; Rakshith, Satish

    2015-01-21

The present study attempted to study the perception of time-compressed speech and the effect of compression ratio for phonemically balanced (PB) word lists in Kannada and monosyllables. The test was administered to 30 normal-hearing individuals at compression ratios of 40%, 50%, 60%, 70% and 80% for PB words in Kannada and monosyllables. The results of the study showed that the speech identification scores for time-compressed speech reduced with increase in compression ratio. The scores were better for monosyllables compared to PB words, especially at higher compression ratios. The study provides speech identification scores at different compression ratios for PB words and monosyllables in individuals with normal hearing. The results of the study also showed that the scores did not vary across gender for all the compression ratios for both stimuli. The same test material now needs to be evaluated in a clinical population with central auditory processing disorder for clinical validation of the present results.

  5. Effect of compression ratio on perception of time compressed phonemically balanced words in Kannada and monosyllables

    Directory of Open Access Journals (Sweden)

    Prashanth Prabhu

    2015-03-01

Full Text Available The present study attempted to study the perception of time-compressed speech and the effect of compression ratio for phonemically balanced (PB) word lists in Kannada and monosyllables. The test was administered to 30 normal-hearing individuals at compression ratios of 40%, 50%, 60%, 70% and 80% for PB words in Kannada and monosyllables. The results of the study showed that the speech identification scores for time-compressed speech reduced with increase in compression ratio. The scores were better for monosyllables compared to PB words, especially at higher compression ratios. The study provides speech identification scores at different compression ratios for PB words and monosyllables in individuals with normal hearing. The results of the study also showed that the scores did not vary across gender for all the compression ratios for both stimuli. The same test material now needs to be evaluated in a clinical population with central auditory processing disorder for clinical validation of the present results.

  6. Numerical study of the effects of carbon felt electrode compression in all-vanadium redox flow batteries

    International Nuclear Information System (INIS)

    Oh, Kyeongmin; Won, Seongyeon; Ju, Hyunchul

    2015-01-01

    Highlights: • The effects of electrode compression on VRFB are examined. • The electronic conductivity is improved when the compression is increased. • The kinetic losses are similar regardless of the electrode compression level. • The vanadium distribution is more uniform within highly compressed electrode. - Abstract: The porous carbon felt electrode is one of the major components of all-vanadium redox flow batteries (VRFBs). These electrodes are necessarily compressed during stack assembly to prevent liquid electrolyte leakage and diminish the interfacial contact resistance among VRFB stack components. The porous structure and properties of carbon felt electrodes have a considerable influence on the electrochemical reactions, transport features, and cell performance. Thus, a numerical study was performed herein to investigate the effects of electrode compression on the charge and discharge behavior of VRFBs. A three-dimensional, transient VRFB model developed in a previous study was employed to simulate VRFBs under two degrees of electrode compression (10% vs. 20%). The effects of electrode compression were precisely evaluated by analysis of the solid/electrolyte potential profiles, transfer current density, and vanadium concentration distributions, as well as the overall charge and discharge performance. The model predictions highlight the beneficial impact of electrode compression; the electronic conductivity of the carbon felt electrode is the main parameter improved by electrode compression, leading to reduction in ohmic loss through the electrodes. In contrast, the kinetics of the redox reactions and transport of vanadium species are not significantly altered by the degree of electrode compression (10% to 20%). This study enhances the understanding of electrode compression effects and demonstrates that the present VRFB model is a valuable tool for determining the optimal design and compression of carbon felt electrodes in VRFBs.

  7. Compressive creep of silicon nitride

    International Nuclear Information System (INIS)

    Silva, C.R.M. da; Melo, F.C.L. de; Cairo, C.A.; Piorino Neto, F.

    1990-01-01

Silicon nitride samples were formed by a pressureless sintering process, using neodymium oxide and a mixture of neodymium oxide and yttrium oxide as sintering aids. The short-term compressive creep behaviour was evaluated over a stress range of 50-300 MPa and a temperature range of 1200-1350 °C. Post-sintering heat treatments in nitrogen with a stepwise decremental variation of temperature were performed on some samples, and microstructural analysis by X-ray diffraction and transmission electron microscopy showed that the secondary crystalline phases which form from the remnant glass are dependent upon the composition and percentage of additives. Stress exponent values near unity were obtained for materials with low glass content, suggesting grain boundary diffusion accommodation processes. Cavitation thereby becomes prevalent with increasing stress and temperature and with a decreasing degree of crystallization of the grain boundary phase. (author) [pt
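The stress exponent referred to above is conventionally extracted from the standard steady-state (Norton-Arrhenius) creep law, reproduced below for reference; this is the textbook convention, not an equation quoted from the study itself.

```latex
\dot{\varepsilon}_{\mathrm{ss}} = A\,\sigma^{n}\exp\!\left(-\frac{Q}{RT}\right)
```

Here the steady-state creep rate depends on a material constant A, the applied stress σ raised to the stress exponent n, the activation energy Q, the gas constant R and the absolute temperature T; n close to 1 is usually read as diffusion-controlled (e.g. grain-boundary diffusion) creep, while larger n accompanies cavitation and other stress-driven mechanisms.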

  8. Image Compression by Gabor Expansion

    Directory of Open Access Journals (Sweden)

    J. Chmurny

    2001-06-01

Full Text Available Transform-based coding methods are popular in data compression. In the paper, an easily implemented method is proposed for computing the weighting factors of the Gabor decomposition. The method is based on the least-mean-squares error (LMSE) approach. The solution of the LMSE problem shows that the weighting factors can be extracted by a simple multiplication between a matrix and the vector of data. If the set of Gabor functions is chosen to be independent of the test images, this matrix is constant. Images are reconstructed by multiplying the matrix of Gabor functions and the vector of weighting factors. The choice of Gabor functions in the decomposition allows the resulting decomposition to have a pyramidal structure. In the paper, a simple codec system based on the pyramidal Gabor expansion is proposed for image compression.
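A minimal numpy sketch of the least-squares mechanics summarised above: a constant matrix (here the pseudo-inverse of the Gabor matrix, which realises the LMSE solution) maps any data vector to its weighting factors in a single multiplication, and the block is reconstructed by multiplying back. The Gabor parameters and the test vector are arbitrary placeholders, and no pyramidal structure is attempted.

```python
import numpy as np

def gabor_atom(n, freq, sigma, center):
    """One sampled 1-D Gabor function (illustrative parameterisation)."""
    t = np.arange(n)
    return np.exp(-((t - center) ** 2) / (2.0 * sigma ** 2)) * np.cos(2.0 * np.pi * freq * t / n)

n = 64
atoms = [gabor_atom(n, f, 8.0, c) for f in (1, 2, 4, 8) for c in (8, 24, 40, 56)]
G = np.stack(atoms, axis=1)            # n x k matrix of Gabor functions

P = np.linalg.pinv(G)                  # constant k x n matrix: the least-squares solution operator

x = np.random.default_rng(4).standard_normal(n)   # stand-in for one image row/block
a = P @ x                              # weighting factors via a single matrix-vector product
x_hat = G @ a                          # reconstruction from the Gabor expansion
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```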

  9. Nuclear transmutation by flux compression

    International Nuclear Information System (INIS)

    Seifritz, W.

    2001-01-01

A new idea for the transmutation of minor actinides and long- (and even short-) lived fission products is presented. It is based on the property of neutron flux compression in nuclear (fast and/or thermal) reactors possessing spatially non-stationary critical masses. An advantage factor for the burn-up fluence of the elements to be transmuted on the order of 100 or more is obtainable compared with the classical way of transmutation. Three typical examples of such transmuters (a subcritical ring reactor with a rotating reflector; a subcritical ring reactor with a rotating spallation source, the so-called 'pulsed energy amplifier'; and a fast burn-wave reactor) are presented and analysed with regard to this purpose. (orig.) [de

  10. Compressed Sensing Based Interior Tomography

    Science.gov (United States)

    Yu, Hengyong; Wang, Ge

    2010-01-01

While the conventional wisdom is that the interior problem does not have a unique solution, by analytic continuation we recently showed that the interior problem can be uniquely and stably solved if we have a known sub-region inside a region-of-interest (ROI). However, such a known sub-region is not always readily available, and in some cases it is even impossible to find. Based on the compressed sensing theory, here we prove that if an object under reconstruction is essentially piecewise constant, a local ROI can be exactly and stably reconstructed via total variation minimization. Because many objects in CT applications can be approximately modeled as piecewise constant, our approach is practically useful and suggests a new research direction of interior tomography. To illustrate the merits of our finding, we develop an iterative interior reconstruction algorithm that minimizes the total variation of a reconstructed image, and evaluate the performance in numerical simulation. PMID:19369711

  11. Survey of data compression techniques

    Energy Technology Data Exchange (ETDEWEB)

    Gryder, R.; Hake, K.

    1991-09-01

PM-AIM must provide to customers in a timely fashion information about Army acquisitions. This paper discusses ways that PM-AIM can reduce the volume of data that must be transmitted between sites. Although this paper primarily discusses techniques of data compression, it also briefly discusses other options for meeting the PM-AIM requirements. The options available to PM-AIM, in addition to hardware and software data compression, include less-frequent updates, distribution of partial updates, distributed data base design, and intelligent network design. Any option that enhances the performance of the PM-AIM network is worthy of consideration. The recommendations of this paper apply to the PM-AIM project in three phases: the current phase, the target phase, and the objective phase. Each recommendation will be identified as (1) appropriate for the current phase, (2) considered for implementation during the target phase, or (3) a feature that should be part of the objective phase of PM-AIM's design. The current phase includes only those measures that can be taken with the installed leased lines. The target phase includes those measures that can be taken in transferring the traffic from the leased lines to the DSNET environment with minimal changes in the current design. The objective phase includes all the things that should be done as a matter of course. The objective phase for PM-AIM appears to be a distributed data base with data for each site stored locally and all sites having access to all data.

  12. Survey of data compression techniques

    Energy Technology Data Exchange (ETDEWEB)

    Gryder, R.; Hake, K.

    1991-09-01

    PM-AIM must provide to customers in a timely fashion information about Army acquisitions. This paper discusses ways that PM-AIM can reduce the volume of data that must be transmitted between sites. Although this paper primarily discusses techniques of data compression, it also briefly discusses other options for meeting the PM-AIM requirements. The options available to PM-AIM, in addition to hardware and software data compression, include less-frequent updates, distribution of partial updates, distributed data base design, and intelligent network design. Any option that enhances the performance of the PM-AIM network is worthy of consideration. The recommendations of this paper apply to the PM-AIM project in three phases: the current phase, the target phase, and the objective phase. Each recommendation will be identified as (1) appropriate for the current phase, (2) considered for implementation during the target phase, or (3) a feature that should be part of the objective phase of PM-AIM's design. The current phase includes only those measures that can be taken with the installed leased lines. The target phase includes those measures that can be taken in transferring the traffic from the leased lines to the DSNET environment with minimal changes in the current design. The objective phase includes all the things that should be done as a matter of course. The objective phase for PM-AIM appears to be a distributed data base with data for each site stored locally and all sites having access to all data.

  13. Superconductivity in compressed lithium at 20 K.

    Science.gov (United States)

    Shimizu, Katsuya; Ishikawa, Hiroto; Takao, Daigoroh; Yagi, Takehiko; Amaya, Kiichi

    2002-10-10

    Superconductivity at high temperatures is expected in elements with low atomic numbers, based in part on conventional BCS (Bardeen-Cooper-Schrieffer) theory. For example, it has been predicted that when hydrogen is compressed to its dense metallic phase (at pressures exceeding 400 GPa), it will become superconducting with a transition temperature above room temperature. Such pressures are difficult to produce in a laboratory setting, so the predictions are not easily confirmed. Under normal conditions lithium is the lightest metal of all the elements, and may become superconducting at lower pressures; a tentative observation of a superconducting transition in Li has been previously reported. Here we show that Li becomes superconducting at pressures greater than 30 GPa, with a pressure-dependent transition temperature (T(c)) of 20 K at 48 GPa. This is the highest observed T(c) of any element; it confirms the expectation that elements with low atomic numbers will have high transition temperatures, and suggests that metallic hydrogen will have a very high T(c). Our results confirm that the earlier tentative claim of superconductivity in Li was correct.

  14. The dynamics of surge in compression systems

    Indian Academy of Sciences (India)

    In air-compression systems, instabilities occur during operation close to their peak pressure-rise capability. However, the peak efficiency of a compression system lies close to this region of instability. A surge is a violent mode of instability where there is total breakdown of flow in the system and pressure-rise capability is lost ...

  15. Code Compression Schemes for Embedded Processors

    Science.gov (United States)

    Horti, Deepa; Jamge, S. B.

    2010-11-01

Code density is a major requirement in embedded system design since it not only reduces the need for the scarce memory resource but also implicitly improves further important design parameters like power consumption and performance. Within this paper we introduce a novel and efficient approach that belongs to the statistical as well as the dictionary-based compression schemes.

  16. TEXT COMPRESSION ALGORITHMS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    S. Senthil

    2011-12-01

    Full Text Available Data Compression may be defined as the science and art of the representation of information in a crisply condensed form. For decades, Data compression has been one of the critical enabling technologies for the ongoing digital multimedia revolution. There are a lot of data compression algorithms which are available to compress files of different formats. This paper provides a survey of different basic lossless data compression algorithms. Experimental results and comparisons of the lossless compression algorithms using Statistical compression techniques and Dictionary based compression techniques were performed on text data. Among the Statistical coding techniques, the algorithms such as Shannon-Fano Coding, Huffman coding, Adaptive Huffman coding, Run Length Encoding and Arithmetic coding are considered. Lempel Ziv scheme which is a dictionary based technique is divided into two families: one derived from LZ77 (LZ77, LZSS, LZH, LZB and LZR and the other derived from LZ78 (LZ78, LZW, LZFG, LZC and LZT. A set of interesting conclusions are derived on this basis.
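
    As a concrete reference for the statistical family surveyed here, the following minimal Python sketch (an editorial illustration, not the implementation benchmarked in the study) builds a Huffman code table for a short string and reports the coded length; the function name huffman_codes and the example text are hypothetical.

        import heapq
        from collections import Counter

        def huffman_codes(text):
            # Build a Huffman code table by repeatedly merging the two least frequent
            # subtrees; each merge prefixes one side with '0' and the other with '1'.
            freq = Counter(text)
            heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
            heapq.heapify(heap)
            if not heap:
                return {}
            if len(heap) == 1:                      # degenerate case: one distinct symbol
                return {sym: "0" for sym in heap[0][2]}
            counter = len(heap)                     # tie-breaker so dicts are never compared
            while len(heap) > 1:
                f1, _, t1 = heapq.heappop(heap)
                f2, _, t2 = heapq.heappop(heap)
                merged = {s: "0" + c for s, c in t1.items()}
                merged.update({s: "1" + c for s, c in t2.items()})
                heapq.heappush(heap, (f1 + f2, counter, merged))
                counter += 1
            return heap[0][2]

        text = "abracadabra"
        codes = huffman_codes(text)
        encoded = "".join(codes[ch] for ch in text)
        print(codes, len(encoded), "bits instead of", 8 * len(text))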

  17. Compression of Short Text on Embedded Systems

    DEFF Research Database (Denmark)

    Rein, S.; Gühmann, C.; Fitzek, Frank

    2006-01-01

    The paper details a scheme for lossless compression of a short data series larger than 50 bytes. The method uses arithmetic coding and context modelling with a low-complexity data model. A data model that takes 32 kBytes of RAM already cuts the data size in half. The compression scheme just takes...

  18. Compression and fast retrieval of SNP data.

    Science.gov (United States)

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-11-01

    The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
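
    The first idea, coding each SNP in a linkage disequilibrium block by its differences from a reference SNP, can be illustrated with the short NumPy sketch below; the function names and the 0/1/2 genotype encoding are assumptions made for illustration and do not reproduce the actual SNPack file format.

        import numpy as np

        def delta_encode_block(block, ref):
            # Store, for each SNP in the block, only the sample positions (and values)
            # where it differs from the block's reference SNP.
            deltas = []
            for snp in block:
                pos = np.flatnonzero(snp != ref)
                deltas.append((pos.astype(np.uint32), snp[pos].astype(np.uint8)))
            return deltas

        def delta_decode_block(deltas, ref):
            out = []
            for pos, vals in deltas:
                snp = ref.copy()
                snp[pos] = vals
                out.append(snp)
            return out

        ref = np.array([0, 1, 2, 0, 1, 0, 0, 2], dtype=np.uint8)   # genotypes coded 0/1/2
        block = np.tile(ref, (4, 1))
        block[2, 5] = 1                                            # one discordant genotype
        enc = delta_encode_block(block, ref)
        assert all((a == b).all() for a, b in zip(delta_decode_block(enc, ref), block))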

  19. Application of Compressive Sensing to Digital Holography

    Science.gov (United States)

    2015-05-01

    Final technical report AFRL-RY-WP-TR-2015-0071, Application of Compressive Sensing to Digital Holography, Mark Neifeld, University of Arizona; prepared for the LADAR Technology Branch, Multispectral Sensing and Detection Division; reporting period 3 September 2013 to 27 February 2015.

  20. Video Coding Technique using MPEG Compression Standards ...

    African Journals Online (AJOL)

    Some application areas of video compression focused on the problem of optimizing storage space and transmission bandwidth (BW). The two dimensional discrete cosine transform (2-D DCT) is an integral part of video and image compression, which is used in Moving Picture Expert Group (MPEG) encoding standards.
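
    For orientation, the separable 2-D DCT-II at the core of such encoders can be written as two multiplications with an orthonormal DCT basis matrix; the naive 8x8 sketch below is only a teaching illustration, not an optimized MPEG transform.

        import numpy as np

        def dct_matrix(N):
            # Orthonormal DCT-II basis: entry (k, n) holds the k-th basis function at sample n.
            n = np.arange(N)
            C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N)) * np.sqrt(2.0 / N)
            C[0, :] *= np.sqrt(0.5)
            return C

        def dct2_block(block):
            C = dct_matrix(block.shape[0])
            return C @ block @ C.T              # separable 2-D DCT-II

        def idct2_block(coeffs):
            C = dct_matrix(coeffs.shape[0])
            return C.T @ coeffs @ C             # inverse follows from orthonormality

        block = np.arange(64, dtype=float).reshape(8, 8)
        coeffs = dct2_block(block)              # energy compacts into the low-frequency corner
        assert np.allclose(idct2_block(coeffs), block)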

  1. Percutaneous vertebroplasty for vertebral compression fracture in ...

    African Journals Online (AJOL)

    Background: Osteoporotic vertebral fractures are common in the geriatric age group. Treatment options are influenced by the severity of symptoms, the presence or otherwise of spinal cord compression, level of spinal compression, degree of vertebral height collapse and the integrity of the posterior spinal elements. Aim: We ...

  2. Rupture of esophagus by compressed air.

    Science.gov (United States)

    Wu, Jie; Tan, Yuyong; Huo, Jirong

    2016-11-01

    Currently, beverages containing compressed air such as cola and champagne are widely used in our daily life. Improper ways to unscrew the bottle, usually by teeth, could lead to an injury, even a rupture of the esophagus. This letter to editor describes a case of esophageal rupture caused by compressed air.

  3. Recoil Experiments Using a Compressed Air Cannon

    Science.gov (United States)

    Taylor, Brett

    2006-01-01

    Ping-Pong vacuum cannons, potato guns, and compressed air cannons are popular and dramatic demonstrations for lecture and lab. Students enjoy them for the spectacle, but they can also be used effectively to teach physics. Recently we have used a student-built compressed air cannon as a laboratory activity to investigate impulse, conservation of…

  4. Insertion profiles of 4 headless compression screws.

    Science.gov (United States)

    Hart, Adam; Harvey, Edward J; Lefebvre, Louis-Philippe; Barthelat, Francois; Rabiei, Reza; Martineau, Paul A

    2013-09-01

    In practice, the surgeon must rely on screw position (insertion depth) and tactile feedback from the screwdriver (insertion torque) to gauge compression. In this study, we identified the relationship between interfragmentary compression and these 2 factors. The Acutrak Standard, Acutrak Mini, Synthes 3.0, and Herbert-Whipple implants were tested using a polyurethane foam scaphoid model. A specialized testing jig simultaneously measured compression force, insertion torque, and insertion depth at half-screw-turn intervals until failure occurred. The peak compression occurs at an insertion depth of -3.1 mm, -2.8 mm, 0.9 mm, and 1.5 mm for the Acutrak Mini, Acutrak Standard, Herbert-Whipple, and Synthes screws respectively (insertion depth is positive when the screw is proud above the bone and negative when buried). The compression and insertion torque at a depth of -2 mm were found to be 113 ± 18 N and 0.348 ± 0.052 Nm for the Acutrak Standard, 104 ± 15 N and 0.175 ± 0.008 Nm for the Acutrak Mini, 78 ± 9 N and 0.245 ± 0.006 Nm for the Herbert-Whipple, and 67 ± 2N, 0.233 ± 0.010 Nm for the Synthes headless compression screws. All 4 screws generated a sizable amount of compression (> 60 N) over a wide range of insertion depths. The compression at the commonly recommended insertion depth of -2 mm was not significantly different between screws; thus, implant selection should not be based on compression profile alone. Conically shaped screws (Acutrak) generated their peak compression when they were fully buried in the foam whereas the shanked screws (Synthes and Herbert-Whipple) reached peak compression before they were fully inserted. Because insertion torque correlated poorly with compression, surgeons should avoid using tactile judgment of torque as a proxy for compression. Knowledge of the insertion profile may improve our understanding of the implants, provide a better basis for comparing screws, and enable the surgeon to optimize compression. Copyright

  5. First metatarsophalangeal joint arthrodesis: a new technique of internal fixation by using memory compression staples.

    Science.gov (United States)

    Choudhary, Rakesh K; Theruvil, Bipin; Taylor, Graeme R

    2004-01-01

    A prospective clinical study of first metatarsophalangeal joint arthrodesis using memory compression staples is presented. In 27 patients, 30 feet underwent surgery. There were 24 women and 3 men, with a mean age of 61.2 years. Two memory compression staples were used at right angles to each other to achieve compression at the fusion site. Postoperatively, patients were allowed full weightbearing in a rigid-soled shoe. Subjective assessment was performed with a standard questionnaire, which included questions regarding level of pain, ambulation, and patient satisfaction. Objective assessment was performed by clinical and radiographic examination. There was a postoperative reduction in the pain score from 4.6 to 1.6. The results support the use of memory compression staples for the internal fixation of first metatarsophalangeal joint arthrodesis. The implant is low profile, and postoperative cast immobilization is not required. The use of this device has a predictable success rate comparable to previously reported methods.

  6. Mathematical transforms and image compression: A review

    Directory of Open Access Journals (Sweden)

    Satish K. Singh

    2010-07-01

    Full Text Available It is well known that images, often used in a variety of computer and other scientific and engineering applications, are difficult to store and transmit due to their sizes. One possible solution to overcome this problem is to use an efficient digital image compression technique where an image is viewed as a matrix and then the operations are performed on the matrix. All the contemporary digital image compression systems use various mathematical transforms for compression. The compression performance is closely related to the performance by these mathematical transforms in terms of energy compaction and spatial frequency isolation by exploiting inter-pixel redundancies present in the image data. Through this paper, a comprehensive literature survey has been carried out and the pros and cons of various transform-based image compression models have also been discussed.

  7. Cascaded Quadratic Soliton Compression in Waveguide Structures

    DEFF Research Database (Denmark)

    Guo, Hairun

    to further push such multi-cycle pulses into few-cycle and even single-cycle. In this thesis, we investigate the high order soliton compression in quadratic nonlinear waveguide structures, which is a one-step pulse compression scheme making use of the soliton regime -- with the spontaneous cancelation...... and self-defocusing Kerr effect so that the soliton is created and the soliton self-compression happens in the normal dispersion region. Meanwhile, the chromatic dispersion in the waveguide is also tunable, understood as the dispersion engineering with structural designs. Therefore, compared to commonly...... used two-step compression scheme with e.g. hollow-core photonic crystal fibers plus a dispersion compensation component, our scheme, called the cascaded quadratic soliton compression (CQSC), provides a simpler setup with larger tunability on the nonlinearity, and could avoid the problem with the self...

  8. Adiabatic Liquid Piston Compressed Air Energy Storage

    DEFF Research Database (Denmark)

    Petersen, Tage; Elmegaard, Brian; Pedersen, Allan Schrøder

    This project investigates the potential of a Compressed Air Energy Storage system (CAES system). CAES systems are used to store mechanical energy in the form of compressed air. The systems use electricity to drive the compressor at times of low electricity demand with the purpose of converting...... the mechanical energy into electricity at times of high electricity demand. Two such systems are currently in operation; one in Germany (Huntorf) and one in the USA (Macintosh, Alabama). In both cases, an underground cavern is used as a pressure vessel for the storage of the compressed air. Both systems...... are in the range of 100 MW electrical power output with several hours of production stored as compressed air. In this range, enormous volumes are required, which make underground caverns the only economical way to design the pressure vessel. Both systems use axial turbine compressors to compress air when charging...

  9. Compression therapy after ankle fracture surgery

    DEFF Research Database (Denmark)

    Winge, R; Bayer, L; Gottlieb, H

    2017-01-01

    PURPOSE: The main purpose of this systematic review was to investigate the effect of compression treatment on the perioperative course of ankle fractures and describe its effect on edema, pain, ankle joint mobility, wound healing complication, length of stay (LOS) and time to surgery (TTS). The aim...... was to suggest a recommendation to clinicians considering implementing compression therapy in the standard care of the ankle fracture patient, based on the existing literature. METHODS: We conducted a systematic search of literature including studies concerning adult patients with unstable ankle fractures...... undergoing surgery, testing either intermittent pneumatic compression, compression bandage and/or compression stocking and reporting its effect on edema, pain, ankle joint mobility, wound healing complication, LOS and TTS. To conclude on data a narrative synthesis was performed. RESULTS: The review included...

  10. Sudden viscous dissipation in compressing plasma turbulence

    Science.gov (United States)

    Davidovits, Seth; Fisch, Nathaniel

    2015-11-01

    Compression of a turbulent plasma or fluid can cause amplification of the turbulent kinetic energy, if the compression is fast compared to the turnover and viscous dissipation times of the turbulent eddies. The consideration of compressing turbulent flows in inviscid fluids has been motivated by the suggestion that amplification of turbulent kinetic energy occurred on experiments at the Weizmann Institute of Science Z-Pinch. We demonstrate a sudden viscous dissipation mechanism whereby this amplified turbulent kinetic energy is rapidly converted into thermal energy, which further increases the temperature, feeding back to further enhance the dissipation. Application of this mechanism in compression experiments may be advantageous, if the plasma can be kept comparatively cold during much of the compression, reducing radiation and conduction losses, until the plasma suddenly becomes hot. This work was supported by DOE through contract 67350-9960 (Prime # DOE DE-NA0001836) and by the DTRA.

  11. Interactive computer graphics applications for compressible aerodynamics

    Science.gov (United States)

    Benson, Thomas J.

    1994-01-01

    Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.
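
    As an example of the relations such a calculator evaluates, the normal-shock jump conditions for a calorically perfect gas fit in a few lines; these are standard textbook formulas shown for illustration and are not taken from the NASA applications themselves.

        import math

        def normal_shock(M1, gamma=1.4):
            # Downstream Mach number and the pressure, density and temperature ratios
            # across a normal shock with upstream Mach number M1.
            if M1 <= 1.0:
                raise ValueError("a normal shock requires supersonic upstream flow")
            M2 = math.sqrt((1 + 0.5 * (gamma - 1) * M1**2) /
                           (gamma * M1**2 - 0.5 * (gamma - 1)))
            p_ratio = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)
            rho_ratio = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)
            T_ratio = p_ratio / rho_ratio
            return M2, p_ratio, rho_ratio, T_ratio

        print(normal_shock(2.0))   # M2 ≈ 0.577, p2/p1 = 4.5, rho2/rho1 ≈ 2.667, T2/T1 ≈ 1.687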

  12. The impact of chest compression rates on quality of chest compressions - a manikin study.

    Science.gov (United States)

    Field, Richard A; Soar, Jasmeet; Davies, Robin P; Akhtar, Naheed; Perkins, Gavin D

    2012-03-01

    Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Twenty healthcare professionals performed 2 min of continuous compressions on an instrumented manikin at rates of 80, 100, 120, 140 and 160 min(-1) in a random order. An electronic metronome was used to guide compression rate. Compression data were analysed by repeated measures ANOVA and are presented as mean (SD). Non-parametric data was analysed by Friedman test. At faster compression rates there were significant improvements in the number of compressions delivered (160(2) at 80 min(-1) vs. 312(13) compressions at 160 min(-1), P<0.001); and compression duty-cycle (43(6)% at 80 min(-1) vs. 50(7)% at 160 min(-1), P<0.001). This was at the cost of a significant reduction in compression depth (39.5(10)mm at 80 min(-1) vs. 34.5(11)mm at 160 min(-1), P<0.001); and earlier decay in compression quality (median decay point 120 s at 80 min(-1) vs. 40s at 160 min(-1), P<0.001). Additionally not all participants achieved the target rate (100% at 80 min(-1) vs. 70% at 160 min(-1)). Rates above 120 min(-1) had the greatest impact on reducing chest compression quality. For Guidelines 2005 trained rescuers, a chest compression rate of 100-120 min(-1) for 2 min is feasible whilst maintaining adequate chest compression quality in terms of depth, duty-cycle, leaning, and decay in compression performance. Further studies are needed to assess the impact of the Guidelines 2010 recommendation for deeper and faster chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  13. Isostatic compression of buffer blocks. Middle scale

    International Nuclear Information System (INIS)

    Ritola, J.; Pyy, E.

    2012-01-01

    Manufacturing of buffer components using isostatic compression method has been studied in small scale in 2008 (Laaksonen 2010). These tests included manufacturing of buffer blocks using different bentonite materials and different compression pressures. Isostatic mould technology was also tested, along with different methods to fill the mould, such as vibration and partial vacuum, as well as a stepwise compression of the blocks. The development of manufacturing techniques has continued with small-scale (30 %) blocks (diameter 600 mm) in 2009. This was done in a separate project: Isostatic compression, manufacturing and testing of small scale (D = 600 mm) buffer blocks. The research on the isostatic compression method continued in 2010 in a project aimed to test and examine the isostatic manufacturing process of buffer blocks at 70 % scale (block diameter 1200 to 1300 mm), and the aim was to continue in 2011 with full-scale blocks (diameter 1700 mm). A total of nine bentonite blocks were manufactured at 70 % scale, of which four were ring-shaped and the rest were cylindrical. It is currently not possible to manufacture full-scale blocks, because there is no sufficiently large isostatic press available. However, such a compression unit is expected to be possible to use in the near future. The test results of bentonite blocks, produced with an isostatic pressing method at different presses and at different sizes, suggest that the technical characteristics, for example bulk density and strength values, are somewhat independent of the size of the block, and that the blocks have fairly homogenous characteristics. Water content and compression pressure are the two most important properties determining the characteristics of the compressed blocks. By adjusting these two properties it is fairly easy to produce blocks at a desired density. The commonly used compression pressure in the manufacturing of bentonite blocks is 100 MPa, which compresses bentonite to approximately

  14. Isostatic compression of buffer blocks. Middle scale

    Energy Technology Data Exchange (ETDEWEB)

    Ritola, J.; Pyy, E. [VTT Technical Research Centre of Finland, Espoo (Finland)

    2012-01-15

    Manufacturing of buffer components using isostatic compression method has been studied in small scale in 2008 (Laaksonen 2010). These tests included manufacturing of buffer blocks using different bentonite materials and different compression pressures. Isostatic mould technology was also tested, along with different methods to fill the mould, such as vibration and partial vacuum, as well as a stepwise compression of the blocks. The development of manufacturing techniques has continued with small-scale (30 %) blocks (diameter 600 mm) in 2009. This was done in a separate project: Isostatic compression, manufacturing and testing of small scale (D = 600 mm) buffer blocks. The research on the isostatic compression method continued in 2010 in a project aimed to test and examine the isostatic manufacturing process of buffer blocks at 70 % scale (block diameter 1200 to 1300 mm), and the aim was to continue in 2011 with full-scale blocks (diameter 1700 mm). A total of nine bentonite blocks were manufactured at 70 % scale, of which four were ring-shaped and the rest were cylindrical. It is currently not possible to manufacture full-scale blocks, because there is no sufficiently large isostatic press available. However, such a compression unit is expected to be possible to use in the near future. The test results of bentonite blocks, produced with an isostatic pressing method at different presses and at different sizes, suggest that the technical characteristics, for example bulk density and strength values, are somewhat independent of the size of the block, and that the blocks have fairly homogenous characteristics. Water content and compression pressure are the two most important properties determining the characteristics of the compressed blocks. By adjusting these two properties it is fairly easy to produce blocks at a desired density. The commonly used compression pressure in the manufacturing of bentonite blocks is 100 MPa, which compresses bentonite to approximately

  15. Soliton compression to few-cycle pulses with a high quality factor by engineering cascaded quadratic nonlinearities.

    Science.gov (United States)

    Zeng, Xianglong; Guo, Hairun; Zhou, Binbin; Bache, Morten

    2012-11-19

    We propose an efficient approach to improve few-cycle soliton compression with cascaded quadratic nonlinearities by using an engineered multi-section structure of the nonlinear crystal. By exploiting engineering of the cascaded quadratic nonlinearities, in each section soliton compression with a low effective order is realized, and high-quality few-cycle pulses with large compression factors are feasible. Each subsequent section is designed so that the compressed pulse exiting the previous section experiences an overall effective self-defocusing cubic nonlinearity corresponding to a modest soliton order, which is kept larger than unity to ensure further compression. This is done by increasing the cascaded quadratic nonlinearity in the new section with an engineered reduced residual phase mismatch. The low soliton orders in each section ensure excellent pulse quality and high efficiency. Numerical results show that compressed pulses with less than three-cycle duration can be achieved even when the compression factor is very large, and in contrast to standard soliton compression, these compressed pulses have minimal pedestal and high quality factor.

  16. The effect of breast compression on mass conspicuity in digital mammography

    International Nuclear Information System (INIS)

    Saunders, Robert S. Jr; Samei, Ehsan

    2008-01-01

    This study analyzed how the inherent quality of diagnostic information in digital mammography could be affected by breast compression. A digital mammography system was modeled using a Monte Carlo algorithm based on the Penelope program, which has been successfully used to model several medical imaging systems. First, the Monte Carlo program was validated against previous measurements and simulations. Once validated, the Monte Carlo software modeled a digital mammography system by tracking photons through a voxelized software breast phantom, containing anatomical structures and breast masses, and following photons until they were absorbed by a selenium-based flat-panel detector. Simulations were performed for two compression conditions (standard compression and 12.5% reduced compression) and three photon flux conditions (constant flux, constant detector signal, and constant glandular dose). The results showed that reduced compression led to higher scatter fractions, as expected. For the constant photon flux condition, decreased compression also reduced glandular dose. For constant glandular dose, the SdNR for a 4 cm breast was 0.60±0.11 and 0.62±0.11 under standard and reduced compressions, respectively. For the 6 cm case with constant glandular dose, the SdNR was 0.50±0.11 and 0.49±0.10 under standard and reduced compressions, respectively. The results suggest that if a particular imaging system can handle an approximately 10% increase in total tube output and 10% decrease in detector signal, breast compression can be reduced by about 12% in terms of breast thickness with little impact on image quality or dose.

  17. The increase of compressive strength of natural polymer modified concrete with Moringa oleifera

    Science.gov (United States)

    Susilorini, Rr. M. I. Retno; Santosa, Budi; Rejeki, V. G. Sri; Riangsari, M. F. Devita; Hananta, Yan's. Dianaga

    2017-03-01

    Polymer modified concrete is one of several concrete technology innovations to meet the need for strong and durable concrete. Previous research found that Moringa oleifera can be applied as a natural polymer modifier in mortars. Natural polymer modified mortar using Moringa oleifera has been shown to increase its compressive strength significantly. In this research, Moringa oleifera seeds were ground and added into the concrete mix for natural polymer modified concrete, based on the optimum composition of previous research. The research investigated the increase of compressive strength of polymer modified concrete with Moringa oleifera as a natural polymer modifier. There were 3 compositions of natural polymer modified concrete with Moringa oleifera, referred to previous research optimum compositions. Several cylindrical specimens of 10 cm x 20 cm were produced and tested for compressive strength at ages of 7, 14, and 28 days. The research reached the following conclusions: (1) Natural polymer modified concrete with Moringa oleifera, with and without skin, has higher compressive strength compared to natural polymer modified mortar with Moringa oleifera and also control specimens; (2) The best result for natural polymer modified concrete with Moringa oleifera without skin is achieved by specimens containing Moringa oleifera at 0.2% of cement weight; and (3) The compressive strength increase of natural polymer modified concrete with Moringa oleifera without skin is about 168.11-221.29% compared to control specimens.

  18. Motion-Compensated Compression of Dynamic Voxelized Point Clouds.

    Science.gov (United States)

    De Queiroz, Ricardo L; Chou, Philip A

    2017-05-24

    Dynamic point clouds are a potential new frontier in visual communication systems. A few articles have addressed the compression of point clouds, but very few references exist on exploring temporal redundancies. This paper presents a novel motion-compensated approach to encoding dynamic voxelized point clouds at low bit rates. A simple coder breaks the voxelized point cloud at each frame into blocks of voxels. Each block is either encoded in intra-frame mode or is replaced by a motion-compensated version of a block in the previous frame. The decision is optimized in a rate-distortion sense. In this way, both the geometry and the color are encoded with distortion, allowing for reduced bit-rates. In-loop filtering is employed to minimize compression artifacts caused by distortion in the geometry information. Simulations reveal that this simple motion compensated coder can efficiently extend the compression range of dynamic voxelized point clouds to rates below what intra-frame coding alone can accommodate, trading rate for geometry accuracy.
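
    The per-block intra-versus-inter decision described above can be sketched as a toy rate-distortion comparison; the cost models below (mean squared error for the motion-compensated block, a variance-based rate proxy for intra coding, and a fixed motion-vector rate) are stand-in assumptions, not the models used in the paper.

        import numpy as np

        def rd_mode_decision(block, mc_block, lam=0.1):
            # Inter mode: reuse the motion-compensated block from the previous frame;
            # distortion is the MSE against it, rate is an assumed fixed motion-vector cost.
            d_inter = np.mean((block - mc_block) ** 2)
            j_inter = d_inter + lam * 16.0
            # Intra mode: code the block itself; rate is approximated by a variance-based
            # entropy proxy and distortion is taken as zero for this toy.
            r_intra = block.size * np.log2(1.0 + np.var(block))
            j_intra = lam * r_intra
            return "inter" if j_inter <= j_intra else "intra"

        rng = np.random.default_rng(0)
        cur = rng.integers(0, 256, (8, 8, 8)).astype(float)        # one block of voxel attributes
        well_predicted = cur + rng.normal(0.0, 2.0, cur.shape)
        poorly_predicted = rng.integers(0, 256, (8, 8, 8)).astype(float)
        print(rd_mode_decision(cur, well_predicted))               # expected: inter
        print(rd_mode_decision(cur, poorly_predicted))             # expected: intra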

  19. Rupture of sigmoid colon caused by compressed air.

    Science.gov (United States)

    Yin, Wan-Bin; Hu, Ji-Lin; Gao, Yuan; Zhang, Xian-Xiang; Zhang, Mao-Shen; Liu, Guang-Wei; Zheng, Xue-Feng; Lu, Yun

    2016-03-14

    Compressed air has been generally used since the beginning of the 20th century for various applications. However, rupture of the colon caused by compressed air is uncommon. We report a case of pneumatic rupture of the sigmoid colon. The patient was admitted to the emergency room complaining of abdominal pain and distention. His colleague triggered a compressed air nozzle against his anus as a practical joke 2 h previously. On arrival, his pulse rate was 126 beats/min, respiratory rate was 42 breaths/min and blood pressure was 86/54 mmHg. Physical examination revealed peritoneal irritation and the abdomen was markedly distended. Computed tomography of the abdomen showed a large volume of air in the abdominal cavity. Peritoneocentesis was performed to relieve the tension pneumoperitoneum. Emergency laparotomy was done after controlling shock. Laparotomy revealed a 2-cm perforation in the sigmoid colon. The perforation was sutured and temporary ileostomy was performed as well as thorough drainage and irrigation of the abdominopelvic cavity. Reversal of ileostomy was performed successfully after 3 mo. Follow-up was uneventful. We also present a brief literature review.

  20. Development of 1D Liner Compression Code for IDL

    Science.gov (United States)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL) where an FRC plasmoid is compressed via inductively-driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed at each time step in sequence to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through use of a correction-factor table lookup approach. A commercial low-frequency electromagnetic field solver, ANSYS Maxwell 3D, is used to solve the magnetic field profile for static liner conditions at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with the results from a commercial explicit dynamics solver, ANSYS Explicit Dynamics, and a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.
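
    The per-time-step sequencing described above (driver circuit, then magnetic field, then liner dynamics) can be illustrated with a heavily simplified, one-way-coupled explicit Euler loop; every parameter and the ideal-solenoid and thin-shell models below are made-up stand-ins, not IDL design values, and the real code additionally feeds the liner motion back into the circuit and includes joule heating.

        import math

        mu0 = 4e-7 * math.pi
        C_bank, L_coil, R_coil = 500e-6, 100e-9, 5e-3     # hypothetical bank/coil values (F, H, ohm)
        turns_per_m = 20.0                                # hypothetical winding density (1/m)
        sigma_liner = 2.0                                 # hypothetical liner areal mass (kg/m^2)
        V, I = 20e3, 0.0                                  # initial bank voltage and coil current
        r, v = 0.10, 0.0                                  # liner radius (m) and radial velocity (m/s)
        dt = 1e-8

        for step in range(200000):
            # 1) driver circuit: series RLC discharge of the capacitor bank
            I += (V - R_coil * I) / L_coil * dt
            V -= I / C_bank * dt
            # 2) magnetic field and pressure at the liner (ideal solenoid model)
            B = mu0 * turns_per_m * I
            p_mag = B * B / (2.0 * mu0)
            # 3) liner dynamics: magnetic pressure drives the thin shell inward
            v -= p_mag / sigma_liner * dt
            r += v * dt
            if r <= 0.01:                                 # stop near peak compression
                break

        print(f"stopped at step {step}: r = {100 * r:.2f} cm, I = {I / 1e3:.1f} kA")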

  1. Compressive sensing for spatial and spectral flame diagnostics.

    Science.gov (United States)

    Starling, David J; Ranalli, Joseph

    2018-02-07

    Combustion research requires the use of state of the art diagnostic tools, including high energy lasers and gated, cooled CCDs. However, these tools may present a cost barrier for laboratories with limited resources. While the cost of high energy lasers and low-noise cameras continues to decline, new imaging technologies are being developed to address both cost and complexity. In this paper, we analyze the use of compressive sensing for flame diagnostics by reconstructing Raman images and calculating mole fractions as a function of radial depth for a highly strained N2-H2 diffusion flame. We find good agreement with previous results, and discuss the benefits and drawbacks of this technique.

  2. Indentation of elastically soft and plastically compressible solids

    DEFF Research Database (Denmark)

    Needleman, A.; Tvergaard, Viggo; Van der Giessen, E.

    2015-01-01

    The effect of soft elasticity, i.e., a relatively small value of the ratio of Young's modulus to yield strength and plastic compressibility on the indentation of isotropically hardening elastic-viscoplastic solids is investigated. Calculations are carried out for indentation of a perfectly sticking...... the ratio of nominal indentation hardness to yield strength. A linear relation is found between the nominal indentation hardness and the logarithm of the ratio of Young's modulus to yield strength, but with a different coefficient than reported in previous studies. The nominal indentation hardness decreases...

  3. Modelling for Fuel Optimal Control of a Variable Compression Engine

    OpenAIRE

    Nilsson, Ylva

    2007-01-01

    Variable compression engines are a means to meet the demand for lower fuel consumption. A high compression ratio results in high engine efficiency, but also increases the knock tendency. On conventional engines with fixed compression ratio, knock is avoided by retarding the ignition angle. The variable compression engine offers an extra dimension in knock control, since both ignition angle and compression ratio can be adjusted. The central question is thus for what combination of compression ra...

  4. Cascaded Soliton Compression of Energetic Femtosecond Pulses at 1030 nm

    DEFF Research Database (Denmark)

    Bache, Morten; Zhou, Binbin

    2012-01-01

    We discuss soliton compression with cascaded second-harmonic generation of energetic femtosecond pulses at 1030 nm. We discuss problems encountered with soliton compression of long pulses and show that sub-10 fs compressed pulses can be achieved.

  5. Critical Boundary of Cascaded Quadratic Soliton Compression in PPLN

    DEFF Research Database (Denmark)

    Guo, Hairun; Zeng, Xianglong; Zhou, Binbin

    2012-01-01

    Cascaded quadratic soliton compression in PPLN is investigated and a general critical soliton number is found as the compression boundary. An optimal-parameter diagram for compression at 1550 nm is presented.

  6. GPU Lossless Hyperspectral Data Compression System

    Science.gov (United States)

    Aranki, Nazeeh I.; Keymeulen, Didier; Kiely, Aaron B.; Klimesh, Matthew A.

    2014-01-01

    Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of a high throughput. In order to achieve a high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA(R). The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO- 42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral image compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly-parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.

  7. Multishock Compression Properties of Warm Dense Argon

    Science.gov (United States)

    Zheng, Jun; Chen, Qifeng; Yunjun, Gu; Li, Zhiguo; Shen, Zhijun

    2015-10-01

    Warm dense argon was generated by a shock reverberation technique. The diagnostics of warm dense argon were performed by a multichannel optical pyrometer and a velocity interferometer system. The equations of state in the pressure-density range of 20-150 GPa and 1.9-5.3 g/cm3 from the first- to fourth-shock compression were presented. The single-shock temperatures in the range of 17.2-23.4 kK were obtained from the spectral radiance. Experimental results indicate that the multiple shock-compression ratio (ηi = ρi/ρ0) is greatly enhanced from 3.3 to 8.8, where ρ0 is the initial density of argon and ρi (i = 1, 2, 3, 4) is the compressed density from first to fourth shock, respectively. For the relative compression ratio (ηi' = ρi/ρi-1), an interesting finding is that a turning point occurs at the second-shock states under the conditions of different experiments, and ηi' increases with pressure in the lower density regime and conversely decreases with pressure in the higher density regime. The evolution of the compression ratio is controlled by the excitation of internal degrees of freedom, which increase the compression, and by the interaction effects between particles that reduce it. A temperature-density plot shows that the current multishock compression states of argon lie in the warm dense regime.

  8. Compression experiments on the TOSKA tokamak

    International Nuclear Information System (INIS)

    Cima, G.; McGuire, K.M.; Robinson, D.C.; Wootton, A.J.

    1980-10-01

    Results from minor radius compression experiments on a tokamak plasma in TOSCA are reported. The compression is achieved by increasing the toroidal field up to twice its initial value in 200μs. Measurements show that particles and magnetic flux are conserved. When the initial energy confinement time is comparable with the compression time, energy gains are greater than for an adiabatic change of state. The total beta value increases. Central beta values approximately 3% are measured when a small major radius compression is superimposed on a minor radius compression. Magnetic field fluctuations are affected: both the amplitude and period decrease. Starting from low energy confinement times, approximately 200μs, increases in confinement times up to approximately 1 ms are measured. The increase in plasma energy results from a large reduction in the power losses during the compression. When the initial energy confinement time is much longer than the compression time, the parameter changes are those expected for an adiabatic change of state. (author)

  9. The Compressed Baryonic Matter experiment

    Directory of Open Access Journals (Sweden)

    Seddiki Sélim

    2014-04-01

    Full Text Available The Compressed Baryonic Matter (CBM) experiment is a next-generation fixed-target detector which will operate at the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt. The goal of this experiment is to explore the QCD phase diagram in the region of high net baryon densities using high-energy nucleus-nucleus collisions. Its research program includes the study of the equation-of-state of nuclear matter at high baryon densities, the search for the deconfinement and chiral phase transitions and the search for the QCD critical point. The CBM detector is designed to measure both bulk observables with a large acceptance and rare diagnostic probes such as charm particles, multi-strange hyperons, and low mass vector mesons in their di-leptonic decay. The physics program of CBM will be summarized, followed by an overview of the detector concept, a selection of the expected physics performance, and the status of preparation of the experiment.

  10. Rapid reconnection in compressible plasma

    International Nuclear Information System (INIS)

    Heyn, M.F.; Semenov, V.S.

    1996-01-01

    A study of set-up, propagation, and interaction of non-linear and linear magnetohydrodynamic waves driven by magnetic reconnection is presented. The source term of the waves generated by magnetic reconnection is obtained explicitly in terms of the initial background conditions and the local reconnection electric field. The non-linear solution of the problem found earlier serves as a basis for formulation and extensive investigation of the corresponding linear initial-boundary value problem of compressible magnetohydrodynamics. In plane geometry, the Green's function of the problem is obtained and its properties are discussed. For the numerical evaluation it turns out that a specific choice of the integration contour in the complex plane of phase velocities is much more effective than the convolution with the real Green's function. Many complex effects like intrinsic wave coupling, anisotropic propagation characteristics, generation of surface and side wave modes in a finite beta plasma are retained in this analysis. copyright 1996 American Institute of Physics

  11. Experimental study on compression property of regolith analogues

    Science.gov (United States)

    Omura, Tomomi; Nakamura, Akiko M.

    2017-12-01

    The compression property of regolith reflects the strength and porosity of the regolith layer on small bodies and their variations in the layer that largely influence the collisional and thermal evolution of the bodies. We conducted compression experiments and investigated the relationship between the porosity and the compression using fluffy granular samples. We focused on a low-pressure and high-porosity regime. We used tens of μm-sized irregular and spherical powders as analogs of porous regolith. The initial porosity of the samples ranged from 0.80 to 0.53. The uniaxial pressure applied to the samples lays in the range from 30 to 4 × 105 Pa. The porosity of the samples remained at their initial values below a threshold pressure and then decreased when the pressure exceeded the threshold. We defined this uniaxial pressure at the threshold as "yield strength". The yield strength increased as the initial porosity of a sample decreased. The yield strengths of samples consisting of irregular particles did not significantly depend on their size distributions when the samples had the same initial porosity. We compared the results of our experiments with a previously proposed theoretical model. We calculated the average interparticle force acting on contact points of constituent particles under the uniaxial pressure of yield strength using the theoretical model and compared it with theoretically estimated forces required to roll or slide the particles. The calculated interparticle force was larger than the rolling friction force and smaller than the sliding friction force. The yield strength of regolith may be constrained by these forces. Our results may be useful for planetary scientists to estimate the depth above which the porosity of a regolith layer is almost equal to that of the regolith surface and to interpret the compression property of an asteroid surface obtained by a lander.

  12. Rhythm analysis and charging during chest compressions reduces compression pause time.

    Science.gov (United States)

    Partridge, R; Tan, Q; Silver, A; Riley, M; Geheb, F; Raymond, R

    2015-05-01

    Prolonged chest compression interruptions immediately preceding and following a defibrillation shock reduce shock success and survival after cardiac arrest. We tested the hypothesis that compression pauses would be shorter using an AED equipped with a new Analysis during Compressions with Fast Reconfirmation (ADC-FR) technology, which features automated rhythm analysis and charging during compressions with brief reconfirmation analysis during compression pause, compared with standard AED mode. BLS-certified emergency medical technicians (EMTs) worked in pairs and performed two trials of simulated cardiac resuscitation with a chest compression sensing X Series defibrillator (ZOLL Medical). Each pair was randomized to perform a trial of eight 2-min compression intervals (randomly assigned to receive four shockable and four non-shockable rhythms) with the defibrillator in standard AED mode and another trial in ADC-FR mode. Subjects were advised to follow defibrillator prompts, defibrillate if "shock advised," and switch compressors every two intervals. Compression quality data were reviewed using RescueNet Code Review (ZOLL Medical) and analyzed using paired t-tests. Thirty-two EMT-basic prehospital providers (59% male; median 25 years age [IQR 22-27]) participated in the study. End of interval compression interruptions were significantly reduced with ADC-FR vs. AED mode. Compression pauses for rhythm analysis and charging are reduced with use of a novel defibrillator technology, ADC-FR, which features automated rhythm analysis and charging during compressions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Properties of compressible elastica from relativistic analogy.

    Science.gov (United States)

    Oshri, Oz; Diamant, Haim

    2016-01-21

    Kirchhoff's kinetic analogy relates the deformation of an incompressible elastic rod to the classical dynamics of rigid body rotation. We extend the analogy to compressible filaments and find that the extension is similar to the introduction of relativistic effects into the dynamical system. The extended analogy reveals a surprising symmetry in the deformations of compressible elastica. In addition, we use known results for the buckling of compressible elastica to derive the explicit solution for the motion of a relativistic nonlinear pendulum. We discuss cases where the extended Kirchhoff analogy may be useful for the study of other soft matter systems.

  14. An efficient compression scheme for bitmap indices

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrate on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speedup these operations, a number of specialized bitmap compression schemes have been developed; the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general purpose compression schemes, but, the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are not only appropriate for low cardinality attributes but also for high cardinality attributes.In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices and B-tree indices. In addition, we also verified that the average query response time
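
    The flavor of such word-aligned run-length coding can be conveyed with a small Python sketch; it packs a bit vector into literal groups and merged "fill" runs of word-size-minus-one bits, but it returns tuples for readability instead of packing flag bits into real 32-bit words as WAH does, so it is an illustration rather than the authors' scheme.

        def wah_encode(bits, word_bits=32):
            # Each output word covers (word_bits - 1) payload bits: all-zero or all-one
            # groups become run-length "fill" words, everything else a "literal" word.
            payload = word_bits - 1
            padded = bits + [0] * (-len(bits) % payload)
            groups = [padded[i:i + payload] for i in range(0, len(padded), payload)]
            words = []
            for g in groups:
                if all(b == 0 for b in g) or all(b == 1 for b in g):
                    fill_bit = g[0]
                    if words and words[-1][0] == "fill" and words[-1][1] == fill_bit:
                        words[-1] = ("fill", fill_bit, words[-1][2] + 1)   # merge adjacent fills
                    else:
                        words.append(("fill", fill_bit, 1))
                else:
                    words.append(("literal", g))
            return words

        # A sparse bitmap: one set bit followed by a long run of zeros compresses to
        # a single literal word plus a single fill word.
        bitmap = [1] + [0] * 200
        print(wah_encode(bitmap))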

  15. Compressed Subsequence Matching and Packed Tree Coloring

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2017-01-01

    We present a new algorithm for subsequence matching in grammar compressed strings. Given a grammar of size n compressing a string of size N and a pattern string of size m over an alphabet of size σ, our algorithm uses O(n + nσ/w) space and O(n + nσ/w + m log ... a new data structure that allows us to efficiently find the next occurrence of a given character after a given position in a compressed string. This data structure in turn is based on a new data structure for the tree color problem, where the node colors are packed in bit strings.
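
    The supporting "next occurrence of a character after a given position" structure has a simple uncompressed analogue that makes the idea concrete; the sketch below works on the plain string rather than on the grammar, so it ignores the compressed setting that is the point of the paper.

        def build_next_occurrence(s, alphabet):
            # nxt[i][c] = smallest j >= i with s[j] == c, or None if no such position exists.
            n = len(s)
            nxt = [dict.fromkeys(alphabet) for _ in range(n + 1)]
            for i in range(n - 1, -1, -1):
                nxt[i] = dict(nxt[i + 1])
                nxt[i][s[i]] = i
            return nxt

        def is_subsequence(pattern, s, nxt):
            i = 0
            for c in pattern:
                j = nxt[i].get(c)
                if j is None:
                    return False
                i = j + 1
            return True

        s = "grammar"
        nxt = build_next_occurrence(s, set(s))
        print(is_subsequence("gmr", s, nxt), is_subsequence("rg", s, nxt))   # True False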

  16. Compressive sensing with a microwave photonic filter

    DEFF Research Database (Denmark)

    Chen, Ying; Yu, Xianbin; Chi, Hao

    2015-01-01

    In this letter, we present a novel approach to realizing photonics-assisted compressive sensing (CS) with the technique of microwave photonic filtering. In the proposed system, an input spectrally sparse signal to be captured and a random sequence are modulated on an optical carrier via two Mach... to a frequency-dependent power fading, low-pass filtering required in the CS is then realized. A proof-of-concept experiment for compressive sampling and recovery of a signal containing three tones at 310 MHz, 1 GHz and 2 GHz with a compression factor up to 10 is successfully demonstrated. More simulation...

  17. Radial and axial compression of pure electron

    International Nuclear Information System (INIS)

    Park, Y.; Soga, Y.; Mihara, Y.; Takeda, M.; Kamada, K.

    2013-01-01

    Experimental studies are carried out on compression of the density distribution of a pure electron plasma confined in a Malmberg-Penning trap at Kanazawa University. A more than six-fold increase of the on-axis density is observed under application of an external rotating electric field that couples to low-order Trivelpiece-Gould modes. Axial compression of the density distribution by a factor of two in axial length is achieved by controlling the confining potential at both ends of the plasma. A substantial increase of the axial kinetic energy is observed during the axial compression. (author)

  18. Survived ileocecal blowout from compressed air.

    Science.gov (United States)

    Weber, Marco; Kolbus, Frank; Dressler, Jan; Lessig, Rüdiger

    2011-03-01

    Industrial accidents with compressed air entering the gastro-intestinal tract are often fatal. The pressures usually far exceed those used in medical applications such as colonoscopy and lead to extensive injuries of the intestines with high mortality. The case described in this report is of a 26-year-old man who was harmed by compressed air that entered through the anus. He survived because of a prompt emergency operation. This case underlines the necessity of explicit instruction on the hazards of handling compressed air devices to maintain safety at work. Further, our observations support the hypothesis that the mucosa is the most elastic layer of the intestine wall.

  19. Compressive Load Resistance Characteristics of Rice Grain

    OpenAIRE

    Sumpun Chaitep; Chaiy R. Metha Pathawee; Pipatpong Watanawanyoo

    2008-01-01

    An investigation was made to observe the compressive load property of rice grain, both rough rice and brown grain. Six rice varieties (indica and japonica) were examined with the moisture content at 10-12%. Compressive loading with reference to a principal axis normal to the thickness of the grain was conducted at selected inclined angles of 0°, 15°, 30°, 45°, 60° and 70°. The result showed the compressive load resistance of rice grain based on its characteristic of yield s...

  20. Evolution Of Nonlinear Waves in Compressing Plasma

    Energy Technology Data Exchange (ETDEWEB)

    P.F. Schmit, I.Y. Dodin, and N.J. Fisch

    2011-05-27

    Through particle-in-cell simulations, the evolution of nonlinear plasma waves is examined in one-dimensional collisionless plasma undergoing mechanical compression. Unlike linear waves, whose wavelength decreases proportionally to the system length L(t), nonlinear waves, such as solitary electron holes, conserve their characteristic size Δ during slow compression. This leads to a substantially stronger adiabatic amplification as well as rapid collisionless damping when L approaches Δ. On the other hand, cessation of compression halts the wave evolution, yielding a stable mode.

  1. The effect of logarithmic compression on estimation of the Nakagami parameter for ultrasonic tissue characterization: a simulation study.

    Science.gov (United States)

    Tsui, Po-Hsiang; Wang, Shyh-Hau; Huang, Chih-Chung

    2005-07-21

    Previous studies have demonstrated that the Nakagami parameter estimated using the envelopes of backscattered ultrasound is useful in detecting variations in the concentration of scatterers in tissues. The signal processing in those studies was linear, whereas nonlinear logarithmic compression is routinely employed in existing ultrasonic scanners. We therefore explored the effect of the logarithmic compression on the estimation of the Nakagami parameter in this study. Computer simulations were used to produce backscattered signals of various scatterer concentrations for the estimation of the Nakagami parameters before and after applying the logarithmic compression on the backscattered envelopes. The simulated results showed that the logarithmic compression would move the statistics of the backscattered envelopes towards post-Rayleigh distributions for most scatterer concentrations. Moreover, the Nakagami parameter calculated using compressed backscattered envelopes is more sensitive than that calculated using uncompressed envelopes in differentiating variations in the scatterer concentration, making the former better at quantifying the scatterer concentration in biological tissues.
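
    For reference, a standard moment-based (inverse normalized variance) estimator of the Nakagami m parameter takes only a few lines; the sketch below, including the arbitrary log-compression constant, is an editorial illustration of the kind of estimator involved, not the simulation pipeline used in the paper.

        import numpy as np

        def nakagami_m(envelope):
            # Inverse normalized variance estimator: m = E[R^2]^2 / Var(R^2).
            p = np.asarray(envelope, dtype=float) ** 2
            return p.mean() ** 2 / p.var()

        rng = np.random.default_rng(1)
        env = np.abs(rng.normal(size=100000) + 1j * rng.normal(size=100000))   # Rayleigh envelope
        print(nakagami_m(env))                    # close to 1 for fully developed speckle
        print(nakagami_m(np.log(1 + 100 * env)))  # logarithmic compression shifts the estimate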

  2. Soliton compression to few-cycle pulses with a high quality factor by engineering cascaded quadratic nonlinearities

    DEFF Research Database (Denmark)

    Zeng, Xianglong; Guo, Hairun; Zhou, Binbin

    2012-01-01

    We propose an efficient approach to improve few-cycle soliton compression with cascaded quadratic nonlinearities by using an engineered multi-section structure of the nonlinear crystal. By exploiting engineering of the cascaded quadratic nonlinearities, in each section soliton compression...... with a low effective order is realized, and high-quality few-cycle pulses with large compression factors are feasible. Each subsequent section is designed so that the compressed pulse exiting the previous section experiences an overall effective self-defocusing cubic nonlinearity corresponding to a modest...... soliton order, which is kept larger than unity to ensure further compression. This is done by increasing the cascaded quadratic nonlinearity in the new section with an engineered reduced residual phase mismatch. The low soliton orders in each section ensure excellent pulse quality and high efficiency...

  3. 22 CFR 40.91 - Certain aliens previously removed.

    Science.gov (United States)

    2010-04-01

    22 CFR Part 40 (Foreign Relations), Immigrants Under the Immigration and Nationality Act, as Amended; Aliens Previously Removed. § 40.91 Certain aliens previously removed. (a) 5-year bar. An alien who has been found inadmissible, whether as a result...

  4. Determining root correspondence between previously and newly detected objects

    Science.gov (United States)

    Paglieroni, David W.; Beer, N Reginald

    2014-06-17

    A system that applies attribute and topology based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.

  5. Compression Behavior of Single-Layer Graphenes

    Czech Academy of Sciences Publication Activity Database

    Frank, Otakar; Tsoukleri, G.; Parthenios, J.; Papagelis, K.; Riaz, I.; Jalil, R.; Novoselov, K. S.; Galiotis, C.

    2010-01-01

    Vol. 4, No. 6 (2010), pp. 3131-3138. ISSN 1936-0851. Institutional research plan: CEZ:AV0Z40400503. Keywords: buckling; compression; graphene. Subject RIV: CG - Electrochemistry. Impact factor: 9.855, year: 2010

  6. Compression Behavior of High Performance Polymeric Fibers

    National Research Council Canada - National Science Library

    Kumar, Satish

    2003-01-01

    Hydrogen bonding has proven to be effective in improving the compressive strength of rigid-rod polymeric fibers without resulting in a decrease in tensile strength, while covalent crosslinking results in brittle fibers...

  7. Real-Time Compression Of Digital Video

    Science.gov (United States)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Shalkhauser, Mary JO; Marcopoli, Vincent R.

    1995-01-01

    Enhanced DPCM video compression algorithm utilizes non-uniform quantizer, non-adaptive predictor, and multi-level Huffman coder to substantially reduce data rate below that achievable with conventional DPCM. Images reconstructed without noticeable degradation.
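
    For orientation, a bare-bones previous-sample DPCM loop with a non-uniform quantizer looks roughly like the sketch below; the quantizer levels are illustrative, and the multi-level Huffman coding and other refinements of the actual codec are omitted.

    import numpy as np

    # Illustrative non-uniform quantizer levels (finer near zero, coarser for large errors).
    LEVELS = np.array([-48, -24, -12, -6, -2, 0, 2, 6, 12, 24, 48], dtype=float)

    def quantize(e):
        """Map a prediction error to the nearest quantizer level."""
        return LEVELS[np.argmin(np.abs(LEVELS - e))]

    def dpcm_encode_decode(samples):
        """Previous-sample DPCM loop; the quantized errors would feed an entropy coder."""
        prediction, errors, reconstruction = 0.0, [], []
        for s in samples:
            q = quantize(s - prediction)            # quantized prediction error (symbol to code)
            prediction = np.clip(prediction + q, 0, 255)
            errors.append(q)
            reconstruction.append(prediction)
        return np.array(errors), np.array(reconstruction)

    line = np.array([100, 102, 105, 110, 130, 131, 129, 90, 88, 87], dtype=float)
    errs, recon = dpcm_encode_decode(line)
    print(errs)
    print(recon)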

  8. Compressed and Practical Data Structures for Strings

    DEFF Research Database (Denmark)

    Christiansen, Anders Roy

    in the following. Finger Search in Grammar-Compressed Strings. Grammar-based compression, where one replaces a long string by a small context-free grammar that generates the string, is a simple and powerful paradigm that captures many popular compression schemes. Given a grammar, the random access problem...... string. We present new data structures that achieve optimal time for updates and queries while using space linear in the size of the optimal relative compression, for nearly all combinations of parameters. We also present solutions for restricted and extended sets of updates. To achieve these results, we...... revisit the dynamic partial sums problem and the substring concatenation problem. We present new optimal or near optimal bounds for these problems. Plugging in our new results we also immediately obtain new bounds for the string indexing for patterns with wildcards problem and the dynamic text and static...
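
    To make the setting concrete, here is a toy straight-line grammar with random access via stored expansion lengths. This is purely illustrative: it is not the compressed data structure developed in the thesis, and real grammars would be produced by a compressor such as Re-Pair.

    from functools import lru_cache

    # Toy straight-line grammar: each nonterminal expands to exactly two symbols;
    # terminals are single characters. The start symbol S expands to "ababab".
    rules = {
        "A": ("a", "b"),     # A -> ab
        "B": ("A", "A"),     # B -> AA   (expands to "abab")
        "S": ("B", "A"),     # S -> BA   (expands to "ababab")
    }

    @lru_cache(maxsize=None)
    def length(sym):
        """Length of the expansion of sym, computed once per symbol."""
        if sym not in rules:
            return 1
        left, right = rules[sym]
        return length(left) + length(right)

    def access(sym, i):
        """Return the i-th character of the expansion of sym without decompressing it all."""
        while sym in rules:
            left, right = rules[sym]
            if i < length(left):
                sym = left
            else:
                i -= length(left)
                sym = right
        return sym

    print("".join(access("S", i) for i in range(length("S"))))   # ababab
    print(access("S", 4))                                        # a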

  9. Optimizing compressive strength characteristics of hollow building

    African Journals Online (AJOL)

    eobe

    ... mm². It is suggested, therefore, that the optimum replacement of sand with granite quarry dust as fine aggregates should be 15% of the ... Keywords: hollow building blocks, granite dust, sand, partial replacement, compressive strength.

  10. Video Compression Algorithms for Transmission and Video

    National Research Council Canada - National Science Library

    Zakhor, Avideh

    1997-01-01

    .... We developed a real-time, software-only scalable video compression codec. We have also optimized the scalable coder for transmission over wireless links by jointly optimizing the channel and source coders...

  11. Interactive calculation procedures for mixed compression inlets

    Science.gov (United States)

    Reshotko, Eli

    1983-01-01

    The proper design of engine nacelle installations for supersonic aircraft depends on a sophisticated understanding of the interactions between the boundary layers and the bounding external flows. The successful operation of mixed external-internal compression inlets depends significantly on the ability to closely control the operation of the internal compression portion of the inlet. This portion of the inlet is one where compression is achieved by multiple reflection of oblique shock waves and weak compression waves in a converging internal flow passage. However weak these shocks and waves may seem gas-dynamically, they are of sufficient strength to separate a laminar boundary layer and generally even strong enough for separation or incipient separation of the turbulent boundary layers. An understanding was developed of the viscous-inviscid interactions and of the shock wave boundary layer interactions and reflections.

  12. Pulse power applications of flux compression generators

    International Nuclear Information System (INIS)

    Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.

    1981-01-01

    Characteristics are presented for two different types of explosive-driven flux compression generators and a megavolt pulse transformer. Status reports are given for rail gun and plasma focus programs for which the generators serve as power sources.

  13. Efficiency of Compressed Air Energy Storage

    DEFF Research Database (Denmark)

    Elmegaard, Brian; Brix, Wiebke

    2011-01-01

    The simplest type of Compressed Air Energy Storage (CAES) facility would be an adiabatic process consisting only of a compressor, a storage and a turbine, compressing air into a container when storing and expanding it when producing. This type of CAES would be adiabatic and, if the machines were reversible, would have a storage efficiency of 100%. In practice, however, due to the specific capacity of the storage and of the construction materials, the air is cooled during and after compression, making the CAES process diabatic. The cooling involves exergy losses and thus lowers the efficiency of the storage significantly. The efficiency of CAES as an electricity storage may be defined in several ways; we discuss these and find that the exergetic efficiencies of compression, storage and production together determine the efficiency of CAES. In the paper we find that the efficiency of the practical CAES...
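
    A back-of-the-envelope version of that efficiency argument is shown below; the stage figures are assumed for illustration and are not taken from the paper.

    # Round-trip efficiency of a CAES plant as the product of stage exergetic efficiencies.
    eta_compression = 0.85   # exergy delivered to the store / work into the compressor
    eta_storage     = 0.90   # exergy recovered from the store / exergy delivered (cooling losses)
    eta_expansion   = 0.88   # work out of the turbine / exergy drawn from the store

    eta_round_trip = eta_compression * eta_storage * eta_expansion
    print(f"Round-trip exergetic efficiency: {eta_round_trip:.1%}")   # about 67%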

  14. Compressed Sensing for Wideband Cognitive Radios

    National Research Council Canada - National Science Library

    Tian, Zhi; Giannakis, Georgios B

    2007-01-01

    .... Capitalizing on the sparseness of the signal spectrum in open-access networks, this paper develops compressed sensing techniques tailored for the coarse sensing task of spectrum hole identification...
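
    The premise — that a sparsely occupied spectrum can be recovered from far fewer measurements than Nyquist sampling would require — can be illustrated with a generic sparse-recovery sketch (orthogonal matching pursuit with a random measurement matrix). This is a standard textbook illustration, not the sensing scheme developed in the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    # Sparse spectrum: only K of N frequency bins are occupied; the rest are spectrum holes.
    N, K, M = 128, 4, 40                       # bins, occupied bins, compressive measurements
    spectrum = np.zeros(N)
    spectrum[rng.choice(N, K, replace=False)] = rng.uniform(1, 2, K)

    A = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix
    y = A @ spectrum                               # sub-Nyquist measurements

    def omp(A, y, k):
        """Orthogonal matching pursuit: recover a k-sparse vector from y = A x."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s
        x = np.zeros(A.shape[1])
        x[support] = x_s
        return x

    recovered = omp(A, y, K)
    print("occupied bins: ", np.flatnonzero(spectrum))
    print("recovered bins:", np.flatnonzero(recovered > 0.5))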

  15. Seneca Compressed Air Energy Storage (CAES) Project

    Energy Technology Data Exchange (ETDEWEB)

    None

    2012-11-30

    This document provides specifications for the process air compressor for a compressed air storage project, requests a budgetary quote, and provides supporting information, including compressor data, site specific data, water analysis, and Seneca CAES value drivers.

  16. Smoking and the compression of morbidity

    NARCIS (Netherlands)

    W.J. Nusselder (Wilma); C.W.N. Looman (Caspar); P.J. Marang-van de Mheen; H. van de Mheen (Dike); J.P. Mackenbach (Johan)

    2000-01-01

    OBJECTIVE: To examine whether eliminating smoking will lead to a reduction in the number of years lived with disability (that is, absolute compression of morbidity). DESIGN: Multistate life table calculations based on the longitudinal GLOBE study (the

  17. Toward compression of small cell population: harnessing stress in passive regions of dielectric elastomer actuators

    Science.gov (United States)

    Poulin, Alexandre; Rosset, Samuel; Shea, Herbert

    2014-03-01

    We present a dielectric elastomer actuator (DEA) for in vitro analysis of mm²-sized biological samples under periodic compressive stress. Understanding how mechanical stimuli affect cell functions could lead to significant advances in disease diagnosis and drug development. We previously reported an array of 72 micro-DEAs on a chip to apply a periodic stretch to cells. To diversify our cell mechanotransduction toolkit we have developed an actuator for periodic compression of small cell populations. The device is based on a novel design which exploits the effects of non-equibiaxial pre-stretch and takes advantage of the stress induced in passive regions of DEAs. The device consists of two active regions separated by a 2 mm × 2 mm passive area. When connected to an AC high-voltage source, the two active regions periodically compress the passive region. Due to the non-equibiaxial pre-stretch, this induces a uniaxial compressive strain greater than 10%. Cells adsorbed on top of this passive gap would experience the same uniaxial compressive strain. The electrode configuration confines the electric field and prevents it from reaching the biological sample. A thin layer of silicone is cast on top of the device to ensure a biocompatible environment. This design provides several advantages over alternative technologies, such as high optical transparency of the area of interest (the passive region under compression) and its potential for miniaturization and parallelization.

  18. Telephone transmission of 20-channel digital electroencephalogram using lossless data compression.

    Science.gov (United States)

    Rozza, L; Tonella, P; Bertamini, C; Orrico, D; Antoniol, G; Castellaro, L

    1996-01-01

    The use of telecommunications for computer-assisted transmission of neurophysiological signals is a relatively new practice. With the development of digital technology, it is now possible to record electroencephalograms (EEGs) in digital form. Previous reports have demonstrated the possibility of real-time telephone transmission of a limited number of EEG channels. To assess the effectiveness of specific data-compression software to improve the transmission of digital 20-channel EEG records over ordinary public telephone lines. A prototype system was built to transmit digital EEG signals from one computer to another using two 14.4-kbps modems and proprietary lossless data-compression software. Forty compressed digital EEG records of 20 channels each were sent from different locations at variable distances using "plain old telephone service" (POTS). The mean compression ratio was 2.2 to 2.8:1 using a sampling frequency of 128 Hz and 2.8:1 at a sampling rate of 256 Hz. Transmission time was reduced proportionately. Although this study used a store-and-forward approach, the results suggest that it may be possible to transmit a large number of compressed EEG channels in real time using data compression.
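
    The reported figures can be sanity-checked with simple arithmetic; in the sketch below the 16-bit sample depth and 20-minute record length are assumptions, while the modem rate, channel count, sampling rate, and compression ratio come from the abstract.

    # Transmission time of a 20-channel digital EEG record over a 14.4 kbps modem.
    channels, fs, bits_per_sample = 20, 128, 16
    record_seconds = 20 * 60                 # assumed 20-minute record
    modem_bps = 14_400
    compression_ratio = 2.8

    raw_bits = channels * fs * bits_per_sample * record_seconds
    for ratio in (1.0, compression_ratio):
        minutes = raw_bits / ratio / modem_bps / 60
        print(f"ratio {ratio:.1f}:1 -> {minutes:.1f} minutes")   # ~57 min raw, ~20 min compressed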

  19. Study on Compression Induced Contrast in X-ray Mammograms Using Breast Mimicking Phantoms

    Directory of Open Access Journals (Sweden)

    A. B. M. Aowlad Hossain

    2015-09-01

    X-ray mammography is commonly used to screen for cancers or tumors in the breast using low-dose x-rays, but mammograms suffer from a low-contrast problem. The breast is compressed in mammography to reduce x-ray scattering effects. As tumors are stiffer than normal tissues, they undergo smaller deformation under compression; therefore, the image intensity in the tumor region may change less than in the background tissues. In this study, we try to extract compression-induced contrast from multiple mammographic images of tumorous breast phantoms taken at different compressions. This work extends our previous simulation study with an experiment and further analysis. We used FEM models as synthetic phantoms for the simulation and constructed a physical phantom from agar and n-propanol for the experiment. X-ray images of the deformed phantoms were obtained at three compression steps, and a non-rigid registration technique was applied to register these images. It is clearly observed that the image intensity changes at the tumor are smaller than those in the surrounding tissue, which induces a detectable contrast. Adding this compression-induced contrast to the simulated and experimental images improved their original contrast by a factor of about 1.4.

  20. Compressed Representations of Conjunctive Query Results

    OpenAIRE

    Deep, Shaleen; Koutris, Paraschos

    2017-01-01

    Relational queries, and in particular join queries, often generate large output results when executed over a huge dataset. In such cases, it is often infeasible to store the whole materialized output if we plan to reuse it further down a data processing pipeline. Motivated by this problem, we study the construction of space-efficient compressed representations of the output of conjunctive queries, with the goal of supporting the efficient access of the intermediate compressed result for a giv...
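
    A minimal way to see what such a compressed representation buys is a factorized view of a two-relation join: instead of materializing R(a,b) ⋈ S(b,c) tuple by tuple, group the tuples by the join key and enumerate the output on demand. The relations and grouping scheme below are a toy illustration, not the representations or access guarantees developed in the paper.

    from collections import defaultdict

    R = [(1, "x"), (2, "x"), (3, "y")]           # R(a, b)
    S = [("x", 10), ("x", 20), ("y", 30)]        # S(b, c)

    # Group the a-values and c-values by the shared join key b.
    A_by_b, C_by_b = defaultdict(list), defaultdict(list)
    for a, b in R:
        A_by_b[b].append(a)
    for b, c in S:
        C_by_b[b].append(c)

    # Compressed form: one (A-list, C-list) pair per join key, instead of |A|*|C| tuples.
    compressed = {b: (A_by_b[b], C_by_b[b]) for b in A_by_b if b in C_by_b}
    print(compressed)                            # {'x': ([1, 2], [10, 20]), 'y': ([3], [30])}

    # Enumerate the full join output on demand from the compressed representation.
    full = [(a, b, c) for b, (As, Cs) in compressed.items() for a in As for c in Cs]
    print(len(full), full)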