WorldWideScience

Sample records for previously jpeg compressed

  1. JPEG and wavelet compression of ophthalmic images

    Science.gov (United States)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and methods of digital image compression needed to produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different levels with both JPEG and wavelet methods. The compressed images were assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression, and wavelet compression produced better images than JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent of their original size for JPEG and 1.7 percent for wavelet compression before fine detail was lost or image quality became too poor to make a reliable diagnosis.
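
    The objective criterion in this record, the RMS error between the uncompressed and compressed images, can be reproduced with a few lines of NumPy. This is a minimal sketch under my own assumptions (both images decoded to arrays of the same shape and bit depth); it is not code from the study.

    ```python
    import numpy as np

    def rms_error(original: np.ndarray, compressed: np.ndarray) -> float:
        """Root-mean-square error between two decoded images of equal shape."""
        diff = original.astype(np.float64) - compressed.astype(np.float64)
        return float(np.sqrt(np.mean(diff ** 2)))
    ```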

  2. Comparison of JPEG and wavelet compression on intraoral digital radiographic images

    International Nuclear Information System (INIS)

    Kim, Eun Kyung

    2004-01-01

    This study aimed to determine the proper image compression method and ratio for intraoral digital radiographic images without image quality degradation, comparing the discrete cosine transform (DCT)-based JPEG with the wavelet-based JPEG 2000 algorithm. Thirty extracted sound teeth and thirty extracted teeth with occlusal caries were used for this study. Twenty plaster blocks were made with three teeth each. They were radiographically exposed using CDR sensors (Schick Inc., Long Island, USA). Digital images were compressed to JPEG format using Adobe Photoshop v. 7.0 and to JPEG 2000 format using the Jasper program, with compression ratios of 5:1, 9:1, 14:1 and 28:1 each. To evaluate the lesion detectability, receiver operating characteristic (ROC) analysis was performed by three oral and maxillofacial radiologists. To evaluate the image quality, all the compressed images were assessed subjectively using 5 grades, in comparison to the original uncompressed images. Compressed images up to a compression ratio of 14:1 in JPEG and 28:1 in JPEG 2000 showed nearly the same lesion detectability as the original images. In the subjective assessment of image quality, images up to a compression ratio of 9:1 in JPEG and 14:1 in JPEG 2000 showed minute mean paired differences from the original images. The results showed that the clinically acceptable compression ratios were up to 9:1 for JPEG and 14:1 for JPEG 2000. The wavelet-based JPEG 2000 is a better compression method than DCT-based JPEG for intraoral digital radiographic images.

  3. Halftoning processing on a JPEG-compressed image

    Science.gov (United States)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; then the result of the processing is re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide-format printing industry, this problem becomes an important issue: e.g. a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low-quality image, is also described; it de-noises the image and enhances its contours.

  4. JPEG vs. JPEG2000: benchmarking with dermatological images.

    Science.gov (United States)

    Guarneri, F; Vaccaro, M; Guarneri, C; Cannavò, S P

    2014-02-01

    Despite the importance of images in the discipline and the diffusion of digital imaging devices, the issue of image compression in dermatology has been discussed in only a few studies, which yielded results that are often not comparable and left some questions unanswered. To evaluate and compare the performance of the JPEG and JPEG2000 algorithms for compression of dermatological images. Nineteen macroscopic and fifteen videomicroscopic images of skin lesions were compressed with JPEG and JPEG2000 at 18 different compression rates, from 90% to 99.5%. Compressed images were shown, next to their uncompressed versions, to three dermatologists with different experience, who judged quality and suitability for educational/scientific and diagnostic purposes. Moreover, alterations and quality were evaluated by calculating the mean 'distance' of pixel colors between compressed and original images and the peak signal-to-noise ratio, respectively. JPEG2000 was qualitatively better than JPEG at all compression rates, particularly the highest ones, as shown by the dermatologists' ratings and the objective parameters. Agreement between raters was high, but with some differences in specific cases, showing that different professional experience can influence judgement on images. In consideration of its high qualitative performance and wide diffusion, JPEG2000 represents an optimal solution for the compression of digital dermatological images. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  5. Efficient JPEG 2000 Image Compression Scheme for Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Halim Sghaier

    2011-08-01

    Full Text Available When using wireless sensor networks for real-time data transmission, some critical points should be considered. Restricted computational power, reduced memory, narrow bandwidth and limited energy supply impose strong limits on sensor nodes. Therefore, maximizing network lifetime and minimizing energy consumption are constant optimization goals. To overcome the computation and energy limitations of individual sensor nodes during image transmission, an energy-efficient image transport scheme is proposed, taking advantage of the JPEG2000 still image compression standard using MATLAB and C from Jasper. JPEG2000 provides a practical set of features not necessarily available in previous standards. These features were achieved using two techniques: the discrete wavelet transform (DWT) and embedded block coding with optimized truncation (EBCOT). The performance of the proposed image transport scheme is investigated with respect to image quality and energy consumption. Simulation results are presented and show that the proposed scheme optimizes network lifetime and significantly reduces the amount of required memory by analyzing the functional influence of each parameter of this distributed image compression algorithm.

  6. JPEG2000 vs. full frame wavelet packet compression for smart card medical records.

    Science.gov (United States)

    Leehan, Joaquín Azpirox; Lerallut, Jean-Francois

    2006-01-01

    This paper describes a comparison among different compression methods to be used in the context of electronic health records in the newer version of "smart cards". The JPEG2000 standard is compared to a full-frame wavelet packet compression method at high (33:1 and 50:1) compression rates. Results show that the full-frame method outperforms the JPEG2K standard qualitatively and quantitatively.

  7. Clinical evaluation of JPEG2000 compression for digital mammography

    Science.gov (United States)

    Sung, Min-Mo; Kim, Hee-Joung; Kim, Eun-Kyung; Kwak, Jin-Young; Yoo, Jae-Kyung; Yoo, Hyung-Sik

    2002-06-01

    Medical images, such as computed radiography (CR) and digital mammographic images, will require large storage facilities and long transmission times for picture archiving and communication system (PACS) implementation. The American College of Radiology and National Electrical Manufacturers Association (ACR/NEMA) group is planning to adopt the JPEG2000 compression algorithm in the digital imaging and communications in medicine (DICOM) standard to better utilize medical images. The purpose of the study was to evaluate the compression ratios of JPEG2000 for digital mammographic images using the peak signal-to-noise ratio (PSNR), receiver operating characteristic (ROC) analysis, and the t-test. Traditional statistical quality measures such as PSNR, a commonly used measure for the evaluation of reconstructed images, quantify how the reconstructed image differs from the original by making pixel-by-pixel comparisons. The ability to accurately discriminate diseased cases from normal cases is evaluated using ROC curve analysis. ROC curves can be used to compare the diagnostic performance of two or more reconstructed images. The t-test can also be used to evaluate the subjective image quality of reconstructed images. The results of the t-test suggested that the possible compression ratio using JPEG2000 for digital mammographic images may be as much as 15:1 without visual loss or with preservation of significant medical information at a confidence level of 99%, although both the PSNR and ROC analyses suggest that as much as an 80:1 compression ratio can be achieved without affecting clinical diagnostic performance.
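
    The PSNR figure used here as an objective measure can be computed directly from the pixel-by-pixel comparison the abstract describes. The following is a minimal sketch, assuming 8-bit images (peak value 255); it is illustrative, not the authors' code.

    ```python
    import numpy as np

    def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
        """Peak signal-to-noise ratio in dB between an original and a reconstructed image."""
        mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # images are identical
        return float(10.0 * np.log10(peak ** 2 / mse))
    ```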

  8. Low-complexity Compression of High Dynamic Range Infrared Images with JPEG compatibility

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-01-01

    ... Then we compress each image by a JPEG baseline encoder and include the residual image bit stream into the application part of the JPEG header of the base image. As a result, the base image can be reconstructed by a JPEG baseline decoder. If the JPEG bit stream size of the residual image is higher than the raw data size, then we include the raw residual image instead. If the residual image contains only zero values or the quality factor for it is 0, then we do not include the residual image in the header. Experimental results show that compared with JPEG-XT Part 6 with 'global Reinhard' tone-mapping...

  9. A JPEG backward-compatible HDR image compression

    Science.gov (United States)

    Korshunov, Pavel; Ebrahimi, Touradj

    2012-10-01

    High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is however hindered by the lack of standards for evaluation of quality, file formats, and compression, as well as the large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate widespread HDR usage, backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of the perceived quality of tone-mapped LDR images on environmental parameters and image content. Based on the results of the subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward compatible manner to also deal with HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework when compared to state-of-the-art HDR image compression.

  10. Effects of JPEG data compression on magnetic resonance imaging evaluation of small vessels ischemic lesions of the brain

    International Nuclear Information System (INIS)

    Kuriki, Paulo Eduardo de Aguiar; Abdala, Nitamar; Nogueira, Roberto Gomes; Carrete Junior, Henrique; Szejnfeld, Jacob

    2006-01-01

    Objective: to establish the maximum achievable JPEG compression ratio that does not affect quantitative and qualitative magnetic resonance imaging analysis of ischemic lesions in small vessels of the brain. Material and method: fifteen DICOM images were converted to JPEG with compression ratios of 1:10 to 1:60 and were assessed together with the original images by three neuroradiologists. The number, morphology and signal intensity of the lesions were analyzed. Results: lesions were properly identified up to a 1:30 ratio. More lesions were identified at a 1:10 ratio than in the original images. Morphology and edges were properly evaluated up to a 1:40 ratio. Compression did not affect signal intensity. Conclusion: small lesions (< 2 mm) were identified, but at all compression ratios the JPEG algorithm generated image noise that misled observers into identifying more lesions in JPEG images than in DICOM images, thus generating false-positive results. (author)

  11. Context-dependent JPEG backward-compatible high-dynamic range image compression

    Science.gov (United States)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for evaluation of quality, file formats, and compression, as well as the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. We, via a series of subjective evaluations, demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward compatible manner to also deal with HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.

  12. Evaluation of compression ratio using JPEG 2000 on diagnostic images in dentistry

    International Nuclear Information System (INIS)

    Jung, Gi Hun; Han, Won Jeong; Yoo, Dong Soo; Kim, Eun Kyung; Choi, Soon Chul

    2005-01-01

    To find out the proper compression ratios that do not degrade image quality or affect lesion detectability in diagnostic images used in dentistry compressed with the JPEG 2000 algorithm. Sixty Digora periapical images, sixty panoramic computed radiographic (CR) images, sixty computed tomography (CT) images, and sixty magnetic resonance (MR) images were compressed into JPEG 2000 at 10 ratios from 5:1 to 50:1. To evaluate lesion detectability, the images were graded on 5 levels (1: definitely absent; 2: probably absent; 3: equivocal; 4: probably present; 5: definitely present), and then receiver operating characteristic analysis was performed using the original image as the gold standard. To evaluate image quality subjectively, the images were graded on 5 levels (1: definitely unacceptable; 2: probably unacceptable; 3: equivocal; 4: probably acceptable; 5: definitely acceptable), and then a paired t-test was performed. In Digora, CR panoramic and CT images, compressed images up to a ratio of 15:1 showed nearly the same lesion detectability as the original images; in MR images, they did so up to a ratio of 25:1. In Digora and CR panoramic images, compressed images up to a ratio of 5:1 showed little difference between the original and reconstructed images in the subjective assessment of image quality; in CT images they did so up to a ratio of 10:1, and in MR images up to 15:1. We consider compression ratios up to 5:1 in Digora and CR panoramic images, up to 10:1 in CT images, and up to 15:1 in MR images as clinically applicable.

  13. Design of a motion JPEG (M/JPEG) adapter card

    Science.gov (United States)

    Lee, D. H.; Sudharsanan, Subramania I.

    1994-05-01

    In this paper we describe the design of a high-performance JPEG (Joint Photographic Experts Group) Micro Channel adapter card. The card, tested on a range of PS/2 platforms (models 50 to 95), can complete JPEG operations on a 640 by 240 pixel image within 1/60 of a second, thus enabling real-time capture and display of high-quality digital video. The card accepts digital pixels from either a YUV 4:2:2 or an RGB 4:4:4 pixel bus and has been shown to handle up to 2.05 MBytes/second of compressed data. The compressed data is transmitted to a host memory area by Direct Memory Access operations. The card uses a single C-Cube CL550 JPEG processor that complies with baseline JPEG. We give broad descriptions of the hardware that controls the video interface, the CL550, and the system interface. Some critical design points that enhance the overall performance of M/JPEG systems are pointed out. Control of the adapter card is achieved by interrupt-driven software that runs under DOS. The software performs a variety of tasks, including change of color space (RGB or YUV), change of quantization and Huffman tables, odd and even field control, and some diagnostic operations.

  14. INCREASE OF STABILITY AT JPEG COMPRESSION OF THE DIGITAL WATERMARKS EMBEDDED IN STILL IMAGES

    Directory of Open Access Journals (Sweden)

    V. A. Batura

    2015-07-01

    Full Text Available Subject of Research. The paper deals with the creation and study of a method for increasing the robustness to JPEG compression of digital watermarks embedded in still images. Method. A new digital watermarking algorithm for still images is presented, which embeds the digital watermark into a still image by modifying the frequency coefficients of the discrete Hadamard transform. The frequency coefficients used for embedding the digital watermark are chosen on the basis of the sharp change of their values after modification at maximum JPEG compression. The pixel blocks used for embedding are chosen on the basis of their entropy. The new algorithm was analyzed for resistance to image compression, noising, filtering, resizing, color change and histogram equalization. The Elham algorithm, which possesses good resistance to JPEG compression, was chosen for comparative analysis. Nine gray-scale images were selected as objects for protection. Imperceptibility of the embedded distortions was defined on the basis of the peak signal-to-noise ratio, which should be no lower than 43 dB for the introduced distortions to remain imperceptible. Robustness of the embedded watermark was determined by the Pearson correlation coefficient, whose value should not fall below 0.5 for the minimum allowable robustness. The computational experiment comprises: embedding a watermark into each test image with the new algorithm and with the Elham algorithm; introducing distortions into the protected object; and extracting the embedded information and comparing it with the original. Parameters of the algorithms were chosen so as to provide approximately the same level of distortion introduced into the images. Main Results. The method of preliminary processing of the digital watermark presented in the paper makes it possible to significantly reduce the volume of information embedded in the still image. The results of the numerical experiment have shown that the

  15. JPEG XS call for proposals subjective evaluations

    Science.gov (United States)

    McNally, David; Bruylants, Tim; Willème, Alexandre; Ebrahimi, Touradj; Schelkens, Peter; Macq, Benoit

    2017-09-01

    In March 2016 the Joint Photographic Experts Group (JPEG), formally known as ISO/IEC SC29 WG1, issued a call for proposals soliciting compression technologies for a low-latency, lightweight and visually transparent video compression scheme. Within the JPEG family of standards, this scheme was denominated JPEG XS. The subjective evaluation of visually lossless compressed video sequences at high resolutions and bit depths poses particular challenges. This paper describes the adopted procedures, the subjective evaluation setup, the evaluation process and summarizes the obtained results which were achieved in the context of the JPEG XS standardization process.

  16. An evaluation of the effect of JPEG, JPEG2000, and H.264/AVC on CQR codes decoding process

    Science.gov (United States)

    Vizcarra Melgar, Max E.; Farias, Mylène C. Q.; Zaghetto, Alexandre

    2015-02-01

    This paper presents a binary matrix code based on the QR Code (Quick Response Code), denoted CQR Code (Colored Quick Response Code), and evaluates the effect of JPEG, JPEG2000 and H.264/AVC compression on the decoding process. The proposed CQR Code has three additional colors (red, green and blue), which enables twice as much storage capacity when compared to the traditional black and white QR Code. Using the Reed-Solomon error-correcting code, the CQR Code model has a theoretical correction capability of 38.41%. The goal of this paper is to evaluate the effect that degradations inserted by common image compression algorithms have on the decoding process. Results show that a successful decoding process can be achieved for compression rates up to 0.3877 bits/pixel, 0.1093 bits/pixel and 0.3808 bits/pixel for the JPEG, JPEG2000 and H.264/AVC formats, respectively. The algorithm that presents the best performance is H.264/AVC, followed by JPEG2000 and JPEG.

  17. Effects of JPEG data compression on magnetic resonance imaging evaluation of small vessels ischemic lesions of the brain; Efeitos da compressao de dados JPEG na avaliacao de lesoes vasculares cerebrais isquemicas de pequenos vasos em ressonancia magnetica

    Energy Technology Data Exchange (ETDEWEB)

    Kuriki, Paulo Eduardo de Aguiar; Abdala, Nitamar; Nogueira, Roberto Gomes; Carrete Junior, Henrique; Szejnfeld, Jacob [Universidade Federal de Sao Paulo (UNIFESP/EPM), SP (Brazil). Dept. de Diagnostico por Imagem]. E-mail: paulokuriki@gmail.com

    2006-01-15

    Objective: to establish the maximum achievable JPEG compression ratio that does not affect quantitative and qualitative magnetic resonance imaging analysis of ischemic lesions in small vessels of the brain. Material and method: fifteen DICOM images were converted to JPEG with compression ratios of 1:10 to 1:60 and were assessed together with the original images by three neuroradiologists. The number, morphology and signal intensity of the lesions were analyzed. Results: lesions were properly identified up to a 1:30 ratio. More lesions were identified at a 1:10 ratio than in the original images. Morphology and edges were properly evaluated up to a 1:40 ratio. Compression did not affect signal intensity. Conclusion: small lesions (< 2 mm) were identified, but at all compression ratios the JPEG algorithm generated image noise that misled observers into identifying more lesions in JPEG images than in DICOM images, thus generating false-positive results. (author)

  18. Clinical evaluation of the JPEG2000 compression rate of CT and MR images for long term archiving in PACS

    International Nuclear Information System (INIS)

    Cha, Soon Joo; Kim, Sung Hwan; Kim, Yong Hoon

    2006-01-01

    We wanted to evaluate an acceptable compression rate of JPEG2000 for long-term archiving of CT and MR images in PACS. Nine CT images and 9 MR images that had small or minimal lesions were randomly selected from the PACS at our institute. All the images were compressed at rates of 5:1, 10:1, 20:1, 40:1 and 80:1 by the JPEG2000 compression protocol. Pairs of original and compressed images were compared by 9 radiologists who worked independently. We designed a JPEG2000 viewing program for comparing two images on one monitor system, for easy and quick evaluation. All the observers performed the comparison study twice, on 5-megapixel grey-scale LCD monitors and 2-megapixel color LCD monitors, respectively. The PSNR (Peak Signal to Noise Ratio) values were calculated for quantitative comparison. On MR and CT, all 5:1 compressed images showed no difference from the original images for all 9 observers, and only one observer could detect an image difference in one CT image at 10:1 compression, and only on the 5-megapixel monitor. At the 20:1 compression rate, clinically significant image deterioration was found in 50% of the images in the 5-megapixel monitor study and in 30% of the images on the 2-megapixel monitor. PSNR values larger than 44 dB were calculated for all the compressed images. The clinically acceptable image compression rate for long-term archiving with the JPEG2000 compression protocol is 10:1 for MR and CT, and if this is applied to PACS, it would reduce the cost and burden on the system.

  19. Steganalysis based on JPEG compatibility

    Science.gov (United States)

    Fridrich, Jessica; Goljan, Miroslav; Du, Rui

    2001-11-01

    In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark, enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression for a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend avoiding the use of images that were originally stored in the JPEG format as cover images for spatial-domain steganography.
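
    A simplified form of the block-compatibility test described above can be sketched as follows: forward-DCT an 8x8 pixel block, quantize and dequantize it with the candidate quantization matrix, invert the DCT, and check whether the rounded result reproduces the block. The tolerance handling and the level shift are my simplifications; the paper's actual procedure treats rounding and clipping effects more carefully.

    ```python
    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(block):
        return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

    def idct2(coeffs):
        return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

    def block_is_jpeg_compatible(block: np.ndarray, q_table: np.ndarray) -> bool:
        """Could this 8x8 pixel block have come from JPEG decompression with q_table?
        block: 8x8 array of 0..255 pixel values; q_table: 8x8 quantization matrix."""
        shifted = block.astype(np.float64) - 128.0          # baseline JPEG level shift
        quantized = np.round(dct2(shifted) / q_table)       # what an encoder would store
        rebuilt = idct2(quantized * q_table) + 128.0        # what a decoder would output
        # An unmodified decompressed block is recovered up to rounding error;
        # a larger deviation flags the block as incompatible (i.e., modified).
        return bool(np.max(np.abs(np.round(rebuilt) - block)) <= 1)
    ```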

  20. Comparing subjective and objective quality assessment of HDR images compressed with JPEG-XT

    DEFF Research Database (Denmark)

    Mantel, Claire; Ferchiu, Stefan Catalin; Forchhammer, Søren

    2014-01-01

    In this paper a subjective test in which participants evaluate the quality of JPEG-XT compressed HDR images is presented. Results show that for the selected test images and display, the subjective quality reached its saturation point starting around 3 bpp. Objective evaluations are obtained...

  1. Dynamic code block size for JPEG 2000

    Science.gov (United States)

    Tsai, Ping-Sing; LeCornec, Yann

    2008-02-01

    Since the standardization of the JPEG 2000, it has found its way into many different applications such as DICOM (digital imaging and communication in medicine), satellite photography, military surveillance, digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high quality real-time compression possible even in video mode, i.e. motion JPEG 2000. In this paper, we present a study of the compression impact using dynamic code block size instead of fixed code block size as specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.

  2. Predicting the fidelity of JPEG2000 compressed CT images using DICOM header information

    International Nuclear Information System (INIS)

    Kim, Kil Joong; Kim, Bohyoung; Lee, Hyunna; Choi, Hosik; Jeon, Jong-June; Ahn, Jeong-Hwan; Lee, Kyoung Ho

    2011-01-01

    Purpose: To propose multiple logistic regression (MLR) and artificial neural network (ANN) models constructed using digital imaging and communications in medicine (DICOM) header information in predicting the fidelity of Joint Photographic Experts Group (JPEG) 2000 compressed abdomen computed tomography (CT) images. Methods: Our institutional review board approved this study and waived informed patient consent. Using a JPEG2000 algorithm, 360 abdomen CT images were compressed reversibly (n = 48, as negative control) or irreversibly (n = 312) to one of different compression ratios (CRs) ranging from 4:1 to 10:1. Five radiologists independently determined whether the original and compressed images were distinguishable or indistinguishable. The 312 irreversibly compressed images were divided randomly into training (n = 156) and testing (n = 156) sets. The MLR and ANN models were constructed regarding the DICOM header information as independent variables and the pooled radiologists' responses as dependent variable. As independent variables, we selected the CR (DICOM tag number: 0028, 2112), effective tube current-time product (0018, 9332), section thickness (0018, 0050), and field of view (0018, 0090) among the DICOM tags. Using the training set, an optimal subset of independent variables was determined by backward stepwise selection in a four-fold cross-validation scheme. The MLR and ANN models were constructed with the determined independent variables using the training set. The models were then evaluated on the testing set by using receiver-operating-characteristic (ROC) analysis regarding the radiologists' pooled responses as the reference standard and by measuring Spearman rank correlation between the model prediction and the number of radiologists who rated the two images as distinguishable. Results: The CR and section thickness were determined as the optimal independent variables. The areas under the ROC curve for the MLR and ANN predictions were 0.91 (95% CI; 0
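
    The header-driven prediction described above can be illustrated with a hedged sketch: pulling the two predictors the study found optimal (compression ratio and section thickness) from the DICOM header with pydicom and feeding them to an off-the-shelf logistic regression. The variable names and the scikit-learn model are my assumptions, not the authors' implementation; the commented lines show intended usage only.

    ```python
    import pydicom
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def header_features(path: str) -> list:
        """Read the DICOM header only and return [compression ratio, section thickness]."""
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        cr = float(ds[0x0028, 0x2112].value)         # Lossy Image Compression Ratio
        thickness = float(ds[0x0018, 0x0050].value)  # Slice Thickness
        return [cr, thickness]

    # Hypothetical usage on a labeled training set (paths and labels not shown here):
    # X = np.array([header_features(p) for p in training_paths])
    # y = np.array(pooled_reader_labels)  # 1 = rated distinguishable, 0 = indistinguishable
    # model = LogisticRegression().fit(X, y)
    # probability_distinguishable = model.predict_proba(X_test)[:, 1]
    ```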

  3. Reevaluation of JPEG image compression to digitalized gastrointestinal endoscopic color images: a pilot study

    Science.gov (United States)

    Kim, Christopher Y.

    1999-05-01

    Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. For that study, the results indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31- and 99-fold smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye towards application on the WWW, a medium that would serve both the clinical and educational purposes of color medical images. The results indicate that endoscopists are able to tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered a maximal tolerable image size for downloading on the WWW.

  4. A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images

    Science.gov (United States)

    Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo

    2007-03-01

    Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages compared to other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding and an embedded bit-stream. However, it is still difficult to find an objective method to evaluate the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluating image quality by using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We have developed a prototype CAD system to classify these images into benign and malignant ones, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios from lossless to lossy, and used the CAD system to classify the cases at each compression ratio, then compared the ROC curves from the CAD classification results. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of the input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases, with small fluctuations, as the compression ratio increases.

  5. New quantization matrices for JPEG steganography

    Science.gov (United States)

    Yildiz, Yesna O.; Panetta, Karen; Agaian, Sos

    2007-04-01

    Modern steganography is the secure communication of information by embedding a secret message within a "cover" digital multimedia file without any perceptual distortion of the cover media, so that the presence of the hidden message is indiscernible. Recently, the Joint Photographic Experts Group (JPEG) format has attracted the attention of researchers as the main steganographic format for the following reasons: it is the most common format for storing images, JPEG images are very abundant on Internet bulletin boards and public Internet sites, and they are almost solely used for storing natural images. Well-known JPEG steganographic algorithms such as F5 and Model-Based Steganography provide high message capacity with reasonable security. In this paper, we present a method to increase security when using JPEG images as the cover medium. The key element of the method is a new parametric key-dependent quantization matrix. This new quantization table has practically the same performance as the JPEG table as far as compression ratio and image statistics are concerned. The resulting image is indiscernible from an image that was created using the JPEG compression algorithm. This paper presents the key-dependent quantization table algorithm and then analyzes the new table's performance.
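
    A naive illustration of the key-dependent quantization idea (my own sketch, not the authors' construction) is to perturb the standard JPEG luminance table with a small keyed pseudo-random pattern, so that compression behavior stays close to ordinary JPEG while the exact table depends on the key.

    ```python
    import numpy as np

    # Standard JPEG luminance quantization table (Annex K of the JPEG standard).
    STD_LUMA_Q = np.array([
        [16, 11, 10, 16, 24, 40, 51, 61],
        [12, 12, 14, 19, 26, 58, 60, 55],
        [14, 13, 16, 24, 40, 57, 69, 56],
        [14, 17, 22, 29, 51, 87, 80, 62],
        [18, 22, 37, 56, 68, 109, 103, 77],
        [24, 35, 55, 64, 81, 104, 113, 92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103, 99],
    ], dtype=np.int32)

    def keyed_quantization_table(key: int, spread: float = 0.05) -> np.ndarray:
        """Illustrative key-dependent table: a small keyed perturbation of the
        standard table, clipped to the valid 1..255 range for 8-bit precision."""
        rng = np.random.default_rng(key)
        jitter = 1.0 + rng.uniform(-spread, spread, size=(8, 8))
        return np.clip(np.round(STD_LUMA_Q * jitter), 1, 255).astype(np.int32)
    ```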

  6. JPEG2000 COMPRESSION CODING USING HUMAN VISUAL SYSTEM MODEL

    Institute of Scientific and Technical Information of China (English)

    Xiao Jiang; Wu Chengke

    2005-01-01

    In order to apply a Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new visual optimization scheme is introduced that modifies the slope of the rate-distortion curve. The novelty is that visual weighting is not applied by lifting the coefficients in the wavelet domain, but is instead achieved through code-stream organization. The scheme retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution progressiveness, good robustness to error bit spreading, and compatibility with lossless compression. Performing better than other methods, it keeps the shortest standard codestream and decompression time and supports Visual Progressive (VIP) coding.

  7. Effect of JPEG2000 mammogram compression on microcalcifications segmentation

    International Nuclear Information System (INIS)

    Georgiev, V.; Arikidis, N.; Karahaliou, A.; Skiadopoulos, S.; Costaridou, L.

    2012-01-01

    The purpose of this study is to investigate the effect of mammographic image compression on the automated segmentation of individual microcalcifications. The dataset consisted of individual microcalcifications of 105 clusters originating from mammograms of the Digital Database for Screening Mammography. A JPEG2000 wavelet-based compression algorithm was used for compressing mammograms at 7 compression ratios (CRs): 10:1, 20:1, 30:1, 40:1, 50:1, 70:1 and 100:1. A gradient-based active contours segmentation algorithm was employed for segmentation of microcalcifications as depicted on original and compressed mammograms. The performance of the microcalcification segmentation algorithm on original and compressed mammograms was evaluated by means of the area overlap measure (AOM) and distance differentiation metrics (d_mean and d_max) by comparing automatically derived microcalcification borders to manually defined ones by an expert radiologist. The AOM monotonically decreased as CR increased, while the d_mean and d_max metrics monotonically increased with CR increase. The performance of the segmentation algorithm on original mammograms was (mean±standard deviation): AOM = 0.91±0.08, d_mean = 0.06±0.05 and d_max = 0.45±0.20, while on 40:1 compressed images the algorithm's performance was: AOM = 0.69±0.15, d_mean = 0.23±0.13 and d_max = 0.92±0.39. Mammographic image compression deteriorates the performance of the segmentation algorithm, influencing the quantification of individual microcalcification morphological properties and subsequently affecting computer aided diagnosis of microcalcification clusters. (authors)
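
    The area overlap measure reported above is commonly defined as the intersection of the automatically and manually segmented regions divided by their union. A minimal NumPy sketch, assuming the two segmentations are given as boolean masks, is shown below; it is not the study's code.

    ```python
    import numpy as np

    def area_overlap_measure(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
        """AOM = |A ∩ M| / |A ∪ M| for two boolean segmentation masks of equal shape."""
        a = auto_mask.astype(bool)
        m = manual_mask.astype(bool)
        union = np.logical_or(a, m).sum()
        if union == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return float(np.logical_and(a, m).sum() / union)
    ```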

  8. The Modified Frequency Algorithm of Digital Watermarking of Still Images Resistant to JPEG Compression

    Directory of Open Access Journals (Sweden)

    V. A. Batura

    2015-01-01

    Full Text Available Digital watermarking is an effective means of copyright protection for multimedia products (in particular, still images). Digital watermarking is the process of embedding into the protected object a digital watermark that is invisible to the human eye. However, there is a rather large number of harmful influences capable of destroying a watermark embedded in a still image. The most widespread attack is JPEG compression, owing to the efficiency of this compression format and its prevalence on the Internet. The new algorithm presented in this article is a modification of the Elham algorithm. This digital watermarking algorithm for still images embeds the watermark into the frequency coefficients of the discrete Hadamard transform of selected image blocks. The image blocks used for embedding the digital watermark are chosen on the basis of a set threshold on the entropy of their pixels. The low-frequency coefficients used for embedding are chosen by comparing the values of the discrete cosine transform coefficients with a predetermined threshold that depends on the product of the embedded watermark coefficient and a change coefficient. The resistance of the new algorithm to JPEG compression, noising, filtering, color change, resizing and histogram equalization is analyzed in detail. The study compares the watermark extracted from the damaged image with the embedded logo. The ability of the algorithm to embed a watermark with a minimum level of image distortion is also analyzed. It is established that the new algorithm, in comparison with the original Elham algorithm, shows full resistance to JPEG compression, as well as improved resistance to noising, brightness change and histogram equalization. The developed algorithm can be used for copyright protection of static images. Further studies will be used to study the

  9. Block selective redaction for minimizing loss during de-identification of burned in text in irreversibly compressed JPEG medical images.

    Science.gov (United States)

    Clunie, David A; Gebow, Dan

    2015-01-01

    Deidentification of medical images requires attention to both header information as well as the pixel data itself, in which burned-in text may be present. If the pixel data to be deidentified is stored in a compressed form, traditionally it is decompressed, identifying text is redacted, and if necessary, pixel data are recompressed. Decompression without recompression may result in images of excessive or intractable size. Recompression with an irreversible scheme is undesirable because it may cause additional loss in the diagnostically relevant regions of the images. The irreversible (lossy) JPEG compression scheme works on small blocks of the image independently, hence, redaction can selectively be confined only to those blocks containing identifying text, leaving all other blocks unchanged. An open source implementation of selective redaction and a demonstration of its applicability to multiframe color ultrasound images is described. The process can be applied either to standalone JPEG images or JPEG bit streams encapsulated in other formats, which in the case of medical images, is usually DICOM.
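
    Selective redaction relies on the fact that baseline JPEG codes the image in independent 8x8 blocks (larger MCUs when chroma is subsampled), so a burned-in text rectangle can be mapped to the set of block indices that must be re-coded while every other block is copied verbatim. A hedged sketch of that mapping, with the block size fixed at 8 and chroma subsampling ignored, is shown below.

    ```python
    def blocks_covering_region(x0: int, y0: int, x1: int, y1: int, block: int = 8):
        """Return (row, col) indices of the 8x8 JPEG blocks intersecting the pixel
        rectangle [x0, x1) x [y0, y1); only these blocks need to be redacted."""
        first_col, last_col = x0 // block, (x1 - 1) // block
        first_row, last_row = y0 // block, (y1 - 1) // block
        return [(r, c)
                for r in range(first_row, last_row + 1)
                for c in range(first_col, last_col + 1)]

    # Example: a text banner spanning pixels x in [0, 320) and y in [0, 32)
    # touches only block rows 0..3 and block columns 0..39.
    # redact = blocks_covering_region(0, 0, 320, 32)
    ```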

  10. Extending JPEG-LS for low-complexity scalable video coding

    DEFF Research Database (Denmark)

    Ukhanova, Anna; Sergeev, Anton; Forchhammer, Søren

    2011-01-01

    JPEG-LS, the well-known international standard for lossless and near-lossless image compression, was originally designed for non-scalable applications. In this paper we propose a scalable modification of JPEG-LS and compare it with the leading image and video coding standards JPEG2000 and H.264/SVC...

  11. JPEG2000 and dissemination of cultural heritage over the Internet.

    Science.gov (United States)

    Politou, Eugenia A; Pavlidis, George P; Chamzas, Christodoulos

    2004-03-01

    By applying the latest technologies in image compression for managing the storage of massive image data within cultural heritage databases, and by exploiting the universality of the Internet, we are now able not only to effectively digitize, record and preserve, but also to promote the dissemination of cultural heritage. In this work we present an application of the latest image compression standard, JPEG2000, to managing and browsing image databases, focusing on the image transmission aspect rather than on database management and indexing. We combine JPEG2000 image compression with client-server socket connections and a client browser plug-in, to provide an all-in-one package for remote browsing of JPEG2000-compressed image databases, suitable for the effective dissemination of cultural heritage.

  12. Image Quality Assessment of JPEG Compressed Mars Science Laboratory Mastcam Images using Convolutional Neural Networks

    Science.gov (United States)

    Kerner, H. R.; Bell, J. F., III; Ben Amor, H.

    2017-12-01

    The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
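
    The automatic labeling step uses the joint entropy between a compressed and an uncompressed image. For 8-bit images it can be computed from the 256x256 joint histogram; the sketch below makes that assumption (bin count and normalization are mine, not necessarily the authors' exact choice).

    ```python
    import numpy as np

    def joint_entropy(img_a: np.ndarray, img_b: np.ndarray, bins: int = 256) -> float:
        """Joint entropy H(A, B) in bits for two equally sized 8-bit images."""
        hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                    bins=bins, range=[[0, 256], [0, 256]])
        p = hist / hist.sum()
        p = p[p > 0]                      # ignore empty bins (0 * log 0 = 0)
        return float(-np.sum(p * np.log2(p)))
    ```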

  13. Performance of the JPEG Estimated Spectrum Adaptive Postfilter (JPEG-ESAP) for Low Bit Rates

    Science.gov (United States)

    Linares, Irving (Inventor)

    2016-01-01

    Frequency-based, pixel-adaptive filtering using the JPEG-ESAP algorithm for low bit rate JPEG formatted color images may allow for more compressed images while maintaining equivalent quality at a smaller file size or bitrate. For RGB, an image is decomposed into three color bands--red, green, and blue. The JPEG-ESAP algorithm is then applied to each band (e.g., once for red, once for green, and once for blue) and the output of each application of the algorithm is rebuilt as a single color image. The ESAP algorithm may be repeatedly applied to MPEG-2 video frames to reduce their bit rate by a factor of 2 or 3, while maintaining equivalent video quality, both perceptually, and objectively, as recorded in the computed PSNR values.

  14. Interband coding extension of the new lossless JPEG standard

    Science.gov (United States)

    Memon, Nasir D.; Wu, Xiaolin; Sippy, V.; Miller, G.

    1997-01-01

    Due to the perceived inadequacy of current standards for lossless image compression, the JPEG committee of the International Standards Organization (ISO) has been developing a new standard. A baseline algorithm, called JPEG-LS, has already been completed and is awaiting approval by national bodies. The JPEG-LS baseline algorithm, despite being simple, is surprisingly efficient, and provides compression performance that is within a few percent of the best and more sophisticated techniques reported in the literature. Extensive experimentation performed by the authors seems to indicate that an overall improvement of more than 10 percent in compression performance will be difficult to obtain even at the cost of great complexity, at least not with traditional approaches to lossless image compression. However, if we allow inter-band decorrelation and modeling in the baseline algorithm, nearly 30 percent improvement in compression gains for specific images in the test set becomes possible at a modest computational cost. In this paper we propose and investigate a few techniques for exploiting inter-band correlations in multi-band images. These techniques have been designed within the framework of the baseline algorithm, and require minimal changes to its basic architecture, retaining its essential simplicity.
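
    The inter-band decorrelation idea can be illustrated by predicting one band from an already coded band and keeping only the residual, which is typically far more compressible than the raw band. The sketch below uses a trivial co-located difference predictor purely for illustration; it is not one of the paper's proposed techniques.

    ```python
    import numpy as np

    def interband_residual(reference_band: np.ndarray, target_band: np.ndarray) -> np.ndarray:
        """Residual of a trivial inter-band predictor: target minus co-located reference."""
        return target_band.astype(np.int16) - reference_band.astype(np.int16)

    def reconstruct_band(reference_band: np.ndarray, residual: np.ndarray) -> np.ndarray:
        """Lossless reconstruction of the target band from the reference band and residual."""
        return (reference_band.astype(np.int16) + residual).astype(np.uint8)
    ```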

  15. Irreversible JPEG 2000 compression of abdominal CT for primary interpretation: assessment of visually lossless threshold

    International Nuclear Information System (INIS)

    Lee, Kyoung Ho; Kim, Young Hoon; Kim, Bo Hyoung; Kim, Kil Joong; Kim, Tae Jung; Kim, Hyuk Jung; Hahn, Seokyung

    2007-01-01

    To estimate the visually lossless threshold for Joint Photographic Experts Group (JPEG) 2000 compression of contrast-enhanced abdominal computed tomography (CT) images, 100 images were compressed to four different levels: a reversible (as negative control) and irreversible 5:1, 10:1, and 15:1. By alternately displaying the original and the compressed image on the same monitor, six radiologists independently determined if the compressed image was distinguishable from the original image. For each reader, we compared the proportion of the compressed images being rated distinguishable from the original images between the reversible compression and each of the three irreversible compressions using the exact test for paired proportions. For each reader, the proportion was not significantly different between the reversible (0-1%, 0/100 to 1/100) and irreversible 5:1 compression (0-3%). However, the proportion significantly increased with the irreversible 10:1 (95-99%) and 15:1 compressions (100%) versus reversible compression in all readers (P < 0.001); 100 and 95% of the 5:1 compressed images were rated indistinguishable from the original images by at least five of the six readers and all readers, respectively. Irreversibly 5:1 compressed abdominal CT images are visually lossless and, therefore, potentially acceptable for primary interpretation. (orig.)

  16. Bit Plane Coding based Steganography Technique for JPEG2000 Images and Videos

    Directory of Open Access Journals (Sweden)

    Geeta Kasana

    2016-02-01

    Full Text Available In this paper, a Bit Plane Coding (BPC) based steganography technique for JPEG2000 images and Motion JPEG2000 video is proposed. Embedding in this technique is performed in the lowest significant bit planes of the wavelet coefficients of a cover image. In the JPEG2000 standard, the number of bit planes of wavelet coefficients used in encoding depends on the compression rate and is used in the Tier-2 process of JPEG2000. In the proposed technique, the Tier-1 and Tier-2 processes of JPEG2000 and Motion JPEG2000 are executed twice on the encoder side to collect information about the lowest bit planes of all code blocks of a cover image, which is utilized in embedding and transmitted to the decoder. After embedding the secret data, the Optimal Pixel Adjustment Process (OPAP) is applied to the stego images to enhance their visual quality. Experimental results show that the proposed technique provides larger embedding capacity and better visual quality of stego images than existing steganography techniques for JPEG2000 compressed images and videos. The extracted secret image is similar to the original secret image.

  17. A threshold-based fixed predictor for JPEG-LS image compression

    Science.gov (United States)

    Deng, Lihua; Huang, Zhenghua; Yao, Shoukui

    2018-03-01

    In JPEG-LS, the fixed predictor based on the median edge detector (MED) detects only horizontal and vertical edges, and thus produces large prediction errors in the vicinity of diagonal edges. In this paper, we propose a threshold-based edge detection scheme for the fixed predictor. The proposed scheme can detect not only horizontal and vertical edges, but also diagonal edges. For certain thresholds, the proposed scheme simplifies to other existing schemes, so it can also be regarded as an integration of these schemes. For a suitable threshold, the accuracy of horizontal and vertical edge detection is higher than that of the existing median edge detection in JPEG-LS. Thus, the proposed fixed predictor outperforms the existing JPEG-LS predictors for all images tested, while the complexity of the overall algorithm is maintained at a similar level.
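
    For reference, the fixed MED predictor that the proposed threshold-based scheme generalizes is the standard JPEG-LS median edge detector over the causal neighbors a (left), b (above) and c (above-left); the threshold-based variant itself is not reproduced here.

    ```python
    def med_predictor(a: int, b: int, c: int) -> int:
        """Median edge detector used as the fixed predictor in baseline JPEG-LS.
        a = left neighbor, b = above neighbor, c = above-left neighbor."""
        if c >= max(a, b):
            return min(a, b)   # edge detected above or to the left
        if c <= min(a, b):
            return max(a, b)
        return a + b - c       # smooth region: planar prediction
    ```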

  18. Lossless Compression of Broadcast Video

    DEFF Research Database (Denmark)

    Martins, Bo; Eriksen, N.; Faber, E.

    1998-01-01

    We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one ... cannot be expected to code losslessly at a rate of 125 Mbit/s. We investigate the rate and quality effects of quantization using standard JPEG-LS quantization and two new techniques: visual quantization and trellis quantization. Visual quantization is not part of baseline JPEG-LS, but is applicable ... in the framework of JPEG-LS. Visual tests show that this quantization technique gives much better quality than standard JPEG-LS quantization. Trellis quantization is a process by which the original image is altered in such a way as to make lossless JPEG-LS encoding more effective. For JPEG-LS and visual...

  19. DEVELOPING AN IMAGE PROCESSING APPLICATION THAT SUPPORTS NEW FEATURES OF JPEG2000 STANDARD

    Directory of Open Access Journals (Sweden)

    Evgin GÖÇERİ

    2007-03-01

    Full Text Available In recent years, developing multimedia technologies have increased the importance of image processing and compression. Images that are reduced in size using lossless and lossy compression techniques, without degrading the quality of the image to an unacceptable level, take up much less space in memory. This enables them to be sent and received over the Internet or on mobile devices in a much shorter time. The wavelet-based image compression standard JPEG2000 was created by the Joint Photographic Experts Group (JPEG) committee to supersede the former JPEG standard. Work on various additions to this standard is still in progress. In this study, an application has been developed in Visual C# 2005 that implements important image processing techniques such as edge detection and noise reduction. An important feature of this application is that it supports the JPEG2000 standard as well as other image types, and the implementation applies not only to two-dimensional images but also to multi-dimensional images. Modern software development platforms that support image processing have also been compared, and several features of the developed software are identified.

  20. JPEG XS, a new standard for visually lossless low-latency lightweight image compression

    Science.gov (United States)

    Descampe, Antonin; Keinert, Joachim; Richter, Thomas; Fößel, Siegfried; Rouvroy, Gaël.

    2017-09-01

    JPEG XS is an upcoming standard from the JPEG Committee (formally known as ISO/IEC SC29 WG1). It aims to provide an interoperable visually lossless low-latency lightweight codec for a wide range of applications including mezzanine compression in broadcast and Pro-AV markets. This requires optimal support of a wide range of implementation technologies such as FPGAs, CPUs and GPUs. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. In addition to the evaluation of the visual transparency of the selected technologies, a detailed analysis of the hardware and software complexity as well as the latency has been done to make sure that the new codec meets the requirements of the above-mentioned use cases. In particular, the end-to-end latency has been constrained to a maximum of 32 lines. Concerning the hardware complexity, neither encoder nor decoder should require more than 50% of an FPGA similar to Xilinx Artix 7 or 25% of an FPGA similar to Altera Cyclone 5. This process resulted in a coding scheme made of an optional color transform, a wavelet transform, the entropy coding of the highest magnitude level of groups of coefficients, and the raw inclusion of the truncated wavelet coefficients. This paper presents the details and status of the standardization process, a technical description of the future standard, and the latest performance evaluation results.

  1. High Bit-Depth Medical Image Compression With HEVC.

    Science.gov (United States)

    Parikh, Saurin S; Ruiz, Damian; Kalva, Hari; Fernandez-Escribano, Gerardo; Adzic, Velibor

    2018-03-01

    Efficient storing and retrieval of medical images has direct impact on reducing costs and improving access in cloud-based health care services. JPEG 2000 is currently the commonly used compression format for medical images shared using the DICOM standard. However, new formats such as high efficiency video coding (HEVC) can provide better compression efficiency compared to JPEG 2000. Furthermore, JPEG 2000 is not suitable for efficiently storing image series and 3-D imagery. Using HEVC, a single format can support all forms of medical images. This paper presents the use of HEVC for diagnostically acceptable medical image compression, focusing on compression efficiency compared to JPEG 2000. Diagnostically acceptable lossy compression and complexity of high bit-depth medical image compression are studied. Based on an established medically acceptable compression range for JPEG 2000, this paper establishes acceptable HEVC compression range for medical imaging applications. Experimental results show that using HEVC can increase the compression performance, compared to JPEG 2000, by over 54%. Along with this, a new method for reducing computational complexity of HEVC encoding for medical images is proposed. Results show that HEVC intra encoding complexity can be reduced by over 55% with negligible increase in file size.

  2. FPGA-based implementation for steganalysis: a JPEG-compatibility algorithm

    Science.gov (United States)

    Gutierrez-Fernandez, E.; Portela-García, M.; Lopez-Ongil, C.; Garcia-Valderas, M.

    2013-05-01

    Steganalysis is a process to detect hidden data in cover documents, such as digital images, videos, and audio files. It is the inverse process of steganography, which is the method used to hide secret messages. The wide use of computers and network technologies makes digital files very convenient means for storing secret data or transmitting secret messages through the Internet. Depending on the cover medium used to embed the data, there are different steganalysis methods. In the case of images, many of the steganalysis and steganographic methods focus on the JPEG image format, since JPEG is one of the most common formats. One of the main handicaps of steganalysis methods is processing speed, since it is usually necessary to process huge amounts of data, possibly the ongoing Internet traffic in real time. In this paper, a JPEG steganalysis system is implemented in an FPGA in order to speed up the detection process with respect to software-based implementations and to increase the throughput. In particular, the implemented method is the JPEG-compatibility detection algorithm, which is based on the fact that when a JPEG image is modified, the resulting image is incompatible with the JPEG compression process.
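
    As an illustration of the detection principle (not of the FPGA design itself), the following Python sketch re-runs the JPEG round trip on a single 8x8 block and flags blocks that cannot be reproduced by compression with a given quantization table; the function names, the tolerance parameter and the orthonormal DCT helper are our own assumptions.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal type-II DCT matrix, used for both forward and inverse transforms.
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix()

def is_jpeg_compatible(block, qtable, tol=1):
    """Return True if an 8x8 pixel block (values 0..255) is consistent with
    having been decompressed from JPEG with the given quantization table.

    The test re-quantizes the block's DCT coefficients, decompresses again,
    and checks whether the round trip reproduces the block up to `tol`
    rounding slack. Blocks that fail hint at post-compression tampering.
    """
    shifted = block.astype(np.float64) - 128.0          # level shift as in JPEG
    coeffs = C @ shifted @ C.T                          # forward 8x8 DCT
    quantized = np.round(coeffs / qtable)               # quantization
    rebuilt = C.T @ (quantized * qtable) @ C + 128.0    # dequantize + inverse DCT
    rebuilt = np.clip(np.round(rebuilt), 0, 255)
    return np.max(np.abs(rebuilt - block)) <= tol
```

    In a full detector this test is repeated over every block and every candidate quantization table, which is exactly the data-parallel workload that motivates a hardware implementation.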

  3. Generalised Category Attack—Improving Histogram-Based Attack on JPEG LSB Embedding

    Science.gov (United States)

    Lee, Kwangsoo; Westfeld, Andreas; Lee, Sangjin

    We present a generalised and improved version of the category attack on LSB steganography in JPEG images with a straddled embedding path. It detects low embedding rates more reliably and is less disturbed by double-compressed images. The proposed methods are evaluated on several thousand images. The results are compared to both recent blind and specific attacks for JPEG embedding. The proposed attack permits a more reliable detection, although it is based on first-order statistics only. Its simple structure makes it very fast.

  4. Evaluation of the robustness of the preprocessing technique improving reversible compressibility of CT images: Tested on various CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Chang Ho; Kim, Bohyoung; Gu, Bon Seung; Lee, Jong Min [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Kim, Kil Joong [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea and Department of Radiation Applied Life Science, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 110-799 (Korea, Republic of); Lee, Kyoung Ho [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea and Institute of Radiation Medicine, Seoul National University Medical Research Center, and Clinical Research Institute, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 110-744 (Korea, Republic of); Kim, Tae Ki [Medical Information Center, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of)

    2013-10-15

    Purpose: To modify the previously proposed preprocessing technique, which improves the compressibility of computed tomography (CT) images, to cover the diversity of three-dimensional configurations of different body parts, and to evaluate the robustness of the technique in terms of segmentation correctness and increase in reversible compression ratio (CR) for various CT examinations. Methods: This study had institutional review board approval with waiver of informed patient consent. A preprocessing technique was previously proposed to improve the compressibility of CT images by replacing pixel values outside the body region with a constant value, thereby maximizing data redundancy. Since the technique was developed with only chest CT images in mind, the authors modified the segmentation method to cover the diversity of three-dimensional configurations of different body parts. The modified version was evaluated as follows. In 368 randomly selected CT examinations (352 787 images), each image was preprocessed using the modified preprocessing technique. Radiologists visually confirmed whether the segmented region covered the body region or not. The images with and without preprocessing were reversibly compressed using Joint Photographic Experts Group (JPEG), JPEG2000 two-dimensional (2D), and JPEG2000 three-dimensional (3D) compression. The percentage increase in CR per examination (CR{sub I}) was measured. Results: The rate of correct segmentation was 100.0% (95% CI: 99.9%, 100.0%) for all the examinations. The medians of CR{sub I} were 26.1% (95% CI: 24.9%, 27.1%), 40.2% (38.5%, 41.1%), and 34.5% (32.7%, 36.2%) for JPEG, JPEG2000 2D, and JPEG2000 3D, respectively. Conclusions: In various CT examinations, the modified preprocessing technique can increase the CR by 25% or more without concern about degradation of diagnostic information.
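
    The core idea of the preprocessing step can be illustrated with a short Python sketch (not the authors' modified segmentation): a crude threshold-plus-largest-component mask stands in for the body segmentation, and the threshold and fill value are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def flatten_background(ct_slice, hu_threshold=-500, fill_value=-1000):
    """Replace pixels outside the body with a constant so lossless coders
    (JPEG, JPEG 2000, ...) see long uniform runs and compress better.

    `hu_threshold` and `fill_value` (air, in Hounsfield units) are
    illustrative choices, not the values used in the cited study.
    """
    body = ct_slice > hu_threshold                         # crude air/body split
    labels, n = ndimage.label(body)                        # connected components
    if n == 0:
        return ct_slice
    largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1
    mask = ndimage.binary_fill_holes(labels == largest)    # keep body, fill lungs
    out = ct_slice.copy()
    out[~mask] = fill_value
    return out
```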

  5. Low-complexity JPEG-based progressive video codec for wireless video transmission

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Forchhammer, Søren

    2010-01-01

    This paper discusses the question of video codec enhancement for wireless video transmission of high definition video data taking into account constraints on memory and complexity. Starting from parameter adjustment for JPEG2000 compression algorithm used for wireless transmission and achieving...

  6. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve an expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image at the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regressed models. If the IQ is specified by a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if the IQ is specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
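
    A minimal Python sketch of the three-step scenario, restricted to JPEG (via Pillow) and PSNR and assuming an 8-bit grayscale PIL image; the quality grid, the quadratic model and the function names are our own illustrative choices.

```python
import io
import numpy as np
from PIL import Image

def psnr(a, b):
    # Peak signal-to-noise ratio for 8-bit images.
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def fit_quality_model(img, qualities=(30, 50, 70, 85, 95)):
    """Steps 1-2 of the scenario: compress at several JPEG qualities,
    measure PSNR, and fit a quadratic regression PSNR = f(quality)."""
    scores = []
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        buf.seek(0)
        rec = np.asarray(Image.open(buf))
        scores.append(psnr(np.asarray(img), rec))
    return np.polyfit(qualities, scores, deg=2)        # model coefficients

def quality_for_target_psnr(model, target, lo=5, hi=95):
    """Step 3: pick the lowest quality whose predicted PSNR meets the
    target, i.e. the highest compression satisfying the IQ constraint."""
    for q in range(lo, hi + 1):
        if np.polyval(model, q) >= target:
            return q
    return hi
```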

  7. Optimal Image Data Compression For Whole Slide Images

    Directory of Open Access Journals (Sweden)

    J. Isola

    2016-06-01

    Differences in WSI file sizes of scanned images deemed “visually lossless” were significant. If we set the Hamamatsu Nanozoomer .NDPI file size (using its default “jpeg80” quality) as 100%, the size of a “visually lossless” JPEG2000 file was only 15-20% of that. Comparisons to Aperio and 3D-Histech files (.svs and .mrxs at their default settings) yielded similar results. A further optimization of JPEG2000 was done by treating empty slide area as a uniform white-grey surface, which could be maximally compressed. Using this algorithm, JPEG2000 file sizes were only half, or even less, of the original JPEG2000. Variation was due to the proportion of empty slide area on the scan. We anticipate that wavelet-based image compression methods, such as JPEG2000, have a significant advantage in saving storage costs of scanned whole slide images. In routine pathology laboratories applying WSI technology widely to their histology material, absolute cost savings can be substantial.

  8. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    Thomas André

    2007-03-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. Full compatibility with Motion JPEG2000, which is emerging as a serious candidate for the compression of high-definition video sequences, is ensured.

  10. A novel high-frequency encoding algorithm for image compression

    Science.gov (United States)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
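
    Steps (1) and (4) are easy to illustrate in isolation; the Python sketch below performs the blockwise DCT and the differential coding of the DC components, while the high-frequency minimization, the look-up table and the arithmetic coder are deliberately omitted. The helper names and the 8x8 block size follow common JPEG practice rather than the paper.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal type-II DCT matrix.
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def block_dct_and_dc_deltas(image, block=8):
    """Steps (1) and (4) only: blockwise DCT, then a differential (delta)
    operator over the DC components in raster order."""
    C = dct_matrix(block)
    h, w = image.shape
    dc, ac_blocks = [], []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            tile = image[y:y + block, x:x + block].astype(np.float64) - 128.0
            coeffs = C @ tile @ C.T
            dc.append(coeffs[0, 0])
            ac_blocks.append(coeffs.flatten()[1:])       # AC coefficients per block
    dc_deltas = np.diff(np.asarray(dc), prepend=0.0)     # differential DC stream
    return dc_deltas, ac_blocks
```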

  11. A new JPEG-based steganographic algorithm for mobile devices

    Science.gov (United States)

    Agaian, Sos S.; Cherukuri, Ravindranath C.; Schneider, Erik C.; White, Gregory B.

    2006-05-01

    Currently, cellular phones constitute a significant portion of the global telecommunications market. Modern cellular phones offer sophisticated features such as Internet access, on-board cameras, and expandable memory, which provide these devices with excellent multimedia capabilities. Because of the high volume of cellular traffic, as well as the ability of these devices to transmit nearly all forms of data, the need for an increased level of security in wireless communications is becoming a growing concern. Steganography could provide a solution to this important problem. In this article, we present a new algorithm for JPEG-compressed images which is applicable to mobile platforms. This algorithm embeds sensitive information into quantized discrete cosine transform coefficients obtained from the cover JPEG. These coefficients are rearranged based on certain statistical properties and the inherent processing and memory constraints of mobile devices. Based on the energy variation and block characteristics of the cover image, the sensitive data are hidden using the switching embedding technique proposed in this article. The proposed system offers high capacity while simultaneously withstanding visual and statistical attacks. Based on simulation results, the proposed method demonstrates an improved retention of first-order statistics compared to existing JPEG-based steganographic algorithms, while maintaining a capacity which is comparable to F5 for certain cover images.

  12. High Efficiency EBCOT with Parallel Coding Architecture for JPEG2000

    Directory of Open Access Journals (Sweden)

    Chiang Jen-Shiun

    2006-01-01

    Full Text Available This work presents a parallel context-modeling coding architecture and a matching arithmetic coder (MQ-coder) for the embedded block coding (EBCOT) unit of the JPEG2000 encoder. Tier-1 of the EBCOT consumes most of the computation time in a JPEG2000 encoding system. The proposed parallel architecture can increase the throughput rate of the context modeling. To match the high throughput rate of the parallel context-modeling architecture, an efficient pipelined architecture for the context-based adaptive arithmetic encoder is proposed. This encoder of JPEG2000 can work at 180 MHz, encoding one symbol each cycle. Compared with previous context-modeling architectures, our parallel architecture can improve the throughput rate by up to 25%.

  13. Lossless compression of multispectral images using spectral information

    Science.gov (United States)

    Ma, Long; Shi, Zelin; Tang, Xusheng

    2009-10-01

    Multispectral images are available for different purposes due to developments in spectral imaging systems. The sizes of multispectral images are enormous, so transmission and storage of these volumes of data require large amounts of time and memory. That is why compression algorithms must be developed. A salient property of multispectral images is that strong spectral correlation exists throughout almost all bands. This fact is successfully used to predict each band based on the previous bands. We propose to use spectral linear prediction and entropy coding with context modeling for encoding multispectral images. Linear prediction predicts the value of the next sample and computes the difference between the predicted value and the original value. This difference is usually small, so it can be encoded with fewer bits than the original value. The technique involves predicting each image band from a number of bands along the image spectrum. Each pixel is predicted using information provided by pixels in the previous bands at the same spatial position. As in JPEG-LS, the proposed coder represents the mapped residuals using an adaptive Golomb-Rice code with context modeling. This residual coding is context adaptive, where the context used for the current sample is identified by a context quantization function of three gradients. Then, context-dependent Golomb-Rice code and bias parameters are estimated sample by sample. The proposed scheme was compared with three algorithms applied to the lossless compression of multispectral images, namely JPEG-LS, Rice coding, and JPEG2000. Simulation tests performed on AVIRIS images have demonstrated that the proposed compression scheme is suitable for multispectral images.
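
    The spectral prediction and residual mapping can be sketched in a few lines of Python; the fixed Rice parameter and the absence of context modeling make this a simplified stand-in for the adaptive coder described above.

```python
import numpy as np

def zigzag_map(residual):
    # Map signed residuals to non-negative integers for Rice coding.
    return np.where(residual >= 0, 2 * residual, -2 * residual - 1)

def rice_encode(values, k):
    """Tiny Golomb-Rice encoder: unary quotient + k-bit remainder.
    Context modeling and adaptive parameter estimation are left out."""
    bits = []
    for v in values:
        q, r = int(v) >> k, int(v) & ((1 << k) - 1)
        bits.append("1" * q + "0" + format(r, f"0{k}b"))
    return "".join(bits)

def compress_band(cube, b, k=3):
    """Predict band b from band b-1 at the same spatial position and
    Rice-code the mapped residuals (illustrative, not the paper's coder)."""
    residual = cube[b].astype(np.int64) - cube[b - 1].astype(np.int64)
    return rice_encode(zigzag_map(residual).ravel(), k)
```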

  14. Performance evaluation of emerging JPEGXR compression standard for medical images

    International Nuclear Information System (INIS)

    Basit, M.A.

    2012-01-01

    Medical images require lossless compression as a small error due to lossy compression may be considered a diagnostic error. JPEG XR is the latest image compression standard designed for a variety of applications and supports both lossy and lossless modes. This paper provides an in-depth performance evaluation of the latest JPEG XR against existing image coding standards for medical images using lossless compression. Various medical images are used for evaluation and ten images of each organ are tested. Performance of JPEG XR is compared with JPEG2000 and JPEG-LS using mean square error, peak signal to noise ratio, mean absolute error and structural similarity index. JPEG XR shows an improvement of 20.73 dB and 5.98 dB over JPEG-LS and JPEG2000, respectively, for the various test images used in experimentation. (author)

  15. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    Science.gov (United States)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used for 2D images, 2D video, 3D images, and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC- and AC-Matrix, containing the low and high frequencies, respectively; (2) apply a second-level DCT on the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search algorithm (FMS), is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC values and the decoded AC coefficients are combined in one matrix, followed by the inverse two-level DCT with two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
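
    The first step of the pipeline can be sketched as follows in Python, using a Haar DWT as a stand-in for whichever wavelet the authors use and assuming a square image whose side is divisible by 4; the later Minimize-Matrix-Size and FMS stages are not shown.

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2-D Haar DWT (the paper may use a different wavelet);
    returns (LL, (LH, HL, HH)). Requires even dimensions."""
    x = x.astype(np.float64)
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0            # row low-pass
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0            # row high-pass
    ll, lh = (lo[0::2] + lo[1::2]) / 2.0, (lo[0::2] - lo[1::2]) / 2.0
    hl, hh = (hi[0::2] + hi[1::2]) / 2.0, (hi[0::2] - hi[1::2]) / 2.0
    return ll, (lh, hl, hh)

def two_level_dwt_then_dct(image):
    """Step (1): two DWT levels, then a DCT on the remaining low-frequency
    (DC) matrix. Assumes a square image with side divisible by 4."""
    ll1, high1 = haar_dwt2(image)
    ll2, high2 = haar_dwt2(ll1)
    n = ll2.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    dc_matrix = C @ ll2 @ C.T                        # DCT of the low-frequency matrix
    return dc_matrix, high1, high2
```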

  16. Digital Image Forgery Detection Using JPEG Features and Local Noise Discrepancies

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2014-01-01

    Full Text Available The wide availability of image processing software makes counterfeiting an easy and low-cost way to distort or conceal facts. Driven by the great need for valid forensic techniques, many methods have been proposed to expose such forgeries. In this paper, we propose an integrated algorithm able to detect two commonly used fraud practices in digital pictures: copy-move and splicing forgery. To achieve this target, a special descriptor was created for each block, combining the feature from the JPEG block artifact grid with that from noise estimation. A preliminary image quality assessment procedure reconciled these different features by setting proper weights. Experimental results showed that, compared to existing algorithms, our proposed method is effective at detecting both copy-move and splicing forgery regardless of the JPEG compression ratio of the input image.

  17. Parallel efficient rate control methods for JPEG 2000

    Science.gov (United States)

    Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko

    2017-09-01

    Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image split into code blocks, and subsequently truncate the set of generated bit streams optimally according to the maximum target bit rate constraint. The literature proposes various strategies for estimating ahead of time where a block will be truncated, in order to stop the execution prematurely and save time. However, none of them have been defined with a parallel implementation in mind. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed on GPUs. To do so, the design of our GPU-based codec is extended to allow stopping the process at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to up to 40% speedup with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out and used to select the best candidate to be deployed in a GPU encoder, which gave an extra 40% speedup in those situations where it was actually employed.

  18. Effect of CT digital image compression on detection of coronary artery calcification

    International Nuclear Information System (INIS)

    Zheng, L.M.; Sone, S.; Itani, Y.; Wang, Q.; Hanamura, K.; Asakura, K.; Li, F.; Yang, Z.G.; Wang, J.C.; Funasaka, T.

    2000-01-01

    Purpose: To test the effect of digital compression of CT images on the detection of small linear or spotted high attenuation lesions such as coronary artery calcification (CAC). Material and methods: Fifty cases with and 50 without CAC were randomly selected from a population that had undergone spiral CT of the thorax for screening lung cancer. CT image data were compressed using JPEG (Joint Photographic Experts Group) or wavelet algorithms at ratios of 10:1, 20:1 or 40:1. Five radiologists reviewed the uncompressed and compressed images on a cathode-ray-tube. Observer performance was evaluated with receiver operating characteristic analysis. Results: CT images compressed at a ratio as high as 20:1 were acceptable for primary diagnosis of CAC. There was no significant difference in the detection accuracy for CAC between JPEG and wavelet algorithms at the compression ratios up to 20:1. CT images were more vulnerable to image blurring on the wavelet compression at relatively lower ratios, and 'blocking' artifacts occurred on the JPEG compression at relatively higher ratios. Conclusion: JPEG and wavelet algorithms allow compression of CT images without compromising their diagnostic value at ratios up to 20:1 in detecting small linear or spotted high attenuation lesions such as CAC, and there was no difference between the two algorithms in diagnostic accuracy

  19. USING H.264/AVC-INTRA FOR DCT BASED SEGMENTATION DRIVEN COMPOUND IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    S. Ebenezer Juliet

    2011-08-01

    Full Text Available This paper presents a one-pass block classification algorithm for efficient coding of compound images, which consist of multimedia elements like text, graphics and natural images. The objective is to minimize the loss of visual quality of text during compression by separating text information, which needs higher spatial resolution, from the pictures and background. It segments computer screen images into text/graphics and picture/background classes based on the DCT energy in each 4x4 block, and then compresses both text/graphics pixels and picture/background blocks with H.264/AVC using a variable quantization parameter. Experimental results show that the single H.264/AVC-INTRA coder with variable quantization outperforms single coders such as JPEG and JPEG-2000 for compound images. The proposed method also improves the PSNR value significantly compared with standard JPEG and JPEG-2000, while keeping competitive compression ratios.

  20. The JPEG XT suite of standards: status and future plans

    Science.gov (United States)

    Richter, Thomas; Bruylants, Tim; Schelkens, Peter; Ebrahimi, Touradj

    2015-09-01

    The JPEG standard has known an enormous market adoption. Daily, billions of pictures are created, stored and exchanged in this format. The JPEG committee acknowledges this success and spends continued efforts in maintaining and expanding the standard specifications. JPEG XT is a standardization effort targeting the extension of the JPEG features by enabling support for high dynamic range imaging, lossless and near-lossless coding, and alpha channel coding, while also guaranteeing backward and forward compatibility with the JPEG legacy format. This paper gives an overview of the current status of the JPEG XT standards suite. It discusses the JPEG legacy specification, and details how higher dynamic range support is facilitated both for integer and floating-point color representations. The paper shows how JPEG XT's support for lossless and near-lossless coding of low and high dynamic range images is achieved in combination with backward compatibility to JPEG legacy. In addition, the extensible boxed-based JPEG XT file format on which all following and future extensions of JPEG will be based is introduced. This paper also details how the lossy and lossless representations of alpha channels are supported to allow coding transparency information and arbitrarily shaped images. Finally, we conclude by giving prospects on upcoming JPEG standardization initiative JPEG Privacy & Security, and a number of other possible extensions in JPEG XT.

  1. Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, Christopher M. [Los Alamos National Laboratory

    2012-08-13

    How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.

  2. A high-throughput two channel discrete wavelet transform architecture for the JPEG2000 standard

    Science.gov (United States)

    Badakhshannoory, Hossein; Hashemi, Mahmoud R.; Aminlou, Alireza; Fatemi, Omid

    2005-07-01

    The Discrete Wavelet Transform (DWT) is increasingly prominent in image and video compression standards, as indicated by its use in JPEG2000. The lifting scheme is an alternative DWT implementation that has lower computational complexity and reduced resource requirements. In the JPEG2000 standard, two lifting-scheme-based filter banks are introduced: the 5/3 and the 9/7. In this paper, a high-throughput, two-channel DWT architecture for both of the JPEG2000 DWT filters is presented. The proposed pipelined architecture has two separate input channels that process the incoming samples simultaneously with minimum memory requirements for each channel. The architecture has been implemented in VHDL and synthesized on a Xilinx Virtex2 XCV1000. The proposed architecture applies the DWT to a 2K by 1K image at 33 fps with a 75 MHz clock frequency. This performance is achieved with 70% fewer resources than two independent single-channel modules. The high throughput and reduced resource requirement make this architecture a proper choice for real-time applications such as Digital Cinema.
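
    For reference, the reversible 5/3 filter bank used by JPEG2000 reduces to two integer lifting steps (predict, then update); the Python sketch below shows one 1-D decomposition level for an even-length signal, with the symmetric boundary handling folded into simple edge copies. The hardware in the paper pipelines exactly these operations across two input channels.

```python
import numpy as np

def lift_53_forward(x):
    """One level of the reversible LeGall 5/3 lifting used in JPEG2000 (1-D).
    Assumes an even-length integer signal; the edge copies reproduce the
    standard's symmetric extension at the boundaries."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    even_next = np.append(even[1:], even[-1])        # even[n+1] with mirrored edge
    d = odd - (even + even_next) // 2                # predict step: detail coeffs
    d_prev = np.insert(d[:-1], 0, d[0])              # d[n-1] with mirrored edge
    s = even + (d_prev + d + 2) // 4                 # update step: approximation
    return s, d
```

    The 9/7 filter follows the same predict/update pattern with four floating-point lifting steps plus a scaling stage, which is why a single architecture can serve both filters.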

  3. Effect of high image compression on the reproducibility of cardiac Sestamibi reporting

    International Nuclear Information System (INIS)

    Thomas, P.; Allen, L.; Beuzeville, S.

    1999-01-01

    Full text: Compression algorithms have been mooted to minimize storage space and transmission times of digital images. We assessed the impact of high-level lossy compression using JPEG and wavelet algorithms on image quality and reporting accuracy of cardiac Sestamibi studies. Twenty stress/rest Sestamibi cardiac perfusion studies were reconstructed into horizontal short, vertical long and horizontal long axis slices using conventional methods. Each of these six sets of slices was aligned for reporting and saved (uncompressed) as a bitmap. This bitmap was then compressed using JPEG compression, then decompressed and saved as a bitmap for later viewing. This process was repeated using the original bitmap and wavelet compression. Finally, a second copy of the original bitmap was made. All 80 bitmaps were randomly coded to ensure blind reporting. The bitmaps were read blinded and by consensus of 2 experienced nuclear medicine physicians using a 5-point scale and 25 cardiac segments. Subjective image quality was also reported using a 3-point scale. Samples of the compressed images were also subtracted from the original bitmap for visual comparison of differences. Results showed an average compression ratio of 23:1 for wavelet and 13:1 for JPEG. Image subtraction showed only very minor discordance between the original and compressed images. There was no significant difference in subjective quality between the compressed and uncompressed images. There was no significant difference in reporting reproducibility of the identical bitmap copy, the JPEG image and the wavelet image compared with the original bitmap. Use of the high compression algorithms described had no significant impact on reporting reproducibility and subjective image quality of cardiac Sestamibi perfusion studies.

  4. Double-compression method for biomedical images

    Science.gov (United States)

    Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana

    2017-08-01

    This paper describes a double compression method (DCM) for biomedical images. A comparison of image compression factors for JPEG, PNG and the developed DCM was carried out. The main purpose of the DCM is the compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random noise image is presented.

  5. Performance of JPEG Image Transmission Using Proposed Asymmetric Turbo Code

    Directory of Open Access Journals (Sweden)

    Siddiqi Mohammad Umar

    2007-01-01

    Full Text Available This paper gives the results of a simulation study on the performance of JPEG image transmission over AWGN and Rayleigh fading channels using typical and proposed asymmetric turbo codes for error control coding. The baseline JPEG algorithm is used to compress a QCIF ("Suzie") image. The recursive systematic convolutional (RSC) encoder with generator polynomials (13/11) in decimal and a 3G interleaver are used for the typical WCDMA and CDMA2000 turbo codes. The proposed asymmetric turbo code uses generator polynomials (13/11; 13/9) in decimal and a code-matched interleaver. The effect of the interleaver in the proposed asymmetric turbo code is studied using weight distribution and simulation. The simulation results and the performance bound for the proposed asymmetric turbo code, for the given frame length and code rate with a Log-MAP decoder over the AWGN channel, are compared with the typical system. From the simulation results, it is observed that image transmission using the proposed asymmetric turbo code performs better than that with the typical system.

  6. Image and video compression for multimedia engineering fundamentals, algorithms, and standards

    CERN Document Server

    Shi, Yun Q

    2008-01-01

    Part I: Fundamentals Introduction Quantization Differential Coding Transform Coding Variable-Length Coding: Information Theory Results (II) Run-Length and Dictionary Coding: Information Theory Results (III) Part II: Still Image Compression Still Image Coding: Standard JPEG Wavelet Transform for Image Coding: JPEG2000 Nonstandard Still Image Coding Part III: Motion Estimation and Compensation Motion Analysis and Motion Compensation Block Matching Pel-Recursive Technique Optical Flow Further Discussion and Summary on 2-D Motion Estimation Part IV: Video Compression Fundam

  7. Quantization Distortion in Block Transform-Compressed Data

    Science.gov (United States)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.

  8. High bit depth infrared image compression via low bit depth codecs

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    .264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8 bit H.264/AVC codecs can...

  9. Efficient transmission of compressed data for remote volume visualization.

    Science.gov (United States)

    Krishnan, Karthik; Marcellin, Michael W; Bilgin, Ali; Nadar, Mariappan S

    2006-09-01

    One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, in so far as possible, compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available, with quality fading away from the VOI. Second, we present a prioritization that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type. The server is now capable of reordering the same compressed data file on the fly to serve data packets prioritized as per the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times and interactivity. We also present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications with the server maintaining the compressed volume data, to be browsed by a client with a low bandwidth constraint.

  10. A Posteriori Restoration of Block Transform-Compressed Data

    Science.gov (United States)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.

  11. Application of M-JPEG compression hardware to dynamic stimulus production.

    Science.gov (United States)

    Mulligan, J B

    1997-01-01

    Inexpensive circuit boards have appeared on the market which transform a normal micro-computer's disk drive into a video disk capable of playing extended video sequences in real time. This technology enables the performance of experiments which were previously impossible, or at least prohibitively expensive. The new technology achieves this capability using special-purpose hardware to compress and decompress individual video frames, enabling a video stream to be transferred over relatively low-bandwidth disk interfaces. This paper will describe the use of such devices for visual psychophysics and present the technical issues that must be considered when evaluating individual products.

  12. High bit depth infrared image compression via low bit depth codecs

    Science.gov (United States)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles, etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into two 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8 bit H.264/AVC codecs can achieve results similar to a 16 bit HEVC codec.
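
    The byte-plane mapping itself is trivial and lossless before any codec is applied, as the short Python sketch below shows; the interesting part of the paper is the joint choice of compression parameters for the MSB and LSB streams, which is not reproduced here.

```python
import numpy as np

def split_msb_lsb(img16):
    """Map a 16-bit image to two 8-bit planes as described above."""
    img16 = img16.astype(np.uint16)
    msb = (img16 >> 8).astype(np.uint8)      # most significant bytes
    lsb = (img16 & 0xFF).astype(np.uint8)    # least significant bytes
    return msb, lsb

def merge_msb_lsb(msb, lsb):
    """Lossless inverse mapping (exact before any lossy coding)."""
    return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)
```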

  13. Wavelets: Applications to Image Compression-II

    Indian Academy of Sciences (India)

    Wavelets: Applications to Image Compression-II. Sachin P ... successful application of wavelets in image com- ... b) Soft threshold: In this case, all the coefficients x ... [8] http://www.jpeg.org Official site of the Joint Photographic Experts Group.

  14. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    Science.gov (United States)

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for lossless compression of medical images, an efficient algorithm based on irregular segmentation and region-based prediction is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
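
    The flavour of LS-based prediction can be conveyed with a small Python sketch that fits three neighbour weights on a causal training window around the current pixel; the window size, the three-neighbour context and the border assumptions are ours, whereas the paper trains one predictor per region produced by its adaptive segmentation.

```python
import numpy as np

def ls_predict_pixel(img, y, x, radius=3):
    """Least-square prediction of img[y, x] from its west, north and
    north-west neighbours, with weights fitted on previously coded pixels
    in a small causal window. Assumes y >= 2 and 1 <= x <= img.shape[1] - 2
    so that the training window is non-empty."""
    feats, targets = [], []
    for j in range(max(1, y - radius), y + 1):
        i_hi = img.shape[1] - 1 if j < y else x - 1      # strictly causal on row y
        for i in range(max(1, x - radius), min(i_hi, x + radius) + 1):
            feats.append([img[j, i - 1], img[j - 1, i], img[j - 1, i - 1]])
            targets.append(img[j, i])
    A = np.asarray(feats, dtype=float)
    b = np.asarray(targets, dtype=float)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)            # fit the 3 weights
    context = np.array([img[y, x - 1], img[y - 1, x], img[y - 1, x - 1]], float)
    return float(context @ w)                            # predicted value
```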

  15. Toward privacy-preserving JPEG image retrieval

    Science.gov (United States)

    Cheng, Hang; Wang, Jingyue; Wang, Meiqing; Zhong, Shangping

    2017-07-01

    This paper proposes a privacy-preserving retrieval scheme for JPEG images based on local variance. Three parties are involved in the scheme: the content owner, the server, and the authorized user. The content owner encrypts JPEG images for privacy protection by jointly using permutation cipher and stream cipher, and then, the encrypted versions are uploaded to the server. With an encrypted query image provided by an authorized user, the server may extract blockwise local variances in different directions without knowing the plaintext content. After that, it can calculate the similarity between the encrypted query image and each encrypted database image by a local variance-based feature comparison mechanism. The authorized user with the encryption key can decrypt the returned encrypted images with plaintext content similar to the query image. The experimental results show that the proposed scheme not only provides effective privacy-preserving retrieval service but also ensures both format compliance and file size preservation for encrypted JPEG images.

  16. High bit depth infrared image compression via low bit depth codecs

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-01-01

    images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed.......264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8 bit H.264/AVC codecs can...

  17. Digitized hand-wrist radiographs: comparison of subjective and software-derived image quality at various compression ratios.

    Science.gov (United States)

    McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R

    2007-05-01

    The objectives of this study were to compare the effect of JPEG 2000 compression of hand-wrist radiographs on observers' qualitative assessment of image quality, and to compare this assessment with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who rated image quality on a scale of 1 to 5. A quantitative analysis was also performed using readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P quality. When we used quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios compared with the original TIFF images. There was excellent correlation (R² > 0.92) between the qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.

  18. FBCOT: a fast block coding option for JPEG 2000

    Science.gov (United States)

    Taubman, David; Naman, Aous; Mathew, Reji

    2017-09-01

    Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, termed Fast Block Coding with Optimized Truncation (FBCOT), achieving much higher encoding and decoding throughputs with only a modest loss in coding efficiency.

  19. MEDICAL IMAGE COMPRESSION USING HYBRID CODER WITH FUZZY EDGE DETECTION

    Directory of Open Access Journals (Sweden)

    K. Vidhya

    2011-02-01

    Full Text Available Medical imaging techniques produce prohibitive amounts of digitized clinical data. Compression of medical images is a must due to large memory space required for transmission and storage. This paper presents an effective algorithm to compress and to reconstruct medical images. The proposed algorithm first extracts edge information of medical images by using fuzzy edge detector. The images are decomposed using Cohen-Daubechies-Feauveau (CDF wavelet. The hybrid technique utilizes the efficient wavelet based compression algorithms such as JPEG2000 and Set Partitioning In Hierarchical Trees (SPIHT. The wavelet coefficients in the approximation sub band are encoded using tier 1 part of JPEG2000. The wavelet coefficients in the detailed sub bands are encoded using SPIHT. Consistent quality images are produced by this method at a lower bit rate compared to other standard compression algorithms. Two main approaches to assess image quality are objective testing and subjective testing. The image quality is evaluated by objective quality measures. Objective measures correlate well with the perceived image quality for the proposed compression algorithm.

  20. Multiband CCD Image Compression for Space Camera with Large Field of View

    Directory of Open Access Journals (Sweden)

    Jin Li

    2014-01-01

    Full Text Available A space multiband CCD camera compression encoder requires low complexity, high robustness, and high performance, because the captured image information is very precious and because the encoder usually works on a satellite where resources such as power, memory, and processing capacity are limited. However, traditional compression approaches, such as JPEG2000, 3D transforms, and PCA, have high complexity. The Consultative Committee for Space Data Systems-Image Data Compression (CCSDS-IDC) algorithm decreases the average PSNR by 2 dB compared with JPEG2000. In this paper, we propose a low-complexity compression algorithm based on deep coupling among a post-transform in the wavelet domain, compressive sensing, and distributed source coding. In our algorithm, we integrate three low-complexity and high-performance approaches in a deeply coupled manner to remove spatial redundancy, spectral redundancy, and bit-level information redundancy. Experimental results on multiband CCD images show that the proposed algorithm significantly outperforms the traditional approaches.

  1. Compression for radiological images

    Science.gov (United States)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.

  2. An analytical look at the effects of compression on medical images

    OpenAIRE

    Persons, Kenneth; Palisson, Patrice; Manduca, Armando; Erickson, Bradley J.; Savcenko, Vladimir

    1997-01-01

    This article will take an analytical look at how lossy Joint Photographic Experts Group (JPEG) and wavelet image compression techniques affect medical image content. It begins with a brief explanation of how the JPEG and wavelet algorithms work, and describes in general terms what effect they can have on image quality (removal of noise, blurring, and artifacts). It then focuses more specifically on medical image diagnostic content and explains why subtle pathologies, that may be difficult for...

  3. Switching theory-based steganographic system for JPEG images

    Science.gov (United States)

    Cherukuri, Ravindranath C.; Agaian, Sos S.

    2007-04-01

    Cellular communications constitute a significant portion of the global telecommunications market. Therefore, the need for secured communication over a mobile platform has increased exponentially. Steganography is the art of hiding critical data in an innocuous signal, which answers the above need. JPEG is one of the formats commonly used for storing and transmitting images on the web; in addition, the pictures captured using mobile cameras are mostly in JPEG format. In this article, we introduce a switching theory-based steganographic system for JPEG images which is applicable to mobile and computer platforms. The proposed algorithm uses the fact that the energy distribution among the quantized AC coefficients varies from block to block and coefficient to coefficient. Existing approaches are effective with a part of these coefficients, but when employed over all the coefficients they show their ineffectiveness. Therefore, we propose an approach that works on each set of AC coefficients with a different framework, thus enhancing the performance of the approach. The proposed system offers high capacity and embedding efficiency simultaneously, while withstanding simple statistical attacks. In addition, the embedded information can be retrieved without prior knowledge of the cover image. Based on simulation results, the proposed method demonstrates an improved embedding capacity over existing algorithms while maintaining a high embedding efficiency and preserving the statistics of the JPEG image after hiding information.

  4. Enhancing Perceived Quality of Compressed Images and Video with Anisotropic Diffusion and Fuzzy Filtering

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Korhonen, Jari; Forchhammer, Søren

    2013-01-01

    and subjective results on JPEG compressed images, as well as MJPEG and H.264/AVC compressed video, indicate that the proposed algorithms employing directional and spatial fuzzy filters achieve better artifact reduction than other methods. In particular, robust improvements with H.264/AVC video have been gained...

  5. Region of interest and windowing-based progressive medical image delivery using JPEG2000

    Science.gov (United States)

    Nagaraj, Nithin; Mukhopadhyay, Sudipta; Wheeler, Frederick W.; Avila, Ricardo S.

    2003-05-01

    An important telemedicine application is the perusal of CT scans (digital format) from a central server housed in a healthcare enterprise across a bandwidth constrained network by radiologists situated at remote locations for medical diagnostic purposes. It is generally expected that a viewing station respond to an image request by displaying the image within 1-2 seconds. Owing to limited bandwidth, it may not be possible to deliver the complete image in such a short period of time with traditional techniques. In this paper, we investigate progressive image delivery solutions by using JPEG 2000. An estimate of the time taken in different network bandwidths is performed to compare their relative merits. We further make use of the fact that most medical images are 12-16 bits, but would ultimately be converted to an 8-bit image via windowing for display on the monitor. We propose a windowing progressive RoI technique to exploit this and investigate JPEG 2000 RoI based compression after applying a favorite or a default window setting on the original image. Subsequent requests for different RoIs and window settings would then be processed at the server. For the windowing progressive RoI mode, we report a 50% reduction in transmission time.
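
    The window setting applied before RoI compression is the standard window-center/window-width mapping from 12-16 bit data to 8 bits; a minimal Python version is given below, with the clipping and scaling conventions chosen for illustration.

```python
import numpy as np

def apply_window(img, center, width):
    """Map a 12-16 bit image to 8 bits with a window (level/width) setting,
    as done before display. The server-side scheme above compresses the RoI
    of the windowed image first; `center`/`width` would come from the
    favourite or default preset."""
    lo, hi = center - width / 2.0, center + width / 2.0
    out = (np.clip(img.astype(np.float64), lo, hi) - lo) / max(hi - lo, 1e-9)
    return (out * 255.0 + 0.5).astype(np.uint8)
```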

  6. Quality Evaluation and Nonuniform Compression of Geometrically Distorted Images Using the Quadtree Distortion Map

    Directory of Open Access Journals (Sweden)

    Cristina Costa

    2004-09-01

    Full Text Available The paper presents an analysis of the effects of lossy compression algorithms applied to images affected by geometrical distortion. It will be shown that the encoding-decoding process results in a nonhomogeneous image degradation in the geometrically corrected image, due to the different amount of information associated with each pixel. A distortion measure named quadtree distortion map (QDM), able to quantify this aspect, is proposed. Furthermore, the QDM is exploited to achieve adaptive compression of geometrically distorted pictures, in order to ensure a uniform quality on the final image. Tests are performed using the JPEG and JPEG2000 coding standards in order to quantitatively and qualitatively assess the performance of the proposed method.

  7. An efficient multiple exposure image fusion in JPEG domain

    Science.gov (United States)

    Hebbalaguppe, Ramya; Kakarala, Ramakrishna

    2012-01-01

    In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings like ISO sensitivity, exposure time, and aperture for low-light image capture results in noise amplification, motion blur, and reduction of depth-of-field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of shorter-exposed images, image fusion, artifact removal, and saturation detection. The algorithm does not need more memory than a single JPEG macroblock to be kept in memory, making it feasible to implement as part of a digital camera's hardware image processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience that is available for JPEG.
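
    A loose pixel-domain Python sketch of the combination of sigmoidal boosting and saturation-driven fusion is given below; the actual method operates macroblock by macroblock in the JPEG domain, and the gain, midpoint and saturation constants here are invented for illustration only.

```python
import numpy as np

def sigmoid_boost(img, gain=8.0, midpoint=0.35):
    """Sigmoidal boosting of a short-exposure frame (values in [0, 1]).
    The curve parameters are illustrative, not taken from the paper."""
    return 1.0 / (1.0 + np.exp(-gain * (img - midpoint)))

def fuse_exposures(short_exp, long_exp, saturation=0.95):
    """Prefer long-exposure pixels (better SNR) except where they saturate,
    where the boosted short exposure is used instead. Inputs are assumed to
    be float arrays normalized to [0, 1]."""
    boosted = sigmoid_boost(short_exp)
    use_short = long_exp >= saturation          # saturation detection
    return np.where(use_short, boosted, long_exp)
```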

  8. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components

    Science.gov (United States)

    2015-12-24

manufacturing today (namely, the 14 nm FinFET silicon CMOS technology). The JPEG algorithm is selected as a motivational example since it is widely ... TIFF images of a U.S. Air Force F-16 aircraft provided by the University of Southern California Signal and Image Processing Institute (SIPI) image ... silicon CMOS technology currently in high-volume manufacturing today (the 14 nm FinFET silicon CMOS technology). The main contribution of this

  9. About a method for compressing x-ray computed microtomography data

    Science.gov (United States)

    Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš

    2018-04-01

The management of scientific data is of high importance, especially for experimental techniques that produce large data volumes. One such technique is x-ray computed tomography (CT), whose community has introduced advanced data formats that allow for better management of experimental data. Rather than the organization of the data and the associated metadata, the main topic of this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy), covering images acquired from various types of samples. This study addresses parallel-beam geometry, but it could easily be extended to a cone-beam one. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques and does so by applying it to experimental data. Beyond the methodology framework, this study presents and examines the use of JPEG-XR in combination with the HDF5 and TIFF formats, providing insights and strategies on data compression and image quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears to be a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.

  10. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    Science.gov (United States)

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.
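    A minimal sketch of the variable-block-size partitioning idea follows. It is not the authors' method: block variance stands in for the local fractal dimension used in the paper, and the threshold and block sizes are illustrative assumptions.

```python
import numpy as np

def quadtree_blocks(sub, x=0, y=0, size=None, max_var=50.0, min_size=4):
    """Recursively split a (square, power-of-two) wavelet subband into
    variable-size blocks: smooth regions stay large, complex regions are
    split further. Variance stands in for the local fractal dimension."""
    if size is None:
        size = sub.shape[0]
    block = sub[y:y + size, x:x + size]
    if size <= min_size or block.var() <= max_var:
        return [(x, y, size)]
    half = size // 2
    out = []
    for dy in (0, half):
        for dx in (0, half):
            out += quadtree_blocks(sub, x + dx, y + dy, half, max_var, min_size)
    return out

subband = np.random.randn(64, 64) * 10
blocks = quadtree_blocks(subband)      # list of (x, y, size) sub-blocks
```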

  11. Development and assessment of compression technique for medical images using neural network. I. Assessment of lossless compression

    International Nuclear Information System (INIS)

    Fukatsu, Hiroshi

    2007-01-01

This paper describes an assessment of the lossless compression of a new efficient compression technique (the JIS system) using a neural network that the author and co-workers have recently developed. First, the theory of encoding and decoding the data is explained. Assessment is done on 55 images each of chest digital roentgenography, digital mammography, 64-row multi-slice CT, 1.5 Tesla MRI, positron emission tomography (PET) and digital subtraction angiography, which are lossless-compressed by the present JIS system to determine the compression rate and loss. For comparison, the same data are also JPEG lossless-compressed. The personal computer (PC) used is an Apple MacBook Pro configured with Boot Camp for a Windows environment. The present JIS system is found to be more than 4 times as efficient as the usual compression methods, compressing the file volume to only 1/11 on average, and is thus an important response to the growing volume of medical imaging data. (R.T.)

  12. Edge-Based Image Compression with Homogeneous Diffusion

    Science.gov (United States)

    Mainberger, Markus; Weickert, Joachim

    It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
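    A minimal sketch of the homogeneous diffusion inpainting step (the steady state of the Laplace equation with the stored edge values as boundary data) is given below. It is not the authors' implementation: Jacobi iterations with periodic boundaries are used for brevity, and the mask construction is a toy example.

```python
import numpy as np

def diffusion_inpaint(known, mask, n_iter=2000):
    """Fill the unknown pixels (mask == False) with the steady state of
    homogeneous diffusion, i.e. an approximate solution of the Laplace
    equation with the encoded edge values as Dirichlet data.
    Periodic boundaries via np.roll are a simplification."""
    u = np.where(mask, known, known[mask].mean()).astype(np.float64)
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, known, avg)        # keep encoded values fixed
    return u

img = np.zeros((32, 32))
img[:, 16:] = 255.0                            # a single vertical edge
mask = np.zeros_like(img, dtype=bool)
mask[:, 14:18] = True                          # only edge neighbourhoods kept
restored = diffusion_inpaint(img * mask, mask)
```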

  13. A Comparative Study of Compression Methods and the Development of CODEC Program of Biological Signal for Emergency Telemedicine Service

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, T.S.; Kim, J.S. [Changwon National University, Changwon (Korea); Lim, Y.H. [Visionite Co., Ltd., Seoul (Korea); Yoo, S.K. [Yonsei University, Seoul (Korea)

    2003-05-01

In an emergency telemedicine system such as the High-quality Multimedia based Real-time Emergency Telemedicine (HMRET) service, it is very important to examine the status of the patient continuously using the multimedia data including the biological signals (ECG, BP, respiration, S{sub p}O{sub 2}) of the patient. In order to transmit these data in real time through communication channels of limited transmission capacity, it is also necessary to compress the biological data besides the other multimedia data. For this purpose, we investigate and compare ECG compression techniques in the time domain and in the wavelet transform domain, and present an effective lossless compression method for the biological signals using the JPEG Huffman table for an emergency telemedicine system. For the HMRET service, we developed a lossless compression and reconstruction program for the biological signals in MSVC++ 6.0 using the DPCM method and the JPEG Huffman table, and tested it in an Internet environment. (author). 15 refs., 17 figs., 7 tabs.
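    The DPCM stage of such a lossless codec can be sketched as follows; this is only an illustration (the entropy-coding stage with the JPEG Huffman table is omitted, and the sample values are synthetic).

```python
import numpy as np

def dpcm_encode(signal):
    """Lossless DPCM: keep the first sample plus the successive differences.
    The differences cluster around zero, so an entropy coder (e.g. the JPEG
    Huffman table used in the paper) can then shrink them efficiently."""
    diffs = np.diff(signal.astype(np.int32))
    return signal[0], diffs

def dpcm_decode(first, diffs):
    return np.concatenate(([first], first + np.cumsum(diffs)))

ecg = np.array([512, 515, 519, 518, 530, 600, 580, 540], dtype=np.int32)
first, residuals = dpcm_encode(ecg)
assert np.array_equal(dpcm_decode(first, residuals), ecg)  # perfectly reversible
```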

  14. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2012-01-01

Full Text Available An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.

  15. Unequal Error Protected JPEG 2000 Broadcast Scheme with Progressive Fountain Codes

    OpenAIRE

    Chen, Zhao; Xu, Mai; Yin, Luiguo; Lu, Jianhua

    2012-01-01

    This paper proposes a novel scheme, based on progressive fountain codes, for broadcasting JPEG 2000 multimedia. In such a broadcast scheme, progressive resolution levels of images/video have been unequally protected when transmitted using the proposed progressive fountain codes. With progressive fountain codes applied in the broadcast scheme, the resolutions of images (JPEG 2000) or videos (MJPEG 2000) received by different users can be automatically adaptive to their channel qualities, i.e. ...

  16. LDPC-based iterative joint source-channel decoding for JPEG2000.

    Science.gov (United States)

    Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane

    2007-02-01

    A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.

  17. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    Science.gov (United States)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking effects at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper in the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. The discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to perform superiorly to JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  18. Compression of digital images in radiology. Results of a consensus conference; Kompression digitaler Bilddaten in der Radiologie. Ergebnisse einer Konsensuskonferenz

    Energy Technology Data Exchange (ETDEWEB)

    Loose, R. [Klinikum Nuernberg-Nord (Germany). Inst. fuer Diagnostische und Interventionelle Radiologie; Braunschweig, R. [BG Kliniken Bergmannstrost, Halle/Saale (Germany). Klinik fuer Bildgebende Diagnostik und Interventionsradiologie; Kotter, E. [Universitaetsklinikum Freiburg (Germany). Abt. Roentgendiagnostik; Mildenberger, P. [Mainz Univ. (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie; Simmler, R.; Wucherer, M. [Klinikum Nuernberg (Germany). Inst. fuer Medizinische Physik

    2009-01-15

Purpose: Recommendations for lossy compression of digital radiological DICOM images in Germany, arrived at by means of a consensus conference. The compression of digital radiological images has been evaluated in many studies. Even though the results demonstrate full diagnostic image quality for modality-dependent compression between 1:5 and 1:200, there are only a few clinical applications. Materials and Methods: A consensus conference with approx. 80 interested participants (radiology, industry, physics, and agencies), without individual invitation, was organized by the working groups AGIT and APT of the German Roentgen Society DRG to determine compression factors without loss of diagnostic image quality for different anatomical regions for CT, CR/DR, MR and RF/XA examinations. The consensus level was specified as at least 66%. Results: For the individual modalities the following compression factors were recommended: CT (brain) 1:5, CT (all other applications) 1:8, CR/DR (all applications except mammography) 1:10, CR/DR (mammography) 1:15, MR (all applications) 1:7, RF/XA (fluoroscopy, DSA, cardiac angio) 1:6. The recommended compression ratios are valid for JPEG and JPEG 2000/wavelet compression. Conclusion: The results may be understood as recommendations and indicate limits of compression factors with no expected reduction of diagnostic image quality. They are similar to the current national recommendations for Canada and England. (orig.)

  19. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    Science.gov (United States)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on prediction of the noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the parameters of the transformations the image will be subject to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode, but it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on a large number of real-life color images acquired by digital cameras and are shown to provide a more than two-fold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.

  20. Utilizing Cross-Layer Information to Improve Performance in JPEG2000 Decoding

    Directory of Open Access Journals (Sweden)

    Hannes Persson

    2007-01-01

    Full Text Available We focus on wireless multimedia communication and investigate how cross-layer information can be used to improve performance at the application layer, using JPEG2000 as an example. The cross-layer information is in the form of soft information from the physical layer. The soft information, which is supplied by a soft decision demodulator, yields reliability measures for the received bits and is fed into two soft input iterative JPEG2000 image decoders. When errors are detected with the error detecting mechanisms in JPEG2000, the decoders utilize the soft information to point out likely transmission errors. Hence, the decoders can correct errors and increase the image quality without making time-consuming retransmissions. We believe that the proposed decoding method utilizing soft information is suitable for a general IP-based network and that it keeps the principles of a layered structure of the protocol stack intact. Further, experimental results with images transmitted over a simulated wireless channel show that a simple decoding algorithm that utilizes soft information can give high gains in image quality compared to the standard hard-decision decoding.

  1. Fragmentation Point Detection of JPEG Images at DHT Using Validator

    Science.gov (United States)

    Mohamad, Kamaruddin Malik; Deris, Mustafa Mat

File carving is an important, practical technique for data recovery in digital forensics investigation and is particularly useful when filesystem metadata is unavailable or damaged. Research on the reassembly of JPEG files with RST markers, fragmented within the scan area, has been done before. However, fragmentation within the Define Huffman Table (DHT) segment is yet to be resolved. This paper analyzes fragmentation within the DHT area and lists all the fragmentation possibilities. Two main contributions are made in this paper. Firstly, three fragmentation points within the DHT area are listed. Secondly, a few novel validators are proposed to detect these fragmentations. The results obtained from tests done on manually fragmented JPEG files showed that all three fragmentation points within the DHT are successfully detected using the validators.

  2. Effects of Different Compression Techniques on Diagnostic Accuracies of Breast Masses on Digitized Mammograms

    International Nuclear Information System (INIS)

Zhigang Liang; Xiangying Du; Jiabin Liu; Yanhui Yang; Dongdong Rong; Xinyu Yao; Kuncheng Li

    2008-01-01

Background: The JPEG 2000 compression technique has recently been introduced into the medical imaging field. It is critical to understand the effects of this technique on the detection of breast masses in digitized images by human observers. Purpose: To evaluate whether lossless and lossy techniques affect the diagnostic results for malignant and benign breast masses on digitized mammograms. Material and Methods: A total of 90 screen-film mammograms, including craniocaudal and lateral views obtained from 45 patients, were selected by two non-observing radiologists. Of these, 22 cases were benign lesions and 23 cases were malignant. The mammographic films were digitized by a laser film digitizer and compressed to three levels (lossless, and lossy 20:1 and 40:1) using the JPEG 2000 wavelet-based image compression algorithm. Four radiologists with 10-12 years' experience in mammography interpreted the original and compressed images. The time interval was 3 weeks for each reading session. A five-point malignancy scale was used, with a score of 1 corresponding to definitely not a malignant mass, a score of 2 referring to not a malignant mass, a score of 3 meaning possibly a malignant mass, a score of 4 being probably a malignant mass, and a score of 5 interpreted as definitely a malignant mass. The radiologists' performance was evaluated using receiver operating characteristic analysis. Results: The average Az values for all radiologists decreased from 0.8933 for the original uncompressed images to 0.8299 for the images compressed at 40:1. This difference was not statistically significant. The detection accuracy of the original images was better than that of the compressed images, and the Az values decreased with increasing compression ratio. Conclusion: Digitized mammograms compressed at 40:1 could be used as a substitute for the original images in the diagnosis of breast cancer.

  3. Optimal erasure protection for scalably compressed video streams with limited retransmission.

    Science.gov (United States)

    Taubman, David; Thie, Johnson

    2005-08-01

    This paper shows how the priority encoding transmission (PET) framework may be leveraged to exploit both unequal error protection and limited retransmission for RD-optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. Conversely, the PET framework has not been harnessed by the substantial body of previous work on RD optimized hybrid forward error correction/automatic repeat request schemes. We limit our attention to sources which can be modeled as independently compressed frames (e.g., video frames), where each element in the scalable representation of each frame can be transmitted in one or both of two transmission slots. An optimization algorithm determines the level of protection which should be assigned to each element in each slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements which are being transmitted for the first time with those which are being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show how the PET framework allows for a decoupled optimization algorithm with only modest complexity. Experimental results obtained with Motion JPEG2000 compressed video demonstrate that substantial performance benefits can be obtained using the proposed framework.

  4. Regional variance of visually lossless threshold in compressed chest CT images: Lung versus mediastinum and chest wall

    International Nuclear Information System (INIS)

    Kim, Tae Jung; Lee, Kyoung Ho; Kim, Bohyoung; Kim, Kil Joong; Chun, Eun Ju; Bajpai, Vasundhara; Kim, Young Hoon; Hahn, Seokyung; Lee, Kyung Won

    2009-01-01

    Objective: To estimate the visually lossless threshold (VLT) for the Joint Photographic Experts Group (JPEG) 2000 compression of chest CT images and to demonstrate the variance of the VLT between the lung and mediastinum/chest wall. Subjects and methods: Eighty images were compressed reversibly (as negative control) and irreversibly to 5:1, 10:1, 15:1 and 20:1. Five radiologists determined if the compressed images were distinguishable from their originals in the lung and mediastinum/chest wall. Exact tests for paired proportions were used to compare the readers' responses between the reversible and irreversible compressions and between the lung and mediastinum/chest wall. Results: At reversible, 5:1, 10:1, 15:1, and 20:1 compressions, 0%, 0%, 3-49% (p < .004, for three readers), 69-99% (p < .001, for all readers), and 100% of the 80 image pairs were distinguishable in the lung, respectively; and 0%, 0%, 74-100% (p < .001, for all readers), 100%, and 100% were distinguishable in the mediastinum/chest wall, respectively. The image pairs were less frequently distinguishable in the lung than in the mediastinum/chest wall at 10:1 (p < .001, for all readers) and 15:1 (p < .001, for two readers). In 321 image comparisons, the image pairs were indistinguishable in the lung but distinguishable in the mediastinum/chest wall, whereas there was no instance of the opposite. Conclusion: For JPEG2000 compression of chest CT images, the VLT is between 5:1 and 10:1. The lung is more tolerant to the compression than the mediastinum/chest wall.

  5. Regional variance of visually lossless threshold in compressed chest CT images: Lung versus mediastinum and chest wall

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Jung [Department of Radiology, Seoul National University Bundang Hospital, 300 Gumi-dong, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center (Korea, Republic of); Lee, Kyoung Ho [Department of Radiology, Seoul National University Bundang Hospital, 300 Gumi-dong, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center (Korea, Republic of)], E-mail: kholee@snubhrad.snu.ac.kr; Kim, Bohyoung; Kim, Kil Joong; Chun, Eun Ju; Bajpai, Vasundhara; Kim, Young Hoon [Department of Radiology, Seoul National University Bundang Hospital, 300 Gumi-dong, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center (Korea, Republic of); Hahn, Seokyung [Medical Research Collaborating Center, Seoul National University Hospital, 28 Yongon-dong, Chongno-gu, Seoul 110-744 (Korea, Republic of); Seoul National University College of Medicine (Korea, Republic of); Lee, Kyung Won [Department of Radiology, Seoul National University Bundang Hospital, 300 Gumi-dong, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center (Korea, Republic of)

    2009-03-15

    Objective: To estimate the visually lossless threshold (VLT) for the Joint Photographic Experts Group (JPEG) 2000 compression of chest CT images and to demonstrate the variance of the VLT between the lung and mediastinum/chest wall. Subjects and methods: Eighty images were compressed reversibly (as negative control) and irreversibly to 5:1, 10:1, 15:1 and 20:1. Five radiologists determined if the compressed images were distinguishable from their originals in the lung and mediastinum/chest wall. Exact tests for paired proportions were used to compare the readers' responses between the reversible and irreversible compressions and between the lung and mediastinum/chest wall. Results: At reversible, 5:1, 10:1, 15:1, and 20:1 compressions, 0%, 0%, 3-49% (p < .004, for three readers), 69-99% (p < .001, for all readers), and 100% of the 80 image pairs were distinguishable in the lung, respectively; and 0%, 0%, 74-100% (p < .001, for all readers), 100%, and 100% were distinguishable in the mediastinum/chest wall, respectively. The image pairs were less frequently distinguishable in the lung than in the mediastinum/chest wall at 10:1 (p < .001, for all readers) and 15:1 (p < .001, for two readers). In 321 image comparisons, the image pairs were indistinguishable in the lung but distinguishable in the mediastinum/chest wall, whereas there was no instance of the opposite. Conclusion: For JPEG2000 compression of chest CT images, the VLT is between 5:1 and 10:1. The lung is more tolerant to the compression than the mediastinum/chest wall.

  6. Hyperspectral Imagery Throughput and Fusion Evaluation over Compression and Interpolation

    Science.gov (United States)

    2008-07-01

    PSNR = 10 log10(MAX^2 / MSE)  (17). The PSNR values and compression ratios are shown in Table 1 and a plot of PSNR against the bits per pixel (bpp) is shown ...

    PSNR (dB)   Compression ratio   bpp
    59.3        2.9:1               2.76
    46.0        9.2:1               0.87
    43.2        14.5:1              0.55
    40.8        25.0:1              0.32
    38.7        34.6:1              0.23
    35.5        62.1:1              0.13

    [Figure 11. PSNR vs. bits per pixel] ... and a plot of PSNR against the bits per pixel (bpp) is shown in Figure 13. The 3D DCT compression yielded better results than the baseline JPEG

  7. Application of reversible denoising and lifting steps with step skipping to color space transforms for improved lossless compression

    Science.gov (United States)

    Starosolski, Roman

    2016-07-01

    Reversible denoising and lifting steps (RDLS) are lifting steps integrated with denoising filters in such a way that, despite the inherently irreversible nature of denoising, they are perfectly reversible. We investigated the application of RDLS to reversible color space transforms: RCT, YCoCg-R, RDgDb, and LDgEb. In order to improve RDLS effects, we propose a heuristic for image-adaptive denoising filter selection, a fast estimator of the compressed image bitrate, and a special filter that may result in skipping of the steps. We analyzed the properties of the presented methods, paying special attention to their usefulness from a practical standpoint. For a diverse image test-set and lossless JPEG-LS, JPEG 2000, and JPEG XR algorithms, RDLS improves the bitrates of all the examined transforms. The most interesting results were obtained for an estimation-based heuristic filter selection out of a set of seven filters; the cost of this variant was similar to or lower than the transform cost, and it improved the average lossless JPEG 2000 bitrates by 2.65% for RDgDb and by over 1% for other transforms; bitrates of certain images were improved to a significantly greater extent.

  8. Image acquisition system using on sensor compressed sampling technique

    Science.gov (United States)

    Gupta, Pravir Singh; Choi, Gwan Seong

    2018-01-01

    Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.

  9. An Image Compression Scheme in Wireless Multimedia Sensor Networks Based on NMF

    Directory of Open Access Journals (Sweden)

    Shikang Kong

    2017-02-01

Full Text Available With the goal of addressing the issue of image compression in wireless multimedia sensor networks with high recovered quality and low energy consumption, an image compression and transmission scheme based on non-negative matrix factorization (NMF) is proposed in this paper. First, the NMF algorithm theory is studied. Then, a collaborative mechanism of image capture, blocking, compression and transmission is constructed. Camera nodes capture images and send them to ordinary nodes, which use an NMF algorithm for image compression. The cluster head node receives the compressed images from the ordinary nodes and transmits them to the station, which performs the image restoration. Simulation results show that, compared with the JPEG2000 and singular value decomposition (SVD) compression schemes, the proposed scheme yields a higher quality of recovered images and lower total node energy consumption. It is beneficial for reducing the burden of energy consumption and prolonging the life of the whole network system, which has great significance for practical applications of WMSNs.
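    The core NMF idea can be sketched as follows; this is only an illustration of rank-k factorization as a compression step (the rank, random test image and sklearn solver settings are assumptions, not the paper's configuration).

```python
import numpy as np
from sklearn.decomposition import NMF

# Treat an image as a non-negative matrix and keep only the two factors
# W (n x k) and H (k x m); a sensor node would transmit W and H instead
# of the full image, and the station reconstructs W @ H.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(np.float64)

k = 8                                   # rank controls the compression ratio
model = NMF(n_components=k, init="nndsvda", max_iter=500)
W = model.fit_transform(image)          # 64 x 8
H = model.components_                   # 8 x 64
reconstructed = W @ H

ratio = image.size / (W.size + H.size)  # ~4:1 for this rank
```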

  10. Artifact reduction of compressed images and video combining adaptive fuzzy filtering and directional anisotropic diffusion

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Forchhammer, Søren; Korhonen, Jari

    2011-01-01

Fuzzy filtering is one of the recently developed methods for reducing distortion in compressed images and video. In this paper, we combine the powerful anisotropic diffusion equations with fuzzy filtering in order to reduce the impact of artifacts. Based on the directional nature of the blocking and ringing artifacts, we have applied directional anisotropic diffusion. Besides that, the selection of the adaptive threshold parameter for the diffusion coefficient has also improved the performance of the algorithm. Experimental results on JPEG compressed images as well as MJPEG and H.264 compressed videos show improvement in artifact reduction of the proposed algorithm over other directional and spatial fuzzy filters.

  11. Image Compression using Haar and Modified Haar Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Mohannad Abid Shehab Ahmed

    2013-04-01

Full Text Available Efficient image compression approaches can provide the best solutions to the recent growth of data-intensive and multimedia-based applications. As presented in many papers, Haar matrix-based methods and wavelet analysis can be used in various areas of image processing such as edge detection, preserving, smoothing or filtering. In this paper, color image compression analysis and synthesis based on Haar and modified Haar is presented. The standard Haar wavelet transformation with N=2 is composed of a sequence of low-pass and high-pass filters, known as a filter bank; the vertical and horizontal Haar filters are composed to construct four 2-dimensional filters, which are applied directly to the image to speed up the implementation of the Haar wavelet transform. The modified Haar technique is studied and implemented for odd base numbers, i.e. N=3 and N=5, to generate many solution sets; these sets are tested using the energy function or a numerical method to find the optimum one. The Haar transform is simple, efficient in memory usage due to the high spread of zero values (it can exploit sparsity), and exactly reversible without the edge effects of the DCT (Discrete Cosine Transform). The implemented Matlab simulation results prove the effectiveness of DWT (Discrete Wavelet Transform) algorithms based on the Haar and Modified Haar techniques in attaining an efficient compression ratio (C.R.) and a higher peak signal-to-noise ratio (PSNR), and the resulting images are much smoother than standard JPEG, especially at high C.R. A final comparison between the standard JPEG, Haar, and Modified Haar techniques confirms the superior capability of Modified Haar.
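    A minimal sketch of one level of the standard (N=2) Haar decomposition is shown below; it uses the unnormalized average/difference form for clarity and is not the paper's Matlab implementation.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform: average and difference along
    rows, then along columns, giving the LL, LH, HL, HH subbands. Small
    high-frequency coefficients can then be discarded for compression."""
    x = img.astype(np.float64)
    # rows: averages on the left, differences on the right
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    x = np.hstack([lo, hi])
    # columns: averages on top, differences at the bottom
    lo = (x[0::2, :] + x[1::2, :]) / 2.0
    hi = (x[0::2, :] - x[1::2, :]) / 2.0
    return np.vstack([lo, hi])          # quadrants: [[LL, LH], [HL, HH]]

img = np.random.randint(0, 256, (8, 8))
coeffs = haar2d(img)
```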

  12. Wavelet-based compression of pathological images for telemedicine applications

    Science.gov (United States)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.

  13. Evaluation of Algorithms for Compressing Hyperspectral Data

    Science.gov (United States)

    Cook, Sid; Harsanyi, Joseph; Faber, Vance

    2003-01-01

    With EO-1 Hyperion in orbit NASA is showing their continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), who has an extensive heritage in HSI spectral compression and Mapping Science (MSI) for JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the necessary spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently in the process of evaluating these compression algorithms using statistical analysis and NASA scientists. We are also developing special purpose processors for executing these algorithms onboard a spacecraft.

  14. Iris Recognition: The Consequences of Image Compression

    Directory of Open Access Journals (Sweden)

Daniel A. Bishop

    2010-01-01

Full Text Available Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  15. Iris Recognition: The Consequences of Image Compression

    Science.gov (United States)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  16. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  17. The effects of lossy compression on diagnostically relevant seizure information in EEG signals.

    Science.gov (United States)

    Higgins, G; McGinley, B; Faul, S; McEvoy, R P; Glavin, M; Marnane, W P; Jones, E

    2013-01-01

This paper examines the effects of compression on EEG signals, in the context of automated detection of epileptic seizures. Specifically, it examines the use of lossy compression on EEG signals in order to reduce the amount of data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to diagnosing epileptic seizures. Two popular compression methods, JPEG2000 and SPIHT, were used. A range of compression levels was selected for both algorithms in order to compress the signals with varying degrees of loss. This compression was applied to the database of epileptiform data provided by the University of Freiburg, Germany. An automated seizure detection system (real-time EEG analysis for event detection) was used in place of a trained clinician for scoring the reconstructed data. Results demonstrate that compression by a factor of up to 120:1 can be achieved, with minimal loss in seizure detection performance as measured by the area under the receiver operating characteristic curve of the seizure detection system.

  18. Effect of the rate of chest compression familiarised in previous training on the depth of chest compression during metronome-guided cardiopulmonary resuscitation: a randomised crossover trial.

    Science.gov (United States)

    Bae, Jinkun; Chung, Tae Nyoung; Je, Sang Mo

    2016-02-12

To assess how the quality of metronome-guided cardiopulmonary resuscitation (CPR) was affected by the chest compression rate familiarised by training before the performance, and to determine a possible mechanism for any effect shown. Prospective crossover trial of simulated, one-person, chest-compression-only CPR. Participants were recruited from a medical school and two paramedic schools of South Korea. 42 senior students of a medical school and two paramedic schools were enrolled, but five dropped out due to physical restraints. Senior medical and paramedic students performed 1 min of metronome-guided CPR with chest compressions only at a speed of 120 compressions/min after training for chest compression with three different rates (100, 120 and 140 compressions/min). Friedman's test was used to compare average compression depths based on the different rates used during training. Average compression depths differed significantly according to the rate used in training, with significant differences between training at a speed of 100 compressions/min and training at speeds of 120 and 140 compressions/min. The quality of metronome-guided CPR is affected by the relative difference between the rate of metronome guidance and the chest compression rate practised in previous training. Published by the BMJ Publishing Group Limited.

  19. Evaluation of onboard hyperspectral-image compression techniques for a parallel push-broom sensor

    Energy Technology Data Exchange (ETDEWEB)

    Briles, S.

    1996-04-01

A single hyperspectral imaging sensor can produce frames with spatially-continuous rows of differing, but adjacent, spectral wavelength. If the frame sample-rate of the sensor is such that subsequent hyperspectral frames are spatially shifted by one row, then the sensor can be thought of as a parallel (in wavelength) push-broom sensor. An examination of data compression techniques for such a sensor is presented. The compression techniques are intended to be implemented onboard a space-based platform and to have implementation speeds that match the data rate of the sensor. Data partitions examined extend from individually operating on a single hyperspectral frame to operating on a data cube comprising the two spatial axes and the spectral axis. Compression algorithms investigated utilize JPEG-based image compression, wavelet-based compression and differential pulse code modulation. Algorithm performance is quantitatively presented in terms of root-mean-squared error and root-mean-squared correlation coefficient error. Implementation issues are considered in algorithm development.

  20. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    Science.gov (United States)

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

Aimed at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so they consume only a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
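    A minimal sketch of block-based compressive measurement and linear reconstruction follows. It is not the paper's scheme: a random Gaussian measurement matrix and a ridge-regularized pseudo-inverse stand in for the adaptive measurement and the MMSE-learned projection matrix, and the block size and measurement count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
B = 8                                    # block size (B*B = 64 pixels)
m = 16                                   # measurements per block (4x reduction)

Phi = rng.standard_normal((m, B * B)) / np.sqrt(m)   # measurement matrix

def measure(block):
    """Encoder side: one matrix-vector product per block."""
    return Phi @ block.reshape(-1)

# Decoder side: a fixed linear projection applied in real time.
# A ridge-regularized pseudo-inverse stands in for the learned MMSE matrix.
P = Phi.T @ np.linalg.inv(Phi @ Phi.T + 1e-2 * np.eye(m))

block = rng.integers(0, 256, (B, B)).astype(np.float64)
y = measure(block)                       # transmitted measurements
recon = (P @ y).reshape(B, B)            # linear reconstruction of the block
```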

  1. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT

    Directory of Open Access Journals (Sweden)

    Ran Li

    2018-04-01

Full Text Available Aimed at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so they consume only a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.

  2. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Benoit Macq

    2008-07-01

Full Text Available Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANETs. The proposed packet-based scheme has a low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.

  3. Cloud Optimized Image Format and Compression

    Science.gov (United States)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

Cloud-based image storage and processing requires a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud-based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that enables it to be efficiently accessed using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  4. HVS scheme for DICOM image compression: Design and comparative performance evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Prabhakar, B. [Biomedical and Engineering Division, Indian Institute of Technology Madras, Chennai 600036, Tamil Nadu (India)]. E-mail: prabhakarb@iitm.ac.in; Reddy, M. Ramasubba [Biomedical and Engineering Division, Indian Institute of Technology Madras, Chennai 600036, Tamil Nadu (India)

    2007-07-15

Advanced digital imaging technology in the medical domain demands efficient and effective DICOM image compression for progressive image transmission and picture archival. Here a compression system, which incorporates sensitivities of the HVS coded with SPIHT quantization, is discussed. The weighting factors derived from the luminance CSF are used to transform the wavelet subband coefficients to reflect the characteristics of the HVS in the best possible manner. The Mannos et al. and Daly HVS models have been used and the results are compared. To evaluate the performance, the Eskicioglu chart metric is considered. Experiments are done on both monochrome and color DICOM images of MRI, CT, OT, and CR, as well as natural and benchmark images. Images reconstructed with our technique showed improvement in visual quality and the Eskicioglu chart metric at the same compression ratios. Also, the Daly HVS model based compression shows better performance, perceptually and quantitatively, than the Mannos et al. model. Further, the 'bior4.4' wavelet filter provides better results than the 'db9' filter for this compression system. The results give strong evidence that, under common boundary conditions, our technique achieves competitive visual quality, compression ratio and coding/decoding time when compared with JPEG 2000 (Kakadu).

  5. Spectral Distortion in Lossy Compression of Hyperspectral Data

    Directory of Open Access Journals (Sweden)

    Bruno Aiazzi

    2012-01-01

Full Text Available Distortion allocation varying with wavelength in lossy compression of hyperspectral imagery is investigated, with the aim of minimizing the spectral distortion between original and decompressed data. The absolute angular error, or spectral angle mapper (SAM), is used to quantify spectral distortion, while radiometric distortions are measured by maximum absolute deviation (MAD) for near-lossless methods, for example, differential pulse code modulation (DPCM), or mean-squared error (MSE) for lossy methods, for example, spectral decorrelation followed by JPEG 2000. Two strategies of interband distortion allocation are compared: given a target average bit rate, distortion may be set to be constant with wavelength, or it may be allocated proportionally to the noise level of each band, according to the virtually lossless protocol. Comparisons with the uncompressed originals show that the average SAM of radiance spectra is minimized by constant distortion allocation to radiance data. However, variable distortion allocation according to the virtually lossless protocol yields significantly lower SAM in the case of reflectance spectra obtained from compressed radiance data, when compared with constant distortion allocation at the same compression ratio.
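    For reference, the spectral angle mapper used above can be computed per pixel as sketched below (the sample spectra are synthetic, and averaging over all pixels is left to the caller).

```python
import numpy as np

def spectral_angle(ref, test):
    """Spectral angle mapper (SAM): angle in radians between the original
    and decompressed spectrum of one pixel; 0 means no spectral distortion."""
    ref = ref.astype(np.float64)
    test = test.astype(np.float64)
    cos = np.dot(ref, test) / (np.linalg.norm(ref) * np.linalg.norm(test))
    return np.arccos(np.clip(cos, -1.0, 1.0))

original = np.array([0.21, 0.35, 0.50, 0.48, 0.30])
decoded  = np.array([0.20, 0.36, 0.49, 0.47, 0.31])
sam = spectral_angle(original, decoded)   # average this over all pixels
```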

  6. SINGLE VERSUS MULTIPLE TRIAL VECTORS IN CLASSICAL DIFFERENTIAL EVOLUTION FOR OPTIMIZING THE QUANTIZATION TABLE IN JPEG BASELINE ALGORITHM

    Directory of Open Access Journals (Sweden)

    B Vinoth Kumar

    2017-07-01

Full Text Available The quantization table is responsible for the compression/quality trade-off in the baseline Joint Photographic Experts Group (JPEG) algorithm and can therefore be viewed as an optimization problem. In the literature, Classical Differential Evolution (CDE) has been found to be a promising algorithm for generating the optimal quantization table. However, the searching capability of CDE may be limited by the generation of a single trial vector per iteration, which in turn reduces the convergence speed. This paper studies the performance of CDE when multiple trial vectors are employed in a single iteration. An extensive performance analysis has been made between CDE and CDE with multiple trial vectors in terms of the optimization process, accuracy, convergence speed and reliability. The analysis shows that CDE with multiple trial vectors improves the convergence speed of CDE, and this is confirmed using a statistical hypothesis test (t-test).
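    A minimal sketch of generating single versus multiple DE/rand/1/bin trial vectors is shown below; it is only an illustration (the F, CR, population size and fitness evaluation are not the paper's settings, and candidate quantization tables are represented as flattened length-64 vectors).

```python
import numpy as np

rng = np.random.default_rng(2)

def de_trials(pop, i, F=0.5, CR=0.9, n_trials=1):
    """Generate one or several DE/rand/1/bin trial vectors for individual i.
    n_trials=1 corresponds to classical DE; n_trials>1 to the multiple-trial
    variant studied in the paper."""
    NP, D = pop.shape
    trials = []
    for _ in range(n_trials):
        a, b, c = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True            # ensure at least one gene crosses
        trials.append(np.where(cross, mutant, pop[i]))
    return trials

# Population of candidate 8x8 quantization tables, flattened to length 64.
pop = rng.integers(1, 100, size=(20, 64)).astype(np.float64)
candidates = de_trials(pop, i=0, n_trials=4)     # multiple trial vectors
```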

  7. Perceptual Image Compression in Telemedicine

    Science.gov (United States)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to Earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications

  8. Evaluation of mammogram compression efficiency

    International Nuclear Information System (INIS)

    Przelaskowski, A.; Surowski, P.; Kukula, A.

    2005-01-01

    Lossy image coding significantly improves performance over lossless methods, but a reliable control of diagnostic accuracy regarding compressed images is necessary. The acceptable range of compression ratios must be safe with respect to as many objective criteria as possible. This study evaluates the compression efficiency of digital mammograms in both a numerically lossless (reversible) and a lossy (irreversible) manner. Effective compression methods and concepts were examined to increase archiving and telediagnosis performance. Lossless compression, as a primary applicable tool for medical applications, was verified on a set of 131 mammograms. Moreover, nine radiologists participated in the evaluation of lossy compression of mammograms. Subjective rating of diagnostically important features yielded a set of mean ratings for each test image. The lesion detection test resulted in binary decision data that were analyzed statistically. The radiologists rated and interpreted malignant and benign lesions, representative pathology symptoms, and other structures susceptible to compression distortions contained in 22 original and 62 reconstructed mammograms. Test mammograms were collected in two radiology centers over three years and then selected according to diagnostic content suitable for an evaluation of compression effects. Lossless compression efficiency of the tested coders varied, but CALIC, JPEG-LS, and SPIHT performed the best. The evaluation of lossy compression effects on detection ability was based on ROC-like analysis. Assuming a two-sided significance level of p=0.05, the null hypothesis that lower bit rate reconstructions are as useful for diagnosis as the originals was false in sensitivity tests with 0.04 bpp mammograms. However, verification of the same hypothesis with 0.1 bpp reconstructions suggested their acceptance. Moreover, the 1 bpp reconstructions were rated very similarly to the original mammograms in the diagnostic quality evaluation test, but the

  9. Feature selection, statistical modeling and its applications to universal JPEG steganalyzer

    Energy Technology Data Exchange (ETDEWEB)

    Jalan, Jaikishan [Iowa State Univ., Ames, IA (United States)

    2009-01-01

    Steganalysis deals with identifying the instances of medium(s) which carry a message for communication by concealing their existence. This research focuses on steganalysis of JPEG images, because of their ubiquitous nature and low bandwidth requirement for storage and transmission. JPEG image steganalysis is generally addressed by representing an image with lower-dimensional features such as statistical properties, and then training a classifier on the feature set to differentiate between an innocent and a stego image. Our approach is twofold: first, we propose a new feature reduction technique that applies the Mahalanobis distance to rank the features for steganalysis. Many successful steganalysis algorithms use a large number of features relative to the size of the training set and suffer from a "curse of dimensionality": a large number of feature values relative to the training data size. We apply this technique to the state-of-the-art steganalyzer proposed by Tomáš Pevný (54) to understand the feature space complexity and the effectiveness of features for steganalysis. We show that using our approach, reduced-feature steganalyzers can be obtained that perform as well as the original steganalyzer. Based on our experimental observations, we then propose a new modeling technique for steganalysis by applying a Partially Ordered Markov Model (POMM) (23) to JPEG images and using its properties to train a Support Vector Machine. The POMM generalizes the concept of local neighborhood directionality by using a partial order underlying the pixel locations. We show that the proposed steganalyzer outperforms a state-of-the-art steganalyzer when tested on many different image databases, with a total of 20,000 images. Finally, we provide a software package with a Graphical User Interface that has been developed to make this research accessible to local state forensic departments.
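
    A minimal sketch of the kind of Mahalanobis-distance feature ranking described above (my own illustration; the thesis' exact criterion may differ): rank each feature by its contribution to the Mahalanobis-style separation between the cover and stego classes.

```python
import numpy as np

def mahalanobis_rank(cover_feats, stego_feats):
    """Rank features by their contribution to the squared Mahalanobis
    distance between class means. Inputs are (n_samples, n_features) arrays;
    a higher score suggests a more useful feature."""
    diff = cover_feats.mean(axis=0) - stego_feats.mean(axis=0)
    pooled = 0.5 * (np.cov(cover_feats, rowvar=False) +
                    np.cov(stego_feats, rowvar=False))
    inv = np.linalg.pinv(pooled)
    contrib = diff * (inv @ diff)     # per-feature terms summing to the distance
    return np.argsort(contrib)[::-1]

cover = np.random.randn(200, 50)
stego = np.random.randn(200, 50) + np.linspace(0, 0.5, 50)  # later features shift more
print(mahalanobis_rank(cover, stego)[:10])
```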

  10. Architecture for dynamically reconfigurable real-time lossless compression

    Science.gov (United States)

    Carter, Alison J.; Audsley, Neil C.

    2004-05-01

    Image compression is a computationally intensive task, which can be undertaken most efficiently by dedicated hardware. If a portable device is to carry out real-time compression on a variety of image types, then it may be useful to reconfigure the circuitry dynamically. Using commercial off-the-shelf (COTS) chips, reconfiguration is usually implemented by a complete re-load from memory, but it is also possible to perform a partial reconfiguration. This work studies the use of programmable hardware devices to implement the lossless JPEG compression algorithm in real time on a stream of independent image frames. The data rate is faster than can be compressed serially in hardware by a single processor, so the operation is split amongst several processors. These are implemented as programmable circuits, together with the necessary buffering of input and output data. The timing of input and output, bearing in mind the differing, context-dependent amounts of data due to Huffman coding, is analyzed using storage-timing graphs. Because there may be differing parameters from one frame to the next, several different configurations are prepared and stored, ready to load as required. The scheduling of these reconfigurations, and the distribution/recombination of data streams, is studied, giving an analysis of the real-time performance.

  11. View compensated compression of volume rendered images for remote visualization.

    Science.gov (United States)

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S

    2009-07-01

    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this transmits rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.

  12. A New Multistage Lattice Vector Quantization with Adaptive Subband Thresholding for Image Compression

    Directory of Open Access Journals (Sweden)

    J. Soraghan

    2007-01-01

    Full Text Available Lattice vector quantization (LVQ) reduces coding complexity and computation due to its regular structure. A new multistage LVQ (MLVQ) using an adaptive subband thresholding technique is presented and applied to image compression. The technique concentrates on reducing the quantization error of the quantized vectors by “blowing out” the residual quantization errors with an LVQ scale factor. The significant coefficients of each subband are identified using an optimum adaptive thresholding scheme for each subband. A variable length coding procedure using Golomb codes is used to compress the codebook index, which produces a very efficient and fast technique for entropy coding. Experimental results using the MLVQ are shown to be significantly better than JPEG 2000 and the recent VQ techniques for various test images.
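
    As a side note on the entropy-coding stage mentioned above, a minimal Golomb-Rice coder for non-negative indices might look like the following (an illustrative sketch, not the authors' coder; Golomb-Rice is the power-of-two special case of Golomb coding):

```python
def golomb_rice_encode(values, k):
    """Encode non-negative integers with Golomb-Rice parameter k
    (divisor m = 2**k). Returns a bit string."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0")                      # unary quotient
        bits.append(format(r, f"0{k}b") if k else "")   # k-bit remainder
    return "".join(bits)

def golomb_rice_decode(bitstream, k, count):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bitstream[i] == "1":
            q, i = q + 1, i + 1
        i += 1                                          # skip terminating '0'
        r = int(bitstream[i:i + k], 2) if k else 0
        i += k
        out.append((q << k) | r)
    return out

codes = golomb_rice_encode([3, 0, 7, 12], k=2)
print(codes, golomb_rice_decode(codes, k=2, count=4))
```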

  13. A New Multistage Lattice Vector Quantization with Adaptive Subband Thresholding for Image Compression

    Directory of Open Access Journals (Sweden)

    Salleh MFM

    2007-01-01

    Full Text Available Lattice vector quantization (LVQ) reduces coding complexity and computation due to its regular structure. A new multistage LVQ (MLVQ) using an adaptive subband thresholding technique is presented and applied to image compression. The technique concentrates on reducing the quantization error of the quantized vectors by "blowing out" the residual quantization errors with an LVQ scale factor. The significant coefficients of each subband are identified using an optimum adaptive thresholding scheme for each subband. A variable length coding procedure using Golomb codes is used to compress the codebook index, which produces a very efficient and fast technique for entropy coding. Experimental results using the MLVQ are shown to be significantly better than JPEG 2000 and the recent VQ techniques for various test images.

  14. Adaptive Binary Arithmetic Coder-Based Image Feature and Segmentation in the Compressed Domain

    Directory of Open Access Journals (Sweden)

    Hsi-Chin Hsin

    2012-01-01

    Full Text Available Image compression is necessary in various applications, especially for efficient transmission over a band-limited channel. It is thus desirable to be able to segment an image in the compressed domain directly, so that the burden of decompression computation can be avoided. Motivated by the adaptive binary arithmetic coder (MQ coder) of JPEG2000, we propose an efficient scheme to segment the feature vectors that are extracted from the code stream of an image. We modify the Compression-based Texture Merging (CTM) algorithm to alleviate the influence of the overmerging problem by making use of the rate-distortion information. Experimental results show that the MQ coder-based image segmentation is preferable in terms of the boundary displacement error (BDE) measure. It has the advantage of saving computational cost, as the segmentation results even at low rates of bits per pixel (bpp) are satisfactory.

  15. Research on lossless compression of true color RGB image with low time and space complexity

    Science.gov (United States)

    Pan, ShuLin; Xie, ChengJun; Xu, Lin

    2008-12-01

    This paper eliminates the correlated redundancy in space and energy by using a DWT lifting scheme and reduces the complexity of the image by using an algebraic transform among the RGB components. An improved Rice coding algorithm is proposed, which presents an enumerating DWT lifting scheme that fits images of any size through image renormalization. The algorithm codes and decodes the pixels of an image without backtracking; it supports LOCO-I and can also be applied to other coders/decoders. Simulation analysis indicates that the proposed method can achieve high lossless image compression. Compared with lossless JPEG, PNG (Microsoft), PNG (Rene), PNG (Photoshop), PNG (Anix PicViewer), PNG (ACDSee), PNG (Ulead Photo Explorer), JPEG2000, PNG (KoDa Inc), SPIHT and JPEG-LS, the lossless compression ratio improved by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5% and 10%, respectively, on 24 RGB images provided by KoDa Inc. On a Pentium IV with a 2.20 GHz CPU and 256 MB RAM, the proposed coder is about 21 times faster than SPIHT with a performance efficiency gain of about 166%, and the decoder is about 17 times faster than SPIHT with a performance efficiency gain of about 128%.

  16. Fast Bayesian JPEG Decompression and Denoising With Tight Frame Priors

    Czech Academy of Sciences Publication Activity Database

    Šorel, Michal; Bartoš, Michal

    2017-01-01

    Roč. 26, č. 1 (2017), s. 490-501 ISSN 1057-7149 R&D Projects: GA ČR(CZ) GA16-13830S Institutional support: RVO:67985556 Keywords: image processing * image restoration * JPEG Subject RIV: JD - Computer Applications, Robotics OBOR OECD: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8) Impact factor: 4.828, year: 2016 http://library.utia.cas.cz/separaty/2017/ZOI/sorel-0471741.pdf

  17. Design and Implementation of an Embedded NIOS II System for JPEG2000 Tier II Encoding

    Directory of Open Access Journals (Sweden)

    John M. McNichols

    2013-01-01

    Full Text Available This paper presents a novel implementation of the JPEG2000 standard as a system on a chip (SoC). While most of the research in this field centers on acceleration of the EBCOT Tier I encoder, this work focuses on an embedded solution for EBCOT Tier II. Specifically, this paper proposes using an embedded softcore processor to perform Tier II processing as the back end of an encoding pipeline. The Altera NIOS II processor is chosen for the implementation and is coupled with existing embedded processing modules to realize a fully embedded JPEG2000 encoder. The design is synthesized on a Stratix IV FPGA and is shown to outperform other comparable SoC implementations by 39% in computation time.

  18. A New Approach for Fingerprint Image Compression

    Energy Technology Data Exchange (ETDEWEB)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to about 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection would take 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a better compression ratio than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore, in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme allows compression ratios of 20:1 to be achieved without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate assumption. Since the transform produces 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. Then we discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.

  19. Parity Bit Replenishment for JPEG 2000-Based Video Streaming

    Directory of Open Access Journals (Sweden)

    François-Olivier Devaux

    2009-01-01

    Full Text Available This paper envisions coding with side information to design a highly scalable video codec. To achieve fine-grained scalability in terms of resolution, quality, and spatial access as well as temporal access to individual frames, the JPEG 2000 coding algorithm has been considered as the reference algorithm to encode INTRA information, and coding with side information has been envisioned to refresh the blocks that change between two consecutive images of a video sequence. One advantage of coding with side information compared to conventional closed-loop hybrid video coding schemes lies in the fact that parity bits are designed to correct stochastic errors and not to encode deterministic prediction errors. This enables the codec to support some desynchronization between the encoder and the decoder, which is particularly helpful for adapting pre-encoded content on the fly to fluctuating network resources and/or user preferences in terms of regions of interest. Regarding the coding scheme itself, to preserve both quality scalability and compliance with the JPEG 2000 wavelet representation, particular attention has been devoted to the definition of a practical coding framework able to exploit not only the temporal but also the spatial correlation among wavelet subband coefficients, while computing the parity bits on subsets of wavelet bit-planes. Simulations have shown that, compared to pure INTRA-based conditional replenishment solutions, the addition of the parity bits option decreases the transmission cost in terms of bandwidth, while preserving access flexibility.

  20. Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.

    Science.gov (United States)

    Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen

    2018-07-01

    Millions of user-generated images are uploaded to social media sites like Facebook daily, which translates into a large storage cost. However, there exists an asymmetry between upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation, mainly during download of the requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then, during download, the system exploits known signal priors (a sparsity prior and a graph-signal smoothness prior) for reverse mapping to recover the original fine quantization bin indices, with either a deterministic guarantee (lossless mode) or a statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs that are tailored for specific clusters of similar blocks, which are classified via a tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information using a differential distributed source coding scheme to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).
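
    To make the bin-matching problem concrete, here is a toy sketch (not the paper's system) of re-encoding a quantized coefficient with a coarser step and then listing the fine-step bin indices consistent with the coarse bin; resolving this ambiguity is exactly what the signal priors are used for.

```python
import numpy as np

def requantize(fine_idx, q_fine, q_coarse):
    """Map a fine quantization bin index to the coarser bin containing its
    reconstruction value (the storage-saving step at upload time)."""
    value = fine_idx * q_fine                 # dequantized coefficient
    return int(np.round(value / q_coarse))    # coarser bin index

def candidate_fine_bins(coarse_idx, q_fine, q_coarse):
    """All fine bin indices whose reconstruction falls inside the coarse bin;
    the reverse mapping must pick one of these using the priors."""
    lo = (coarse_idx - 0.5) * q_coarse
    hi = (coarse_idx + 0.5) * q_coarse
    first = int(np.ceil(lo / q_fine))
    last = int(np.floor(hi / q_fine))
    return list(range(first, last + 1))

coarse = requantize(fine_idx=7, q_fine=4, q_coarse=10)
print(coarse, candidate_fine_bins(coarse, q_fine=4, q_coarse=10))
```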

  1. High-speed low-complexity video coding with EDiCTius: a DCT coding proposal for JPEG XS

    Science.gov (United States)

    Richter, Thomas; Fößel, Siegfried; Keinert, Joachim; Scherl, Christian

    2017-09-01

    At its 71st meeting, the JPEG committee issued a call for low-complexity, high-speed image coding, designed to address the needs of low-cost video-over-IP applications. As an answer to this call, Fraunhofer IIS and the Computing Center of the University of Stuttgart jointly developed an embedded DCT image codec requiring only minimal resources while maximizing throughput on FPGA and GPU implementations. Objective and subjective tests performed for the 73rd meeting confirmed its excellent performance and suitability for its purpose, and it was selected as one of the two key contributions for the development of a joint test model. In this paper, the authors describe the design principles of the codec, give a high-level overview of the encoder and decoder chain, and provide evaluation results on the test corpus selected by the JPEG committee.

  2. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases in order to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences, particularly for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the compared compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of the DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base.
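
    As a baseline for the bit-assignment idea, the sketch below (my own, not DNABIT Compress itself, which uses variable-length codes for repeated segments) packs each base into two bits, the trivial 2.0 bits/base bound that segment-based codes try to beat.

```python
BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BITS_TO_BASE = {v: k for k, v in BASE_TO_BITS.items()}

def pack_sequence(seq):
    """Pack a DNA string into bytes at 2 bits/base (no repeat modeling)."""
    out, acc, nbits = bytearray(), 0, 0
    for base in seq:
        acc = (acc << 2) | BASE_TO_BITS[base]
        nbits += 2
        if nbits == 8:
            out.append(acc)
            acc, nbits = 0, 0
    if nbits:
        out.append(acc << (8 - nbits))        # pad the last byte
    return bytes(out), len(seq)

def unpack_sequence(data, length):
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            if len(bases) < length:
                bases.append(BITS_TO_BASE[(byte >> shift) & 0b11])
    return "".join(bases)

packed, n = pack_sequence("ACGTACGTTG")
print(unpack_sequence(packed, n))
```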

  3. Transformation-based exploration of data parallel architecture for customizable hardware : a JPEG encoder case study

    NARCIS (Netherlands)

    Corvino, R.; Diken, E.; Gamatié, A.; Jozwiak, L.

    2012-01-01

    In this paper, we present a method for the design of MPSoCs for complex data-intensive applications. This method aims at a combined exploration of the communication, the memory system architecture and the computation resource parallelism. The proposed method is exemplified with a JPEG encoder case study.

  4. Computed Quality Assessment of MPEG4-compressed DICOM Video Data.

    Science.gov (United States)

    Frankewitsch, Thomas; Söhnlein, Sven; Müller, Marcel; Prokosch, Hans-Ulrich

    2005-01-01

    Digital Imaging and Communication in Medicine (DICOM) has become one of the most popular standards in medicine. This standard specifies the exact procedures by which digital images are exchanged between devices, either over a network or on storage media. Sources for images vary; therefore definitions exist for the exchange of CR, CT, NMR, angiography, sonography images and so on. As the standard spreads and the number of sources included increases, the data volume increases too, affecting both storage and traffic. While data compression for long-term storage is generally not accepted at the moment, there are many situations where data compression is possible: telemedicine for educational purposes (e.g. students at home using low-speed internet connections), presentations with standard-resolution video projectors, or supply to the wards combined with receiving written findings. DICOM includes compression: for still images there is JPEG, and for video MPEG-2 has been adopted. Within the last years MPEG-2 has evolved into MPEG-4, which compresses data even better, but the risk of significant errors increases too. The effects of compression have been analyzed for entertainment movies, but these are not comparable to videos of physical examinations (e.g. echocardiography). In medical videos an individual image plays a more important role: erroneous single images affect the total quality even more. Additionally, the effect of compression cannot be generalized from one test series to all videos; the result depends strongly on the source. Some investigations have been presented in which videos compressed with different MPEG-4 algorithms were compared and rated manually, but they describe only the results in a selected testbed. In this paper some methods derived from video rating are presented and discussed for an automated quality control of the compression of medical videos, primarily stored in DICOM containers.
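
    A simple building block for the kind of automated quality control described above is a frame-wise PSNR between the original and the compressed video; a minimal numpy sketch (my own illustration, not the paper's metric set):

```python
import numpy as np

def frame_psnr(original, compressed, peak=255.0):
    """PSNR in dB for one 8-bit frame (any shape, e.g. HxWx3)."""
    mse = np.mean((original.astype(np.float64) -
                   compressed.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def video_quality_profile(orig_frames, comp_frames):
    """Per-frame PSNR; single bad frames show up as dips in this profile,
    which matters more for medical video than for entertainment content."""
    return [frame_psnr(o, c) for o, c in zip(orig_frames, comp_frames)]

orig = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(5)]
comp = [np.clip(f + np.random.randint(-2, 3, f.shape), 0, 255).astype(np.uint8)
        for f in orig]
print(["%.1f" % p for p in video_quality_profile(orig, comp)])
```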

  5. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases in order to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences, particularly for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the compared compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of the DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base. PMID:21383923

  6. Video steganography based on bit-plane decomposition of wavelet-transformed video

    Science.gov (United States)

    Noda, Hideki; Furuta, Tomofumi; Niimi, Michiharu; Kawaguchi, Eiji

    2004-06-01

    This paper presents a steganography method using lossy compressed video, which provides a natural way to send a large amount of secret data. The proposed method is based on wavelet compression for video data and bit-plane complexity segmentation (BPCS) steganography. BPCS steganography makes use of bit-plane decomposition and the characteristics of the human vision system, where noise-like regions in the bit-planes of a dummy image are replaced with secret data without deteriorating image quality. In wavelet-based video compression methods such as the 3-D set partitioning in hierarchical trees (SPIHT) algorithm and Motion-JPEG2000, the wavelet coefficients of the discrete wavelet transformed video are quantized into a bit-plane structure, and therefore BPCS steganography can be applied in the wavelet domain. 3-D SPIHT-BPCS steganography and Motion-JPEG2000-BPCS steganography are presented and tested; they are the integration of 3-D SPIHT video coding with BPCS steganography and of Motion-JPEG2000 with BPCS, respectively. Experimental results show that 3-D SPIHT-BPCS is superior to Motion-JPEG2000-BPCS with regard to embedding performance. In 3-D SPIHT-BPCS steganography, embedding rates of around 28% of the compressed video size are achieved for a twelve-bit representation of wavelet coefficients with no noticeable degradation in video quality.
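
    The BPCS criterion mentioned above decides whether a bit-plane block is "noise-like" from its border complexity, i.e., the fraction of adjacent bit pairs that differ; blocks above a threshold are replaced with secret data. A small sketch under these assumptions (illustrative only):

```python
import numpy as np

def bpcs_complexity(block):
    """Border complexity of a binary block: changes between horizontal and
    vertical neighbours, normalised by the maximum possible number."""
    block = np.asarray(block, dtype=np.uint8)
    h_changes = np.sum(block[:, 1:] != block[:, :-1])
    v_changes = np.sum(block[1:, :] != block[:-1, :])
    rows, cols = block.shape
    max_changes = rows * (cols - 1) + cols * (rows - 1)
    return (h_changes + v_changes) / max_changes

rng = np.random.default_rng(1)
noisy = rng.integers(0, 2, (8, 8))          # noise-like region
flat = np.zeros((8, 8), dtype=np.uint8)     # informative / flat region
print(bpcs_complexity(noisy), bpcs_complexity(flat))
# Blocks with complexity above a threshold (commonly around 0.3) would be
# replaced with (possibly conjugated) secret-data blocks.
```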

  7. A Multiresolution Image Completion Algorithm for Compressing Digital Color Images

    Directory of Open Access Journals (Sweden)

    R. Gomathi

    2014-01-01

    Full Text Available This paper introduces a new framework for image coding that uses an image inpainting method. In the proposed algorithm, the input image is subjected to image analysis to remove some of its portions purposefully. At the same time, edges are extracted from the input image and passed to the decoder in a compressed form. The edges transmitted to the decoder act as assistant information and help the inpainting process fill in the missing regions at the decoder. Textural synthesis and a new shearlet inpainting scheme based on the theory of the p-Laplacian operator are proposed for image restoration at the decoder. Shearlets have been mathematically proven to represent distributed discontinuities such as edges better than traditional wavelets and are a suitable tool for edge characterization. This novel shearlet p-Laplacian inpainting model can effectively reduce the staircase effect of the Total Variation (TV) inpainting model while still preserving edges as well as the TV model does. In the proposed scheme, a neural network is employed to improve the compression ratio for image coding. Test results are compared with the JPEG 2000 and H.264 intra-coding algorithms. The results show that the proposed algorithm works well.

  8. Building a Steganography Program Including How to Load, Process, and Save JPEG and PNG Files in Java

    Science.gov (United States)

    Courtney, Mary F.; Stix, Allen

    2006-01-01

    Instructors teaching beginning programming classes are often interested in exercises that involve processing photographs (i.e., files stored as .jpeg). They may wish to offer activities such as color inversion, the color manipulation effects achieved with pixel thresholding, or steganography, all of which Stevenson et al. [4] assert are sought by…

  9. Optimum image compression rate maintaining diagnostic image quality of digital intraoral radiographs

    International Nuclear Information System (INIS)

    Song, Ju Seop; Koh, Kwang Joon

    2000-01-01

    The aims of the present study were to determine the optimum compression rate in terms of file size reduction and diagnostic quality of the images after compression, and to evaluate the transmission speed of the original and compressed images. The material consisted of 24 extracted human premolars and molars. The occlusal and proximal surfaces of the teeth had a clinical disease spectrum that ranged from sound to varying degrees of fissure discoloration and cavitation. The images from the Digora system were exported in TIFF, and the images from conventional intraoral film were scanned and digitized to TIFF with a Nikon SF-200 scanner (Nikon, Japan). Six compression factors were chosen and applied on the basis of the results from a pilot study. The total number of images to be assessed was 336. Three radiologists assessed the occlusal and proximal surfaces of the teeth on a 5-point scale. The teeth were finally diagnosed as either sound or carious by one expert oral pathologist. Sensitivity, specificity, and kappa values for diagnostic agreement were calculated. The areas (Az) under the ROC curves were also calculated, and paired t-tests and one-way ANOVA were performed. Thereafter, the transmission time of the image files at each compression level was compared with that of the original image files. No significant difference was found between the original and the corresponding images up to a 7% (1:14) compression ratio for both occlusal and proximal caries (p<0.05). JPEG3 (1:14) image files were transmitted more than 10 times faster than the original image files while maintaining the diagnostic information in the image. A 1:14 compressed image file may therefore be used instead of the original image, reducing storage needs and transmission time.

  10. An FPGA-Based People Detection System

    Directory of Open Access Journals (Sweden)

    James J. Clark

    2005-05-01

    Full Text Available This paper presents an FPGA-based system for detecting people from video. The system is designed to use JPEG-compressed frames from a network camera. Unlike previous approaches that use techniques such as background subtraction and motion detection, we use a machine-learning-based approach to train an accurate detector. We address the hardware design challenges involved in implementing such a detector, along with JPEG decompression, on an FPGA. We also present an algorithm that efficiently combines JPEG decompression with the detection process. This algorithm carries out the inverse DCT step of JPEG decompression only partially. Therefore, it is computationally more efficient and simpler to implement, and it takes up less space on the chip than the full inverse DCT algorithm. The system is demonstrated on an automated video surveillance application and the performance of both hardware and software implementations is analyzed. The results show that the system can detect people accurately at a rate of about 2.5 frames per second on a Virtex-II 2V1000 using a MicroBlaze processor running at 75 MHz, communicating with dedicated hardware over FSL links.

  11. TIFF, GIF, and PNG: get the picture?

    Science.gov (United States)

    Kabachinski, Jeff

    2007-01-01

    GIF, JPEG, and PNG are most likely the best formats to use, for three reasons. First, they're standardized and open formats for anyone to use. In addition, JPEG is an ISO standard, and PNG is an IETF RFC (Internet Engineering Task Force Request for Comments, www.ietf.org) and a W3C recommendation (World Wide Web Consortium, www.w3.org). Second, they're compressible. GIF files are generally compressed at 5:1, JPEG at 10:1 or 20:1, and PNG at about 7:1. Finally, they're all supported by web browsers. Well, pretty much. Microsoft's Internet Explorer doesn't support the alpha channel transparency for PNG; but, on the other hand, GIF and JPEG don't have the alpha channel at all. Use TIFF to archive your original pictures, as it is a lossless format. Check out the summary table and sidebar for more information regarding these picture file formats.

  12. Astronomical Image Compression Techniques Based on ACC and KLT Coder

    Directory of Open Access Journals (Sweden)

    J. Schindler

    2011-01-01

    Full Text Available This paper deals with the compression of image data in astronomy applications. Astronomical images have typical specific properties — high grayscale bit depth, size, noise occurrence and special processing algorithms. They belong to the class of scientific images. Their processing and compression is quite different from the classical approach of multimedia image processing. The database of images from BOOTES (Burst Observer and Optical Transient Exploring System) has been chosen as a source of the testing signal. BOOTES is a Czech-Spanish robotic telescope for observing AGN (active galactic nuclei) and searching for the optical transients of GRB (gamma-ray bursts). This paper discusses an approach based on an analysis of the statistical properties of the image data. A comparison of two irrelevancy reduction methods is presented from a scientific (astrometric and photometric) point of view. The first method is based on a statistical approach, using the Karhunen-Loeve transform (KLT) with uniform quantization in the spectral domain. The second technique is derived from wavelet decomposition with adaptive selection of the prediction coefficients used. Finally, a comparison of three redundancy reduction methods is discussed. The multimedia format JPEG2000 and HCOMPRESS, designed especially for astronomical images, are compared with the new Astronomical Context Coder (ACC), which is based on adaptive median regression.
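
    For reference, the statistical (KLT-based) irrelevancy reduction mentioned above amounts to projecting image blocks onto the eigenvectors of their covariance matrix and quantizing in that basis; a compact numpy sketch under these assumptions (not the BOOTES pipeline itself):

```python
import numpy as np

def klt_basis(blocks):
    """Karhunen-Loeve basis from a set of flattened image blocks
    (n_blocks, block_size). Returns the mean and eigenvectors sorted by energy."""
    mean = blocks.mean(axis=0)
    cov = np.cov(blocks - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    return mean, eigvecs[:, order]

def klt_round_trip(blocks, basis_mean, basis, keep, step=8.0):
    """Project onto the first 'keep' KLT components, uniformly quantize the
    spectral coefficients, and reconstruct."""
    coeffs = (blocks - basis_mean) @ basis[:, :keep]
    coeffs = np.round(coeffs / step) * step          # uniform quantization
    return coeffs @ basis[:, :keep].T + basis_mean

rng = np.random.default_rng(0)
blocks = rng.normal(100, 20, size=(500, 64))         # e.g. 8x8 blocks, flattened
mean, basis = klt_basis(blocks)
recon = klt_round_trip(blocks, mean, basis, keep=16)
print("RMSE:", float(np.sqrt(np.mean((blocks - recon) ** 2))))
```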

  13. Lossless Image Compression Based on Multiple-Tables Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Rung-Ching Chen

    2009-01-01

    Full Text Available This paper is intended to present a lossless image compression method based on a multiple-tables arithmetic coding (MTAC) method to encode a gray-level image f. First, the MTAC method employs a median edge detector (MED) to reduce the entropy rate of f. The gray levels of two adjacent pixels in an image are usually similar. A base-switching transformation approach is then used to reduce the spatial redundancy of the image. The gray levels of some pixels in an image are more common than those of others. Finally, the arithmetic encoding method is applied to reduce the coding redundancy of the image. To promote high performance of the arithmetic encoding method, the MTAC method first classifies the data and then encodes each cluster of data using a distinct code table. The experimental results show that, in most cases, the MTAC method provides a higher efficiency in use of storage space than the lossless JPEG2000 does.
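
    The median edge detector used in the first stage is the same gradient-adjusted predictor popularized by LOCO-I/JPEG-LS; a small sketch of computing the prediction residual image with it (an illustration, not the paper's code):

```python
import numpy as np

def med_predict(left, above, upper_left):
    """Median edge detector (LOCO-I / JPEG-LS predictor)."""
    if upper_left >= max(left, above):
        return min(left, above)
    if upper_left <= min(left, above):
        return max(left, above)
    return left + above - upper_left

def med_residuals(img):
    """Prediction residuals of a grayscale image; their entropy is lower
    than that of the raw pixels, which is what the coder exploits."""
    img = img.astype(np.int32)
    res = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            a = img[r, c - 1] if c > 0 else 0
            b = img[r - 1, c] if r > 0 else 0
            d = img[r - 1, c - 1] if r > 0 and c > 0 else 0
            res[r, c] = img[r, c] - med_predict(a, b, d)
    return res

img = np.tile(np.arange(64, dtype=np.uint8), (64, 1))   # smooth ramp image
print(np.abs(med_residuals(img)).mean())                # small residuals
```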

  14. Energy Efficiency of Task Allocation for Embedded JPEG Systems

    Directory of Open Access Journals (Sweden)

    Yang-Hsin Fan

    2014-01-01

    Full Text Available Embedded systems are everywhere, repeatedly performing a few particular functions. Well-known products include consumer electronics, smart home applications, telematics devices, and so forth. Recently, the development methodology of embedded systems has been applied to the design of cloud embedded systems, so the applications of embedded systems are becoming more diverse. However, the more an embedded system works, the more energy it consumes. This study presents hyperrectangle technology (HT) for embedded systems to obtain energy savings. The HT adopts a drift effect to construct embedded systems with more hardware circuits than software components, or vice versa. It can quickly construct an embedded system from a set of hardware circuits and software components. Moreover, it is of great benefit for quickly exploring the energy consumption of various embedded systems. The effects are presented by assessing JPEG benchmarks. Experimental results demonstrate that the HT achieves energy savings of 29.84%, 2.07%, and 68.80% on average relative to GA, GHO, and Lin, respectively.

  15. Energy efficiency of task allocation for embedded JPEG systems.

    Science.gov (United States)

    Fan, Yang-Hsin; Wu, Jan-Ou; Wang, San-Fu

    2014-01-01

    Embedded systems are everywhere, repeatedly performing a few particular functions. Well-known products include consumer electronics, smart home applications, telematics devices, and so forth. Recently, the development methodology of embedded systems has been applied to the design of cloud embedded systems, so the applications of embedded systems are becoming more diverse. However, the more an embedded system works, the more energy it consumes. This study presents hyperrectangle technology (HT) for embedded systems to obtain energy savings. The HT adopts a drift effect to construct embedded systems with more hardware circuits than software components, or vice versa. It can quickly construct an embedded system from a set of hardware circuits and software components. Moreover, it is of great benefit for quickly exploring the energy consumption of various embedded systems. The effects are presented by assessing JPEG benchmarks. Experimental results demonstrate that the HT achieves energy savings of 29.84%, 2.07%, and 68.80% on average relative to GA, GHO, and Lin, respectively.

  16. A novel strategy to access high resolution DICOM medical images based on JPEG2000 interactive protocol

    Science.gov (United States)

    Tian, Yuan; Cai, Weihua; Sun, Jianyong; Zhang, Jianguo

    2008-03-01

    The demand for sharing medical information has kept rising. However, the transmission and display of high-resolution medical images are limited if the network has a low transmission speed or the terminal devices have limited resources. In this paper, we present an approach based on the JPEG2000 Interactive Protocol (JPIP) to browse high-resolution medical images in an efficient way. We designed and implemented an interactive image communication system with a client/server architecture and integrated it with a Picture Archiving and Communication System (PACS). In our interactive image communication system, the JPIP server works as the middleware between clients and PACS servers. Both desktop clients and wireless mobile clients can browse high-resolution images stored in PACS servers by accessing the JPIP server. The client makes only simple requests, which identify the resolution, quality and region of interest, and downloads selected portions of the JPEG2000 code-stream instead of downloading and decoding the entire code-stream. After receiving a request from a client, the JPIP server downloads the requested image from the PACS server and then responds to the client by sending the appropriate code-stream. We also tested the performance of the JPIP server. The JPIP server runs stably and reliably under heavy load.
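
    For illustration, a JPIP client request of the kind described above is an HTTP query naming the target image, the desired resolution (frame size) and the region of interest; the sketch below builds such a request with commonly used JPIP fields (field names per ISO/IEC 15444-9; the server URL, image name and exact parameter handling are assumptions):

```python
from urllib.parse import urlencode

def build_jpip_request(server, target, frame_w, frame_h,
                       roi_x, roi_y, roi_w, roi_h, max_bytes=50000):
    """Compose a JPIP-style HTTP GET for a region of interest at a given
    resolution. fsiz = requested frame size, roff/rsiz = region offset/size,
    len = byte budget for the response."""
    params = {
        "target": target,
        "fsiz": f"{frame_w},{frame_h}",
        "roff": f"{roi_x},{roi_y}",
        "rsiz": f"{roi_w},{roi_h}",
        "len": max_bytes,
    }
    return f"{server}?{urlencode(params)}"

# Hypothetical server and image name, for illustration only.
print(build_jpip_request("http://jpip-server.example/jpip",
                         "study123/slice047.jp2",
                         1024, 1024, 256, 256, 512, 512))
```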

  17. Reconfigurable Secure Video Codec Based on DWT and AES Processor

    Directory of Open Access Journals (Sweden)

    Rached Tourki

    2010-01-01

    Full Text Available In this paper, we propose a secure video codec based on the discrete wavelet transform (DWT) and the Advanced Encryption Standard (AES) processor. Video coding with the DWT and encryption with AES are each well known; the contribution of this work is to link the two designs to achieve secure video coding. The contributions are as follows. First, a new method for image and video compression is proposed. This codec is a synthesis of JPEG and JPEG2000, implemented using Huffman coding from JPEG and the DWT from JPEG2000. Furthermore, an improved motion estimation algorithm is proposed. Second, the encryption-decryption effects are achieved by the AES processor. AES is used to encrypt groups of LL bands. The prominent feature of this method is the encryption of the LL bands by AES-128 (128-bit keys), AES-192 (192-bit keys), or AES-256 (256-bit keys). Third, we focus on a method that implements partial encryption of the LL bands. Our approach provides considerable levels of security (key size, partial encryption, encryption mode) and has very limited adverse impact on the compression efficiency. The proposed codec can provide up to 9 cipher schemes within a reasonable software cost. Latency, correlation, PSNR and compression rate results are analyzed and shown.

  18. A novel JPEG steganography method based on modulus function with histogram analysis

    Directory of Open Access Journals (Sweden)

    V. Banoci

    2012-06-01

    Full Text Available In this paper, we present a novel steganographic method for embedding secret data in still grayscale JPEG images. In order to provide a large capacity while maintaining good visual quality of the stego-image, the embedding process is performed on the quantized Discrete Cosine Transform (DCT) coefficients, modifying them according to a modulo function, which gives the steganographic system a blind-extraction capability. The after-embedding histogram of the proposed Modulo Histogram Fitting (MHF) method is analyzed to secure the steganographic system against steganalysis attacks. In addition, AES ciphering was implemented to increase security and to improve the after-embedding histogram characteristics of the proposed steganographic system, as experimental results show.
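
    A toy version of modulo-based embedding in quantized DCT coefficients, to make the idea concrete (my own sketch; MHF's actual mapping and histogram fitting are more involved): each coefficient is moved to the nearest value whose residue modulo m equals the next message digit, so extraction is just a modulo operation and hence blind.

```python
import numpy as np

def embed_modulo(coeffs, digits, m=4):
    """Embed base-m digits by moving each coefficient to the nearest value
    c' with c' % m == digit.  (Practical JPEG methods additionally avoid
    creating or using zero-valued coefficients; omitted here for brevity.)"""
    out = coeffs.astype(np.int64)
    flat = out.reshape(-1)                     # view onto 'out'
    for i, digit in enumerate(digits):
        c = flat[i]
        base = c - (c % m)                     # % is non-negative for m > 0
        candidates = [base + digit - m, base + digit, base + digit + m]
        flat[i] = min(candidates, key=lambda v: abs(v - c))
    return out

def extract_modulo(coeffs, n_digits, m=4):
    """Blind extraction: the message is simply the coefficients modulo m."""
    return [int(c % m) for c in coeffs.reshape(-1)[:n_digits]]

rng = np.random.default_rng(0)
dct = rng.integers(-20, 21, size=(8, 8))       # stand-in for a quantized DCT block
msg = [3, 1, 0, 2, 2, 1]
stego = embed_modulo(dct, msg, m=4)
print(extract_modulo(stego, len(msg)) == msg, int(np.max(np.abs(stego - dct))))
```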

  19. Detect Image Tamper by Semi-Fragile Digital Watermarking

    Institute of Scientific and Technical Information of China (English)

    LIU Feilong; WANG Yangsheng

    2004-01-01

    To authenticate the integrity of an image while resisting some valid image processing operations such as JPEG compression, a semi-fragile image watermarking scheme is described. The image name, one of the image features, is used as the key of a pseudo-random function to generate watermarks specific to each image. Watermarks are embedded by changing the relationship between the blocks' DCT DC coefficients, and image tampering is detected from the relationship of these DCT DC coefficients. Experimental results show that the proposed technique can resist JPEG compression while detecting image tampering.

  20. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel...... compression algorithm that allows transparent incorporation of various estimates for probability distribution. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques....

  1. A New RTL Design Approach for a DCT/IDCT-Based Image Compression Architecture using the mCBE Algorithm

    Directory of Open Access Journals (Sweden)

    Rachmad Vidya Wicaksana Putra

    2012-09-01

    Full Text Available In the literature, several approaches to designing a DCT/IDCT-based image compression system have been proposed. In this paper, we present a new RTL design approach whose main focus is developing a DCT/IDCT-based image compression architecture using a self-created algorithm. This algorithm can efficiently minimize the number of shifter-adders needed to substitute for multipliers. We call this new algorithm the multiplication from Common Binary Expression (mCBE) algorithm. Besides this algorithm, we propose alternative quantization numbers, which can be implemented simply as shifters in digital hardware. Mostly, these numbers can retain a good compressed-image quality compared to the JPEG recommendations. These ideas lead to a design that is small in circuit area, multiplierless, and low in complexity. The proposed 8-point 1D-DCT design has only six stages, while the 8-point 1D-IDCT design has only seven stages (one stage being defined as equal to the delay of one shifter or 2-input adder). By using pipelining, we can achieve a high-speed architecture with latency as a trade-off consideration. The design has been synthesized and can reach a critical path delay of 1.41 ns (709.22 MHz).

  2. Compressive laser ranging.

    Science.gov (United States)

    Babbitt, Wm Randall; Barber, Zeb W; Renner, Christoffer

    2011-12-15

    Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.

  3. Mutual Image-Based Authentication Framework with JPEG2000 in Wireless Environment

    Directory of Open Access Journals (Sweden)

    Ginesu G

    2006-01-01

    Full Text Available Currently, together with the development of wireless connectivity, the need for a reliable and user-friendly authentication system becomes ever more important. New applications, such as e-commerce or home banking, require a strong level of protection, allowing for verification of a legitimate user's identity and enabling the user to distinguish trusted servers from shadow ones. A novel framework for image-based authentication (IBA) is then proposed and evaluated. In order to provide mutual authentication, the proposed method integrates an IBA password technique with a challenge-response scheme based on a shared secret key for image scrambling. The wireless environment is mainly addressed by the proposed system, which tries to overcome the severe constraints on security, data transmission capability, and user friendliness imposed by such an environment. In order to achieve these results, the system offers a strong solution for authentication, taking into account usability and avoiding the need for hardware upgrades. Data and application scalability is provided through the JPEG2000 standard and the JPIP framework.

  4. Effects of compression and individual variability on face recognition performance

    Science.gov (United States)

    McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.

    2004-08-01

    The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats are still being developed through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images. The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images of volunteers have been collected. The image database and FR system are being used to analyze the effects of lossy image compression, individual differences such as eyeglasses and facial hair, and the acquisition environment on FR system performance. Images are compressed by varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured. The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both

  5. Effect of image compression and scaling on automated scoring of immunohistochemical stainings and segmentation of tumor epithelium

    Directory of Open Access Journals (Sweden)

    Konsti Juho

    2012-03-01

    Full Text Available Abstract Background Digital whole-slide scanning of tissue specimens produces large images demanding increasing storage capacity. To reduce the need for extensive data storage systems, image files can be compressed and scaled down. The aim of this article is to study the effect of different levels of image compression and scaling on automated image analysis of immunohistochemical (IHC) stainings and automated tumor segmentation. Methods Two tissue microarray (TMA) slides containing 800 samples of breast cancer tissue immunostained against the Ki-67 protein and two TMA slides containing 144 samples of colorectal cancer immunostained against EGFR were digitized with a whole-slide scanner. The TMA images were JPEG2000 wavelet compressed with four compression ratios: lossless, and 1:12, 1:25 and 1:50 lossy compression. Each of the compressed breast cancer images was furthermore scaled down to 1:1, 1:2, 1:4, 1:8, 1:16, 1:32, 1:64 or 1:128. Breast cancer images were analyzed using an algorithm that quantitates the extent of staining in Ki-67 immunostained images, and EGFR immunostained colorectal cancer images were analyzed with an automated tumor segmentation algorithm. The automated tools were validated by comparing the results from losslessly compressed and non-scaled images with results from conventional visual assessments. Percentage agreement and kappa statistics were calculated between results from compressed and scaled images and results from lossless and non-scaled images. Results Both of the studied image analysis methods showed good agreement between visual and automated results. In the automated IHC quantification, an agreement of over 98% and a kappa value of over 0.96 were observed between losslessly compressed and non-scaled images and combined compression ratios up to 1:50 and scaling down to 1:8. In automated tumor segmentation, an agreement of over 97% and a kappa value of over 0.93 were observed between losslessly compressed images and
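
    The agreement statistics quoted above (percentage agreement and kappa) can be reproduced with a few lines of numpy; a sketch for binary per-sample results (illustrative only, not the study's code):

```python
import numpy as np

def percent_agreement(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return np.mean(a == b)

def cohens_kappa(a, b):
    """Cohen's kappa for two categorical ratings of the same samples."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    po = np.mean(a == b)                                         # observed agreement
    pe = sum(np.mean(a == l) * np.mean(b == l) for l in labels)  # chance agreement
    return (po - pe) / (1 - pe)

# e.g. positivity calls from lossless images vs. 1:25-compressed images
ref  = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])
comp = np.array([1, 1, 0, 1, 0, 1, 1, 1, 0, 1])
print(percent_agreement(ref, comp), round(cohens_kappa(ref, comp), 3))
```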

  6. Data compression considerations for detectors with local intelligence

    International Nuclear Information System (INIS)

    Garcia-Sciveres, M; Wang, X

    2014-01-01

    This note summarizes the outcome of discussions about how data compression considerations apply to tracking detectors with local intelligence. The method for analyzing data compression efficiency is taken from a previous publication and applied to module characteristics from the WIT2014 workshop. We explore local intelligence and coupled layer structures in the language of data compression. In this context the original intelligent tracker concept of correlating hits to find matches of interest and discard others is just a form of lossy data compression. We now explore how these features (intelligence and coupled layers) can be exploited for lossless compression, which could enable full readout at higher trigger rates than previously envisioned, or even triggerless

  7. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

    Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of the warped stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show an improvement in PSNR compared to compression via uniform downsampling.

  8. JPEG2000-coded image error concealment exploiting convex sets projections.

    Science.gov (United States)

    Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio

    2005-04-01

    Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of errors is the most annoying but can be concealed by exploiting the spatial correlation of the signal, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the latter is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged, by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It has been observed that uniform LP filtering brought some undesired side effects that negatively compensated the advantages. This problem has been overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrate the efficiency of the proposed approach.
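
    The projection-onto-convex-sets loop described above alternates two projections: smooth the image in the spatial domain, then reimpose the wavelet coefficients known to be uncorrupted. A compact sketch with PyWavelets under those assumptions (the paper's adaptive, edge-driven filter-size selection is not reproduced, a fixed 3x3 filter is used instead):

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def pocs_conceal(damaged, good_mask, wavelet="db4", level=3, iters=20):
    """Alternate spatial low-pass filtering with restoration of the
    known-good wavelet coefficients (good_mask: True where coefficients
    survived the transmission errors)."""
    ref_coeffs, slices = pywt.coeffs_to_array(
        pywt.wavedec2(damaged, wavelet, level=level))
    img = damaged.copy()
    for _ in range(iters):
        img = uniform_filter(img, size=3)                    # projection 1: smoothness
        arr, _ = pywt.coeffs_to_array(pywt.wavedec2(img, wavelet, level=level))
        arr[good_mask] = ref_coeffs[good_mask]               # projection 2: data fidelity
        img = pywt.waverec2(
            pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
    return img

img = np.random.rand(128, 128)
coeffs, _ = pywt.coeffs_to_array(pywt.wavedec2(img, "db4", level=3))
mask = np.ones_like(coeffs, dtype=bool)     # pretend every coefficient survived
print(pocs_conceal(img, mask).shape)
```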

  9. JPIP proxy server with prefetching strategies based on user-navigation model and semantic map

    OpenAIRE

    Monteagudo Pereira, José Lino

    2013-01-01

    The efficient transmission of large resolution images and, in particular, the interactive transmission of images in a client-server scenario, is an important aspect for many applications. Among the current image compression standards, JPEG2000 excels for its interactive transmission capabilities. In general, three mechanisms are employed to optimize the transmission of images when using the JPEG2000 Interactive Protocol (JPIP): 1) packet re-sequencing at the server; 2) prefetching at the clie...

  10. Multiple descriptions based on multirate coding for JPEG 2000 and H.264/AVC.

    Science.gov (United States)

    Tillo, Tammam; Baccaglini, Enrico; Olmo, Gabriella

    2010-07-01

    Multiple description coding (MDC) makes use of redundant representations of multimedia data to achieve resiliency. Descriptions should be generated so that the quality obtained when decoding a subset of them only depends on their number and not on the particular received subset. In this paper, we propose a method based on the principle of encoding the source at several rates, and properly blending the data encoded at different rates to generate the descriptions. The aim is to achieve efficient redundancy exploitation, and easy adaptation to different network scenarios by means of fine tuning of the encoder parameters. We apply this principle to both JPEG 2000 images and H.264/AVC video data. We consider as the reference scenario the distribution of contents on application-layer overlays with multiple-tree topology. The experimental results reveal that our method favorably compares with state-of-art MDC techniques.

  11. Effect of the rate of chest compression familiarised in previous training on the depth of chest compression during metronome-guided cardiopulmonary resuscitation: a randomised crossover trial

    OpenAIRE

    Bae, Jinkun; Chung, Tae Nyoung; Je, Sang Mo

    2016-01-01

    Objectives To assess how the quality of metronome-guided cardiopulmonary resuscitation (CPR) was affected by the chest compression rate familiarised by training before the performance and to determine a possible mechanism for any effect shown. Design Prospective crossover trial of a simulated, one-person, chest-compression-only CPR. Setting Participants were recruited from a medical school and two paramedic schools of South Korea. Participants 42 senior students of a medical school and two pa...

  12. Camera-Model Identification Using Markovian Transition Probability Matrix

    Science.gov (United States)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

    Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of Y and Cb components from the JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
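
    A sketch of the feature construction for one of the four directions is shown below, assuming the coefficient plane of one JPEG component is available as a 2-D array and using T as an assumed thresholding parameter; the paper combines four directions and both the Y and Cb planes before SVM classification, which is not reproduced here.

```python
import numpy as np

def markov_transition_features(coeff_plane, T=4):
    """Thresholded transition-probability features for the horizontal
    direction of a difference JPEG 2-D array."""
    diff = (coeff_plane[:, :-1] - coeff_plane[:, 1:]).astype(int)   # horizontal differences
    diff = np.clip(diff, -T, T)
    current, nxt = diff[:, :-1].ravel(), diff[:, 1:].ravel()        # consecutive pairs along rows
    counts = np.zeros((2 * T + 1, 2 * T + 1))
    np.add.at(counts, (current + T, nxt + T), 1.0)
    row_sums = np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
    return (counts / row_sums).ravel()                              # (2T+1)^2 features
```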

  13. Reconfigurable Secure Video Codec Based on DWT and AES Processor

    OpenAIRE

    Rached Tourki; M. Machhout; B. Bouallegue; M. Atri; M. Zeghid; D. Dia

    2010-01-01

    In this paper, we proposed a secure video codec based on the discrete wavelet transformation (DWT) and the Advanced Encryption Standard (AES) processor. Video coding with DWT and encryption with AES are each well known on their own; however, linking these two designs to achieve secure video coding is the novel step. The contributions of our work are as follows. First, a new method for image and video compression is proposed. This codec is a synthesis of JPEG and JPEG2000, which is implemented using Huffm...

  14. Atomic effect algebras with compression bases

    International Nuclear Information System (INIS)

    Caragheorgheopol, Dan; Tkadlec, Josef

    2011-01-01

    Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.

  15. Normalized compression distance of multisets with applications

    NARCIS (Netherlands)

    Cohen, A.R.; Vitányi, P.M.B.

    Pairwise normalized compression distance (NCD) is a parameter-free, feature-free, alignment-free, similarity metric based on compression. We propose an NCD of multisets that is also metric. Previously, attempts to obtain such an NCD failed. For classification purposes it is superior to the pairwise
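
    For reference, the pairwise NCD that the paper generalizes to multisets has the standard closed form sketched below; zlib is used here only as a stand-in compressor, and the multiset extension itself is not reproduced.

```python
import zlib

def compressed_len(data: bytes) -> int:
    """Length of the zlib-compressed data, standing in for the compressor C(.)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Pairwise normalized compression distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = compressed_len(x), compressed_len(y), compressed_len(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)
```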

  16. Compressive multi-mode superresolution display

    KAUST Repository

    Heide, Felix

    2014-01-01

    Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previously, research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high resolution image. © 2014 Optical Society of America.

  17. Thermal compression modulus of polarized neutron matter

    International Nuclear Information System (INIS)

    Abd-Alla, M.

    1990-05-01

    We applied the previously calculated equation of state for pure polarized neutron matter at finite temperature to compute the compression modulus. The compression modulus of pure neutron matter at zero temperature is very large and reflects the stiffness of the equation of state; it shows only a slight temperature dependence. Introducing the spin excess parameter into the equation-of-state calculations is important because it has a significant effect on the compression modulus. (author). 25 refs, 2 tabs

  18. An HVS-based location-sensitive definition of mutual information between two images

    Science.gov (United States)

    Zhu, Haijun; Wu, Huayi

    2006-10-01

    Quantitative measurement of image information content is of great importance in many image processing applications, e.g. image compression and image registration. Many commonly used metrics are defined purely mathematically. However, the ultimate consumers of images are in most situations human observers, so measures that ignore the internal mechanisms of the human visual system (HVS) may not be appropriate. This paper proposes an improved definition of mutual information between two images based on the visual information actually perceived by human beings in different subbands of the image. This definition is sensitive to the pixels' spatial locations and correlates better with human perception than mutual information calculated purely from pixels' grayscale values. Experimental results on images with different types of noise and on JPEG- and JPEG2000-compressed images are also given.
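
    For context, the classical grayscale mutual information that the paper improves upon can be estimated from a joint histogram as sketched below; the HVS/subband weighting and location sensitivity introduced by the paper are not reproduced, and the bin count is an arbitrary choice.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Classical mutual information between two grayscale images,
    estimated from their joint intensity histogram (in bits)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)          # marginal of image B
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```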

  19. Subjective Evaluation of Audiovisual Signals

    Directory of Open Access Journals (Sweden)

    F. Fikejz

    2010-01-01

    Full Text Available This paper deals with the subjective evaluation of audiovisual signals, with emphasis on the interaction between acoustic and visual quality. The subjective test is realized by a simple rating method. The audiovisual signal used in this test is a combination of images compressed with the JPEG codec and sound samples compressed with MPEG-1 Layer III. Images and sounds have various contents. This simulates a real situation in which the subject listens to compressed music and watches compressed pictures without access to the original, i.e. uncompressed, signals.

  20. Helioviewer.org: Browsing Very Large Image Archives Online Using JPEG 2000

    Science.gov (United States)

    Hughitt, V. K.; Ireland, J.; Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Schmidt, L.; Wamsler, B.; Beck, J.; Alexanderian, A.; Fleck, B.

    2009-12-01

    As the amount of solar data available to scientists continues to increase at faster and faster rates, it is important that there exist simple tools for navigating this data quickly with a minimal amount of effort. By combining heterogeneous solar physics datatypes such as full-disk images and coronagraphs, along with feature and event information, Helioviewer offers a simple and intuitive way to browse multiple datasets simultaneously. Images are stored in a repository using the JPEG 2000 format and tiled dynamically upon a client's request. By tiling images and serving only the portions of the image requested, it is possible for the client to work with very large images without having to fetch all of the data at once. In addition to a focus on intercommunication with other virtual observatories and browsers (VSO, HEK, etc), Helioviewer will offer a number of externally-available application programming interfaces (APIs) to enable easy third party use, adoption and extension. Recent efforts have resulted in increased performance, dynamic movie generation, and improved support for mobile web browsers. Future functionality will include: support for additional data-sources including RHESSI, SDO, STEREO, and TRACE, a navigable timeline of recorded solar events, social annotation, and basic client-side image processing.

  1. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal

    2010-09-01

    The size of embedded software is increasing at a rapid pace. It is often challenging and time consuming to fit an amount of required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique that is based on Huffman Coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique to improve the final compression ratio by taking advantage of both previous techniques. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman Coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of reencoding unused bits (we call them reencodable bits) in the instruction format for a specific application to improve the compression ratio. Reencoding those bits can reduce the size of decoding tables by up to 40%. Using this technique, we improve the final compression ratios in comparison to the first technique to 46% and 45% for ARM and MIPS, respectively (including all overhead incurred). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and we have applied each technique to two major embedded processor architectures
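
    The Huffman coding step underlying all three techniques can be illustrated with a generic coder over instruction patterns, as sketched below; the ISA-specific splitting, re-encoding, and decoding-table reduction of the article are not reproduced, and the function name is illustrative.

```python
import heapq
from collections import Counter

def huffman_code(patterns):
    """Build a Huffman code for a list of instruction patterns (e.g. bit-string
    fragments produced by splitting instructions).  Returns {pattern: codeword}."""
    freq = Counter(patterns)
    heap = [[weight, i, [sym, ""]] for i, (sym, weight) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]              # prefix the lighter subtree with 0
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]              # and the heavier subtree with 1
        heapq.heappush(heap, [lo[0] + hi[0], next_id] + lo[2:] + hi[2:])
        next_id += 1
    return dict((sym, code) for sym, code in heap[0][2:])
```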

  2. Compression of surface myoelectric signals using MP3 encoding.

    Science.gov (United States)

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptually based encoding scheme, traditionally used to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).
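
    The percent residual difference (PRD) mentioned above is the usual distortion measure in this literature; a generic implementation is sketched below (not the author's code).

```python
import numpy as np

def percent_residual_difference(original, reconstructed):
    """Percent residual difference (PRD) between the original and the
    decoded signal; lower values indicate less distortion."""
    x = np.asarray(original, dtype=float)
    y = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))
```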

  3. An efficient and extensible approach for compressing phylogenetic trees

    KAUST Repository

    Matthews, Suzanne J; Williams, Tiffani L

    2011-01-01

    Background: Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend

  4. Terminology: resistance or stiffness for medical compression stockings?

    Directory of Open Access Journals (Sweden)

    André Cornu-Thenard

    2013-04-01

    Full Text Available Based on previous experimental work with medical compression stockings, it is proposed to restrict the term stiffness to measurements on the human leg and rather to speak about resistance when it comes to characterizing the elastic properties of compression hosiery in the textile laboratory.

  5. Interactive Editing of GigaSample Terrain Fields

    KAUST Repository

    Treib, Marc

    2012-05-01

    Previous terrain rendering approaches have addressed the aspect of data compression and fast decoding for rendering, but applications where the terrain is repeatedly modified and needs to be buffered on disk have not been considered so far. Such applications require both decoding and encoding to be faster than disk transfer. We present a novel approach for editing gigasample terrain fields at interactive rates and high quality. To achieve high decoding and encoding throughput, we employ a compression scheme for height and pixel maps based on a sparse wavelet representation. On recent GPUs it can encode and decode up to 270 and 730 MPix/s of color data, respectively, at compression rates and quality superior to JPEG, and it achieves more than twice these rates for lossless height field compression. The construction and rendering of a height field triangulation is avoided by using GPU ray-casting directly on the regular grid underlying the compression scheme. We show the efficiency of our method for interactive editing and continuous level-of-detail rendering of terrain fields comprised of several hundreds of gigasamples. © 2012 The Author(s).

  6. New approach development for solution of cloning results detection problem in lossy saved digital image

    Directory of Open Access Journals (Sweden)

    A.A. Kobozeva

    2016-09-01

    Full Text Available The problem of detecting digital-image falsification performed by cloning is considered; cloning is one of the most frequently used manipulation tools implemented in all modern graphic editors. Aim: The aim of the work is the further development of the authors' earlier approach to detecting cloning when the cloned image is saved in a lossy format. Materials and Methods: The further development of this new approach to detecting the results of cloning in a digital image is presented. The approach is based on tracking the small changes, during compression, of the volume of a cylindrical body whose generatrix is parallel to the OZ axis, bounded above by the plot of the function interpolating the brightness matrix of the analyzed image and bounded below by the XOY plane. Results: The proposed approach is adapted to compression of the cloned image with an arbitrary compression quality factor (compression ratio). The validity of the approach is shown when the cloned image is compressed with algorithms other than the JPEG standard: JPEG2000 and compression based on low-rank approximations of the image matrix (matrix blocks). The results of a computational experiment are given. It is shown that the developed approach can also be used to detect the results of cloning in digital video under lossy compression applied after the cloning process.
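
    As a very crude numerical stand-in for the quantity tracked above, the sketch below approximates, per block, the volume under the brightness surface by the sum of pixel values; comparing these volumes before and after recompression gives the kind of per-block statistic the approach relies on. The actual interpolation and decision rule of the paper are not reproduced, and the block size is an assumed parameter.

```python
import numpy as np

def block_brightness_volumes(gray, block=8):
    """Approximate the volume under the brightness surface of each block by
    the sum of its pixel values (illustrative stand-in only)."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    g = gray[:h, :w].astype(float)
    return g.reshape(h // block, block, w // block, block).sum(axis=(1, 3))
```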

  7. Approaching maximal performance of longitudinal beam compression in induction accelerator drivers

    International Nuclear Information System (INIS)

    Mark, J.W.K.; Ho, D.D.M.; Brandon, S.T.; Chang, C.L.; Drobot, A.T.; Faltens, A.; Lee, E.P.; Krafft, G.A.

    1986-01-01

    Longitudinal beam compression is an integral part of the US induction accelerator development effort for heavy ion fusion. Producing maximal performance for key accelerator components is an essential element of the effort to reduce driver costs. We outline here initial studies directed towards defining the limits of final beam compression including considerations such as: maximal available compression, effects of longitudinal dispersion and beam emittance, combining pulse-shaping with beam compression to reduce the total number of beam manipulations, etc. The use of higher ion charge states (Z ≥ 3) is likely to test the limits of the previously envisaged beam compression and final focus hardware. A more conservative approach is to use additional beamlets in final compression and focus. On the other end of the spectrum of choices, alternate approaches might consider new final focus with greater tolerances for systematic momentum and current variations. Development of such final focus concepts would also allow more compact (and hopefully cheaper) hardware packages where the previously separate processes of beam compression, pulse-shaping and final focus occur as partially combined and nearly concurrent beam manipulations

  8. Adaptive Watermarking Scheme Using Biased Shift of Quantization Index

    Directory of Open Access Journals (Sweden)

    Young-Ho Seo

    2010-01-01

    Full Text Available We propose a watermark embedding and extracting method for blind watermarking. It uses the characteristics of a scalar quantizer to comply with the recommendations in JPEG, the MPEG series, or JPEG2000. Our method embeds a watermark bit by shifting the corresponding frequency transform coefficient (the watermark position) to a quantization index according to the value of the watermark bit, which prevents the watermark information from being lost during the data compression process. The watermark can be embedded simultaneously with the quantization process, without an additional watermarking step, which means it can be performed at the same speed as the compression process. In the embedding process, a Linear Feedback Shift Register (LFSR) is used to hide the watermark information and the watermark positions. The experimental results showed that the proposed method provides sufficient robustness and imperceptibility, the major requirements for watermarking.
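
    The core shift-to-a-quantization-index idea is essentially quantization index modulation (QIM); a minimal generic sketch is given below, without the paper's biased shift or LFSR scrambling, and with delta as an assumed quantization step.

```python
import numpy as np

def qim_embed(coeff, bit, delta=8.0):
    """Embed one watermark bit by moving the coefficient onto the even (bit 0)
    or odd (bit 1) lattice of quantization indices with step delta."""
    index = np.round(coeff / delta - bit / 2.0)
    return (index + bit / 2.0) * delta

def qim_extract(coeff, delta=8.0):
    """Recover the bit by checking which lattice the coefficient is closer to."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 0 if d0 <= d1 else 1
```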

  9. Shock absorbing properties of toroidal shells under compression, 3

    International Nuclear Information System (INIS)

    Sugita, Yuji

    1985-01-01

    The author has previously presented the static load-deflection relations of a toroidal shell subjected to axisymmetric compression between rigid plates and those of its outer half when subjected to lateral compression. In both these cases, the analytical method was based on the incremental Rayleigh-Ritz method. In this paper, the effects of compression angle and strain rate on the load-deflection relations of the toroidal shell are investigated for its use as a shock absorber for the radioactive material shipping cask which must keep its structural integrity even after accidental falls at any angle. Static compression tests have been carried out at four angles of compression, 10°, 20°, 50° and 90°, and the applications of the preceding analytical method have been discussed. Dynamic compression tests have also been performed using the free-falling drop hammer. The results are compared with those in the static compression tests. (author)

  10. Poor chest compression quality with mechanical compressions in simulated cardiopulmonary resuscitation: a randomized, cross-over manikin study.

    Science.gov (United States)

    Blomberg, Hans; Gedeborg, Rolf; Berglund, Lars; Karlsten, Rolf; Johansson, Jakob

    2011-10-01

    Mechanical chest compression devices are being implemented as an aid in cardiopulmonary resuscitation (CPR), despite lack of evidence of improved outcome. This manikin study evaluates the CPR-performance of ambulance crews, who had a mechanical chest compression device implemented in their routine clinical practice 8 months previously. The objectives were to evaluate time to first defibrillation, no-flow time, and estimate the quality of compressions. The performance of 21 ambulance crews (ambulance nurse and emergency medical technician) with the authorization to perform advanced life support was studied in an experimental, randomized cross-over study in a manikin setup. Each crew performed two identical CPR scenarios, with and without the aid of the mechanical compression device LUCAS. A computerized manikin was used for data sampling. There were no substantial differences in time to first defibrillation or no-flow time until first defibrillation. However, the fraction of adequate compressions in relation to total compressions was remarkably low in LUCAS-CPR (58%) compared to manual CPR (88%) (95% confidence interval for the difference: 13-50%). Only 12 out of the 21 ambulance crews (57%) applied the mandatory stabilization strap on the LUCAS device. The use of a mechanical compression aid was not associated with substantial differences in time to first defibrillation or no-flow time in the early phase of CPR. However, constant but poor chest compressions due to failure in recognizing and correcting a malposition of the device may counteract a potential benefit of mechanical chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  11. A checkpoint compression study for high-performance computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Ibtesham, Dewan [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science; Ferreira, Kurt B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Scalable System Software Dept.; Arnold, Dorian [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science

    2015-02-17

    As high-performance computing systems continue to increase in size and complexity, higher failure rates and increased overheads for checkpoint/restart (CR) protocols have raised concerns about the practical viability of CR protocols for future systems. Previously, compression has proven to be a viable approach for reducing checkpoint data volumes and, thereby, reducing CR protocol overhead, leading to improved application performance. In this article, we further explore compression-based CR optimization by examining its baseline performance and scaling properties, evaluating whether improved compression algorithms might lead to even better application performance and comparing checkpoint compression against and alongside other software- and hardware-based optimizations. The highlights of our results are: (1) compression is a very viable CR optimization; (2) generic, text-based compression algorithms appear to perform near optimally for checkpoint data compression and faster compression algorithms will not lead to better application performance; (3) compression-based optimizations fare well against and alongside other software-based optimizations; and (4) while hardware-based optimizations outperform software-based ones, they are not as cost effective.

  12. JHelioviewer: Taming The Torrent Of SDO Data

    Science.gov (United States)

    Mueller, Daniel; Langenberg, M.; Pagel, S.; Schmidt, L.; Garcia Ortiz, J. P.; Dimitoglou, G.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2010-05-01

    Space missions generate an ever-growing amount of data, as impressively highlighted by the Solar Dynamics Observatory's (SDO) expected return of 1.4 TByte/day. In order to fully exploit their data, scientists need to be able to browse and visualize many different data products spanning a large range of physical length and time scales. So far, the tools available to the scientific community either require downloading all potentially relevant data sets beforehand in their entirety or provide only movies with a fixed resolution and cadence. For SDO, the former approach is prohibitive due to the sheer data volume, while the latter does not do justice to the high resolution and cadence of the images. To address this challenge, we have developed JHelioviewer, a JPEG 2000-based visualization and discovery software for solar image data. Using the very efficient lossy compression mode of JPEG 2000, a full-size SDO image can be compressed to 1 MByte at good visual quality for browsing purposes. JHelioviewer will make the vast amount of SDO images available to the worldwide community in this format, which is already being used for all SOHO images. JHelioviewer is a cross-platform application that offers movie streaming, real-time frame-by-frame image processing, feature/event overlays and will enable users to access SDO science data via a VSO interface. JHelioviewer uses the JPEG 2000 Interactive Protocol (JPIP) and OpenGL. The random code stream access of JPIP minimizes data transfer by streaming image data in a region-of-interest and quality-progressive way, while OpenGL enables rapid hardware-accelerated image processing and rendering. Currently focused on solar physics data, JHelioviewer can easily be adapted for use in other areas of space and earth sciences. This poster will illustrate the new and expanded functionality of JHelioviewer and highlight the advantages of JPEG 2000 as a new compression standard for solar image data.

  13. An analysis of the efficacy of bag-valve-mask ventilation and chest compression during different compression-ventilation ratios in manikin-simulated paediatric resuscitation.

    Science.gov (United States)

    Kinney, S B; Tibballs, J

    2000-01-01

    The ideal chest compression and ventilation ratio for children during performance of cardiopulmonary resuscitation (CPR) has not been determined. The efficacy of chest compression and ventilation during compression-ventilation ratios of 5:1, 10:2 and 15:2 was examined. Eighteen nurses, working in pairs, were instructed to provide chest compression and bag-valve-mask ventilation for 1 min with each ratio in random order on a child-sized manikin. The subjects had been previously taught paediatric CPR within the last 3 or 5 months. The efficacy of ventilation was assessed by measurement of the expired tidal volume and the number of breaths provided. The rate of chest compression was guided by a metronome set at 100/min. The efficacy of chest compressions was assessed by measurement of the rate and depth of compression. There was no significant difference in the mean tidal volume or the percentage of effective chest compressions delivered for each compression-ventilation ratio. The number of breaths delivered was greatest with the ratio of 5:1. The percentage of effective chest compressions was equal with all three methods but the number of effective chest compressions was greatest with a ratio of 5:1. This study supports the use of a compression-ventilation ratio of 5:1 during two-rescuer paediatric cardiopulmonary resuscitation.

  14. Bond graph modeling of centrifugal compression systems

    OpenAIRE

    Uddin, Nur; Gravdahl, Jan Tommy

    2015-01-01

    A novel approach to model unsteady fluid dynamics in a compressor network by using a bond graph is presented. The model is intended in particular for compressor control system development. First, we develop a bond graph model of a single compression system. Bond graph modeling offers a different perspective to previous work by modeling the compression system based on energy flow instead of fluid dynamics. Analyzing the bond graph model explains the energy flow during compressor surge. Two pri...

  15. Image Size Variation Influence on Corrupted and Non-viewable BMP Image

    Science.gov (United States)

    Azmi, Tengku Norsuhaila T.; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Hamid, Isredza Rahmi A.; Chai Wen, Chuah

    2017-08-01

    Images are one of the evidence components sought in digital forensics. The Joint Photographic Experts Group (JPEG) format is the most popular on the Internet because JPEG files are very lossy and easy to compress, which speeds up transmission. However, corrupted JPEG images are hard to recover due to the complexity of determining the corruption point. Bitmap (BMP) images are often preferred in image processing over other formats because a BMP file contains all the image information in a simple format. Therefore, in order to investigate the corruption point in a JPEG, the file is first converted into BMP format. Nevertheless, many factors can corrupt a BMP image, such as changes to the image size that make the file non-viewable. The experiment in this paper indicates that the size of a BMP file influences the image itself under three conditions: deletion, replacement and insertion. From the experiment, we learnt that correcting the file size can produce a viewable, though partial, file. It can then be investigated further to identify the corruption point.
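
    One concrete corruption the study varies is the stored file size. As a hedged illustration of the "correcting the file size" step, the sketch below patches the 4-byte little-endian size field at offset 2 of the BMP header so it matches the actual on-disk length; it repairs that single field only and is not the paper's full recovery procedure.

```python
import struct
from pathlib import Path

def fix_bmp_file_size(path):
    """Rewrite the BMP header's file-size field (offset 2, little-endian uint32)
    so that it matches the actual file length.  Returns (stored, actual)."""
    data = bytearray(Path(path).read_bytes())
    if data[:2] != b"BM":
        raise ValueError("not a BMP file")
    stored = struct.unpack_from("<I", data, 2)[0]
    actual = len(data)
    if stored != actual:
        struct.pack_into("<I", data, 2, actual)
        Path(path).write_bytes(data)
    return stored, actual
```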

  16. Inelastic response of silicon to shock compression.

    Science.gov (United States)

    Higginbotham, A; Stubley, P G; Comley, A J; Eggert, J H; Foster, J M; Kalantar, D H; McGonegle, D; Patel, S; Peacock, L J; Rothman, S D; Smith, R F; Suggit, M J; Wark, J S

    2016-04-13

    The elastic and inelastic response of [001] oriented silicon to laser compression has been a topic of considerable discussion for well over a decade, yet there has been little progress in understanding the basic behaviour of this apparently simple material. We present experimental x-ray diffraction data showing complex elastic strain profiles in laser compressed samples on nanosecond timescales. We also present molecular dynamics and elasticity code modelling which suggests that a pressure induced phase transition is the cause of the previously reported 'anomalous' elastic waves. Moreover, this interpretation allows for measurement of the kinetic timescales for transition. This model is also discussed in the wider context of reported deformation of silicon to rapid compression in the literature.

  17. DNABIT Compress – Genome compression algorithm

    OpenAIRE

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress” for DNA sequences based on a novel algorithm of assigning binary bits for smaller segments of DNA bases to compress both repetitive and non repetitive DNA sequence. Our ...
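
    The basic idea of assigning fixed binary bits to the bases can be illustrated by plain 2-bit packing of A/C/G/T, as sketched below; DNABIT Compress's segment-wise codes for repetitive regions are not reproduced, and the function name is illustrative.

```python
def pack_dna_2bit(seq: str) -> bytes:
    """Pack an A/C/G/T sequence into 2 bits per base (4 bases per byte)."""
    code = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
    packed = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for j, base in enumerate(seq[i:i + 4]):
            byte |= code[base] << (2 * j)        # pack bases low-to-high within the byte
        packed.append(byte)
    return bytes(packed)
```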

  18. Detection of Copy-move Image Modification Using JPEG Compression Model

    Czech Academy of Sciences Publication Activity Database

    Novozámský, Adam; Šorel, Michal

    2018-01-01

    Roč. 283, č. 1 (2018), s. 47-57 ISSN 0379-0738 R&D Projects: GA ČR(CZ) GA16-13830S; GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords : Copy-move modification * Forgery * Image tampering * Quantization constraint set Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.989, year: 2016 http://library.utia.cas.cz/separaty/2017/ZOI/novozamsky-0483329.pdf

  19. A Fast DCT Algorithm for Watermarking in Digital Signal Processor

    Directory of Open Access Journals (Sweden)

    S. E. Tsai

    2017-01-01

    Full Text Available Discrete cosine transform (DCT) has been an international standard in the Joint Photographic Experts Group (JPEG) format to reduce the blocking effect in digital image compression. This paper proposes a fast discrete cosine transform (FDCT) algorithm that utilizes the energy compactness and matrix sparseness properties in the frequency domain to achieve higher computational performance. For a JPEG image with 8×8 block size in the spatial domain, the algorithm decomposes the two-dimensional (2D) DCT into a pair of one-dimensional (1D) DCTs, with the transform computed in only 24 multiplications. The 2D spatial data is a linear combination of the base images obtained by the outer products of the column and row vectors of cosine functions, so the inverse DCT is equally efficient. Implementation of the FDCT algorithm shows that embedding a watermark image of 32 × 32 block pixel size in a 256 × 256 digital image can be completed in only 0.24 seconds, and the extraction of the watermark by inverse transform takes within 0.21 seconds. The proposed FDCT algorithm is shown to be more computationally efficient than many previous works.
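
    The row-column decomposition referred to above can be written directly with 1-D DCTs; the sketch below uses SciPy's orthonormal DCT-II as a reference implementation and does not include the sparseness-based multiplication savings of the paper.

```python
import numpy as np
from scipy.fft import dct, idct

def dct2(block):
    """Separable 2-D DCT of an 8x8 block: 1-D DCTs over columns, then rows."""
    return dct(dct(block, type=2, norm="ortho", axis=0), type=2, norm="ortho", axis=1)

def idct2(block):
    """Matching separable inverse 2-D DCT."""
    return idct(idct(block, type=2, norm="ortho", axis=0), type=2, norm="ortho", axis=1)

# Round-trip check on a random 8x8 block.
b = np.random.rand(8, 8)
assert np.allclose(idct2(dct2(b)), b)
```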

  20. Improved waste water vapor compression distillation technology. [for Spacelab

    Science.gov (United States)

    Johnson, K. L.; Nuccio, P. P.; Reveley, W. F.

    1977-01-01

    The vapor compression distillation process is a method of recovering potable water from crewman urine in a manned spacecraft or space station. A description is presented of the research and development approach to the solution of the various problems encountered with previous vapor compression distillation units. The design solutions considered are incorporated in the preliminary design of a vapor compression distillation subsystem. The new design concepts are available for integration in the next generation of support systems and, particularly, the regenerative life support evaluation intended for project Spacelab.

  1. Hugoniot and refractive indices of bromoform under shock compression

    Science.gov (United States)

    Liu, Q. C.; Zeng, X. L.; Zhou, X. M.; Luo, S. N.

    2018-01-01

    We investigate physical properties of bromoform (liquid CHBr3) including compressibility and refractive index under dynamic extreme conditions of shock compression. Planar shock experiments are conducted along with high-speed laser interferometry. Our experiments and previous results establish a linear shock velocity-particle velocity relation for particle velocities below 1.77 km/s, as well as the Hugoniot and isentropic compression curves up to ˜21 GPa. Shock-state refractive indices of CHBr3 up to 2.3 GPa or ˜26% compression, as a function of density, can be described with a linear relation and follows the Gladstone-Dale relation. The velocity corrections for laser interferometry measurements at 1550 nm are also obtained.

  2. Hugoniot and refractive indices of bromoform under shock compression

    Directory of Open Access Journals (Sweden)

    Q. C. Liu

    2018-01-01

    Full Text Available We investigate physical properties of bromoform (liquid CHBr3 including compressibility and refractive index under dynamic extreme conditions of shock compression. Planar shock experiments are conducted along with high-speed laser interferometry. Our experiments and previous results establish a linear shock velocity−particle velocity relation for particle velocities below 1.77 km/s, as well as the Hugoniot and isentropic compression curves up to ∼21 GPa. Shock-state refractive indices of CHBr3 up to 2.3 GPa or ∼26% compression, as a function of density, can be described with a linear relation and follows the Gladstone-Dale relation. The velocity corrections for laser interferometry measurements at 1550 nm are also obtained.
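
    The two relations referred to in the records above have the standard generic forms below; c_0, s and the Gladstone-Dale constant K are material constants fitted to the data, and the fitted values themselves are not reproduced here.

```latex
% Generic forms only; c_0, s and K are fitted material constants.
\begin{align}
  U_s &= c_0 + s\,u_p   && \text{(linear shock velocity--particle velocity fit)},\\
  n(\rho) - 1 &= K\rho  && \text{(Gladstone--Dale relation for the refractive index)}.
\end{align}
```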

  3. HD Photo: a new image coding technology for digital photography

    Science.gov (United States)

    Srinivasan, Sridhar; Tu, Chengjie; Regunathan, Shankar L.; Sullivan, Gary J.

    2007-09-01

    This paper introduces the HD Photo coding technology developed by Microsoft Corporation. The storage format for this technology is now under consideration in the ITU-T/ISO/IEC JPEG committee as a candidate for standardization under the name JPEG XR. The technology was developed to address end-to-end digital imaging application requirements, particularly including the needs of digital photography. HD Photo includes features such as good compression capability, high dynamic range support, high image quality capability, lossless coding support, full-format 4:4:4 color sampling, simple thumbnail extraction, embedded bitstream scalability of resolution and fidelity, and degradation-free compressed domain support of key manipulations such as cropping, flipping and rotation. HD Photo has been designed to optimize image quality and compression efficiency while also enabling low-complexity encoding and decoding implementations. To ensure low complexity for implementations, the design features have been incorporated in a way that not only minimizes the computational requirements of the individual components (including consideration of such aspects as memory footprint, cache effects, and parallelization opportunities) but results in a self-consistent design that maximizes the commonality of functional processing components.

  4. Analytical modeling of wet compression of gas turbine systems

    International Nuclear Information System (INIS)

    Kim, Kyoung Hoon; Ko, Hyung-Jong; Perez-Blanco, Horacio

    2011-01-01

    Evaporative gas turbine cycles (EvGT) are of importance to the power generation industry because of the potential of enhanced cycle efficiencies with moderate incremental cost. Humidification of the working fluid to result in evaporative cooling during compression is a key operation in these cycles. Previous simulations of this operation were carried out via numerical integration. The present work is aimed at modeling the wet-compression process with approximate analytical solutions instead. A thermodynamic analysis of the simultaneous heat and mass transfer processes that occur during evaporation is presented. The transient behavior of important variables in wet compression such as droplet diameter, droplet mass, gas and droplet temperature, and evaporation rate is investigated. The effects of system parameters on variables such as droplet evaporation time, compressor outlet temperature and input work are also considered. Results from this work exhibit good agreement with those of previous numerical work.

  5. Salary Compression: A Time-Series Ratio Analysis of ARL Position Classifications

    Science.gov (United States)

    Seaman, Scott

    2007-01-01

    Although salary compression has previously been identified in such professional schools as engineering, business, and computer science, there is now evidence of salary compression among Association of Research Libraries members. Using salary data from the "ARL Annual Salary Survey", this study analyzes average annual salaries from 1994-1995…

  6. Comparative data compression techniques and multi-compression results

    International Nuclear Information System (INIS)

    Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H

    2013-01-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data size, the better the transmission speed and the more time is saved. In communication, we always want to transmit data efficiently and without noise. This paper presents some techniques for lossless compression of text-type data and compares the results of multiple and single compression, which will help to identify better compression output and to develop compression algorithms

  7. Tensile and compressive behavior of Borsic/aluminum

    Science.gov (United States)

    Herakovich, C. T.; Davis, J. G., Jr.; Viswanathan, C. N.

    1977-01-01

    The results of an experimental investigation of the mechanical behavior of Borsic/aluminum are presented. Composite laminates were tested in tension and compression for monotonically increasing load and also for variable loading cycles in which the maximum load was increased in each successive cycle. It is shown that significant strain-hardening, and corresponding increase in yield stress, is exhibited by the metal matrix laminates. For matrix dominated laminates, the current yield stress is essentially identical to the previous maximum stress, and unloading is essentially linear with large permanent strains after unloading. For laminates with fiber dominated behavior, the yield stress increases with increase in the previous maximum stress, but the increase in yield stress does not keep pace with the previous maximum stress. These fiber dominated laminates exhibit smaller nonlinear strains, reversed nonlinear behavior during unloading, and smaller permanent strains after unloading. Compression results from sandwich beams and flat coupons are shown to differ considerably. Results from beam specimens tend to exhibit higher values for modulus, yield stress, and strength.

  8. Neutralized drift compression experiments with a high-intensity ion beam

    International Nuclear Information System (INIS)

    Roy, P.K.; Yu, S.S.; Waldron, W.L.; Anders, A.; Baca, D.; Barnard, J.J.; Bieniosek, F.M.; Coleman, J.; Davidson, R.C.; Efthimion, P.C.; Eylon, S.; Friedman, A.; Gilson, E.P.; Greenway, W.G.; Henestroza, E.; Kaganovich, I.; Leitner, M.; Logan, B.G.; Sefkow, A.B.; Seidl, P.A.; Sharp, W.M.; Thoma, C.; Welch, D.R.

    2007-01-01

    To create high-energy density matter and fusion conditions, high-power drivers, such as lasers, ion beams, and X-ray drivers, may be employed to heat targets with short pulses compared to hydro-motion. Both high-energy density physics and ion-driven inertial fusion require the simultaneous transverse and longitudinal compression of an ion beam to achieve high intensities. We have previously studied the effects of plasma neutralization for transverse beam compression. The scaled experiment, the Neutralized Transport Experiment (NTX), demonstrated that an initially un-neutralized beam can be compressed transversely to ∼1 mm radius when charge neutralization by background plasma electrons is provided. Here, we report longitudinal compression of a velocity-tailored, intense, neutralized 25 mA K+ beam at 300 keV. The compression takes place in a 1-2 m drift section filled with plasma to provide space-charge neutralization. An induction cell produces a head-to-tail velocity ramp that longitudinally compresses the neutralized beam, enhances the beam peak current by a factor of 50 and produces a pulse duration of about 3 ns. The physics of longitudinal compression, experimental procedure, and the results of the compression experiments are presented

  9. Adiabatic compression of ion rings

    International Nuclear Information System (INIS)

    Larrabee, D.A.; Lovelace, R.V.

    1982-01-01

    A study has been made of the compression of collisionless ion rings in an increasing external magnetic field, B_e = ẑB_e(t), by numerically implementing a previously developed kinetic theory of ring compression. The theory is general in that there is no limitation on the ring geometry or the compression ratio, λ ≡ B_e(final)/B_e(initial) ≥ 1. However, the motion of a single particle in an equilibrium is assumed to be completely characterized by its energy H and canonical angular momentum P_θ, with the absence of a third constant of the motion. The present computational work assumes that plasma currents are negligible, as is appropriate for a low-temperature collisional plasma. For a variety of initial ring geometries and initial distribution functions (having a single value of P_θ), it is found that the parameters for "fat", small-aspect-ratio rings follow general scaling laws over a large range of compression ratios, 1 < λ < 10³: the ring radius varies as λ^(-1/2); the average single-particle energy as λ^0.72; the root-mean-square energy spread as λ^1.1; and the total current as λ^0.79. The field reversal parameter is found to saturate at values typically between 2 and 3. For large compression ratios the current density is found to "hollow out". This hollowing tends to improve the interchange stability of an embedded low-β plasma. The implications of these scaling laws for fusion reactor systems are discussed

  10. Relationship between the Compressive and Tensile Strength of Recycled Concrete

    International Nuclear Information System (INIS)

    El Dalati, R.; Haddad, S.; Matar, P.; Chehade, F.H

    2011-01-01

    Concrete recycling consists of crushing the concrete obtained by demolishing old constructions and of using the resulting small pieces as aggregates in new concrete compositions. The resulting aggregates are called recycled aggregates, and the new concrete mix containing a percentage of recycled aggregates is called recycled concrete. Our previous research indicated the optimal percentages of recycled aggregates to be used for different cases of recycled concrete, depending on the nature of the original aggregates. All results showed that the compressive strength of concrete is significantly reduced when recycled aggregates are used. In order to obtain realistic values of compressive strength, some tests were carried out by adding a water-reducing plasticizer and a specified additional quantity of cement. The results showed that, for a limited range of plasticizer percentages and a fixed amount of additional cement, the compressive strength reached a reasonable value. This paper treats the effect of using recycled aggregates on the tensile strength of concrete, where the concrete results from the special composition defined in our previous work. The aim is to determine the relationship between the compressive and tensile strength of recycled concrete. (author)

  11. Direct numerical simulations of premixed autoignition in compressible uniformly-sheared turbulence

    Science.gov (United States)

    Towery, Colin; Darragh, Ryan; Poludnenko, Alexei; Hamlington, Peter

    2017-11-01

    High-speed combustion systems, such as scramjet engines, operate at high temperatures and pressures, extremely short combustor residence times, very high rates of shear stress, and intense turbulent mixing. As a result, the reacting flow can be premixed and have highly-compressible turbulence fluctuations. We investigate the effects of compressible turbulence on the ignition delay time, heat-release-rate (HRR) intermittency, and mode of autoignition of premixed Hydrogen-air fuel in uniformly-sheared turbulence using new three-dimensional direct numerical simulations with a multi-step chemistry mechanism. We analyze autoignition in both the Eulerian and Lagrangian reference frames at eight different turbulence Mach numbers, Mat , spanning the quasi-isentropic, linear thermodynamic, and nonlinear compressibility regimes, with eddy shocklets appearing in the nonlinear regime. Results are compared to our previous study of premixed autoignition in isotropic turbulence at the same Mat and with a single-step reaction mechanism. This previous study found large decreases in delay times and large increases in HRR intermittency between the linear and nonlinear compressibility regimes and that detonation waves could form in both regimes.

  12. Energy Cascade Rate in Compressible Fast and Slow Solar Wind Turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Hadid, L. Z.; Sahraoui, F.; Galtier, S., E-mail: lina.hadid@lpp.polytechnique.fr [LPP, CNRS, Ecole Polytechnique, UPMC Univ Paris 06, Univ. Paris-Sud, Observatoire de Paris, Université Paris-Saclay, Sorbonne Universités, PSL Research University, F-91128 Palaiseau (France)

    2017-03-20

    Estimation of the energy cascade rate in the inertial range of solar wind turbulence has been done so far mostly within incompressible magnetohydrodynamics (MHD) theory. Here, we go beyond that approximation to include plasma compressibility using a reduced form of a recently derived exact law for compressible, isothermal MHD turbulence. Using in situ data from the THEMIS / ARTEMIS spacecraft in the fast and slow solar wind, we investigate in detail the role of the compressible fluctuations in modifying the energy cascade rate with respect to the prediction of the incompressible MHD model. In particular, we found that the energy cascade rate (1) is amplified particularly in the slow solar wind; (2) exhibits weaker fluctuations in spatial scales, which leads to a broader inertial range than the previously reported ones; (3) has a power-law scaling with the turbulent Mach number; (4) has a lower level of spatial anisotropy. Other features of solar wind turbulence are discussed along with their comparison with previous studies that used incompressible or heuristic (nonexact) compressible MHD models.

  13. Energy Cascade Rate in Compressible Fast and Slow Solar Wind Turbulence

    International Nuclear Information System (INIS)

    Hadid, L. Z.; Sahraoui, F.; Galtier, S.

    2017-01-01

    Estimation of the energy cascade rate in the inertial range of solar wind turbulence has been done so far mostly within incompressible magnetohydrodynamics (MHD) theory. Here, we go beyond that approximation to include plasma compressibility using a reduced form of a recently derived exact law for compressible, isothermal MHD turbulence. Using in situ data from the THEMIS / ARTEMIS spacecraft in the fast and slow solar wind, we investigate in detail the role of the compressible fluctuations in modifying the energy cascade rate with respect to the prediction of the incompressible MHD model. In particular, we found that the energy cascade rate (1) is amplified particularly in the slow solar wind; (2) exhibits weaker fluctuations in spatial scales, which leads to a broader inertial range than the previously reported ones; (3) has a power-law scaling with the turbulent Mach number; (4) has a lower level of spatial anisotropy. Other features of solar wind turbulence are discussed along with their comparison with previous studies that used incompressible or heuristic (nonexact) compressible MHD models.

  14. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition, without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4~2 dB compared with the current state of the art, while maintaining a low computational complexity.
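
    A toy version of the acquisition-plus-quantization pipeline is sketched below, using a Gaussian sensing matrix and a uniform quantizer on the measurements; the paper's adaptive sampling-rate selection, the specific universal quantizer design, and the entropy coding are not reproduced, and all parameter values are assumptions.

```python
import numpy as np

def cs_acquire_and_quantize(image_block, m, step=0.05, seed=0):
    """Compressive acquisition of a vectorized image block followed by
    uniform quantization of the measurements (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(image_block, dtype=float).ravel()
    phi = rng.standard_normal((m, x.size)) / np.sqrt(m)   # random sensing matrix
    y = phi @ x                                           # CS measurements
    indices = np.round(y / step).astype(np.int32)         # quantization indices to encode
    y_hat = indices * step                                # dequantized measurements
    return phi, indices, y_hat
```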

  15. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  16. Image Compression Based On Wavelet, Polynomial and Quadtree

    Directory of Open Access Journals (Sweden)

    Bushra A. SULTAN

    2011-01-01

    Full Text Available In this paper a simple and fast image compression scheme is proposed; it is based on using the wavelet transform to decompose the image signal and then using polynomial approximation to prune the smooth component of the image band. The architecture of the proposed coding scheme is highly synthetic: the error produced by the polynomial approximation, together with the detail sub-band data, is coded using both quantization and quadtree spatial coding. As the last stage of the encoding process, shift encoding is used as a simple and efficient entropy encoder to compress the outcomes of the previous stage. The test results indicate that the proposed system can produce promising compression performance while preserving the image quality level.
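
    The decompose-then-approximate front end can be illustrated as below: a wavelet decomposition followed by a least-squares fit of a low-order 2-D polynomial to the approximation (smooth) band, returning the polynomial coefficients and the residual that would then be quantized and quadtree/shift coded. The wavelet choice, level, and polynomial degree are assumed parameters, and the coding stages are not reproduced.

```python
import numpy as np
import pywt

def approximate_smooth_band(gray, wavelet="haar", level=2, degree=2):
    """Fit a low-order 2-D polynomial to the wavelet approximation band and
    return (polynomial coefficients, residual)."""
    coeffs = pywt.wavedec2(np.asarray(gray, dtype=float), wavelet, level=level)
    band = coeffs[0]
    h, w = band.shape
    yy, xx = np.mgrid[0:h, 0:w]
    terms = [xx ** i * yy ** j for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack([t.ravel() for t in terms], axis=1).astype(float)
    p, *_ = np.linalg.lstsq(A, band.ravel(), rcond=None)
    residual = band - (A @ p).reshape(h, w)
    return p, residual
```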

  17. The impact of mineral composition on compressibility of saturated soils

    OpenAIRE

    Dolinar, Bojana

    2012-01-01

    This article analyses the impact of soils' mineral composition on their compressibility. Physical and chemical properties of minerals which influence the quantity of intergrain water in soils and, consequently, the compressibility of soils are established by considering the previous theoretical findings. Test results obtained on artificially prepared samples are used to determine the analytical relationship between the water content and stress state, depending on the mineralogical properties ...

  18. Soil Compressibility Models for a Wide Stress Range

    KAUST Repository

    Chong, Song-Hun; Santamarina, Carlos

    2016-01-01

    Soil compressibility models with physically correct asymptotic void ratios are required to analyze situations that involve a wide stress range. Previously suggested models and other functions are adapted to satisfy asymptotic void ratios at low

  19. The increase of compressive strength of natural polymer modified concrete with Moringa oleifera

    Science.gov (United States)

    Susilorini, Rr. M. I. Retno; Santosa, Budi; Rejeki, V. G. Sri; Riangsari, M. F. Devita; Hananta, Yan's. Dianaga

    2017-03-01

    Polymer modified concrete is one of several concrete technology innovations that meet the need for strong and durable concrete. Previous research found that Moringa oleifera can be applied as a natural polymer modifier in mortars, and natural polymer modified mortar using Moringa oleifera was proven to increase its compressive strength significantly. In this research, Moringa oleifera seeds were ground and added into the concrete mix for natural polymer modified concrete, based on the optimum composition of the previous research. The research investigated the increase of compressive strength of polymer modified concrete with Moringa oleifera as a natural polymer modifier. Three compositions of natural polymer modified concrete with Moringa oleifera were prepared, following the optimum compositions of the previous research. Several cylindrical specimens of 10 cm x 20 cm were produced and tested for compressive strength at ages of 7, 14, and 28 days. The research reaches the following conclusions: (1) natural polymer modified concrete with Moringa oleifera, with and without skin, has higher compressive strength compared to natural polymer modified mortar with Moringa oleifera and also to control specimens; (2) the best result for natural polymer modified concrete with Moringa oleifera without skin is achieved by specimens containing Moringa oleifera at 0.2% of cement weight; and (3) the compressive strength increase of natural polymer modified concrete with Moringa oleifera without skin is about 168.11-221.29% compared to control specimens

  20. Digital echocardiography and telemedicine applications in pediatric cardiology.

    Science.gov (United States)

    Sable, Craig

    2002-01-01

    Digital echocardiography offers several advantages over videotape, including easy review, comparison, storage, postprocessing, and sharing of studies, quantitative analysis, and superior resolution. Newer echocardiography systems can write digital data to computer hardware, whereas older systems require digitization of analog data. Clinical and digital data compression is required to reduce study size. Clinical compression has been validated in several adult studies and one pediatric study. JPEG and MPEG digital compression ratios of 26:1 and 200:1, respectively, approximate S-videotape quality. JPEG is the DICOM 3.0 standard and is ideal for short loops, serial comparisons, and quantitative analysis. MPEG (the motion picture standard) lends itself to digitization of video streams and may be more attractive to pediatric cardiologists. Options for data storage and transfer range from limited local review to multiple offline review stations linked by a wide-area network. Telemedicine expands the capabilities of digital echocardiography in a "store and forward" or "real-time" format. Real-time neonatal telecardiology is accurate, impacts patient care, is cost-effective, and does not increase utilization. Cost, increased reliance on sonographers' skills, lack of accepted standards, and legal, licensure, and billing issues are obstacles to widespread acceptance of digital echocardiography and telemedicine.

  1. Compression of fingerprint data using the wavelet vector quantization image compression algorithm. 1992 progress report

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1992-04-11

    This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.

  2. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically the design of bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of the total cost of data generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and may eventually exceed it. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
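
    The abstract pairs a statistical model with arithmetic coding. The sketch below is not SeqCompress itself; it only illustrates the modelling half with a simple order-2 context model over A/C/G/T and reports the ideal code length (sum of -log2 p) that an arithmetic coder would approach. The smoothing constant and function name are assumptions.

```python
# Sketch: order-2 context model and the ideal arithmetic-coded size in bits.
import math
from collections import defaultdict

def ideal_coded_bits(seq, order=2):
    counts = defaultdict(lambda: defaultdict(int))
    bits = 0.0
    for i, symbol in enumerate(seq):
        context = seq[max(0, i - order):i]
        ctx_counts = counts[context]
        total = sum(ctx_counts.values())
        # Laplace smoothing so unseen symbols keep a nonzero probability.
        p = (ctx_counts[symbol] + 1) / (total + 4)
        bits += -math.log2(p)
        ctx_counts[symbol] += 1          # update the adaptive model
    return bits

if __name__ == "__main__":
    dna = "ACGT" * 2500 + "AAAACCCCGGGGTTTT" * 100
    print(f"{ideal_coded_bits(dna) / 8:.0f} bytes vs {len(dna) / 4:.0f} bytes at 2 bits/base")
```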

  3. Modeling DPOAE input/output function compression: comparisons with hearing thresholds.

    Science.gov (United States)

    Bhagat, Shaum P

    2014-09-01

    Basilar membrane input/output (I/O) functions in mammalian animal models are characterized by linear and compressed segments when measured near the location corresponding to the characteristic frequency. A method of studying basilar membrane compression indirectly in humans involves measuring distortion-product otoacoustic emission (DPOAE) I/O functions. Previous research has linked compression estimates from behavioral growth-of-masking functions to hearing thresholds. The aim of this study was to compare compression estimates from DPOAE I/O functions and hearing thresholds at 1 and 2 kHz. A prospective correlational research design was performed. The relationship between DPOAE I/O function compression estimates and hearing thresholds was evaluated with Pearson product-moment correlations. Normal-hearing adults (n = 16) aged 22-42 yr were recruited. DPOAE I/O functions (L₂ = 45-70 dB SPL) and two-interval forced-choice hearing thresholds were measured in normal-hearing adults. A three-segment linear regression model applied to DPOAE I/O functions supplied estimates of compression thresholds, defined as breakpoints between linear and compressed segments and the slopes of the compressed segments. Pearson product-moment correlations between DPOAE compression estimates and hearing thresholds were evaluated. A high correlation between DPOAE compression thresholds and hearing thresholds was observed at 2 kHz, but not at 1 kHz. Compression slopes also correlated highly with hearing thresholds only at 2 kHz. The derivation of cochlear compression estimates from DPOAE I/O functions provides a means to characterize basilar membrane mechanics in humans and elucidates the role of compression in tone detection in the 1-2 kHz frequency range. American Academy of Audiology.
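
    As a rough illustration of the three-segment regression described above, the sketch below grid-searches two breakpoints of a piecewise-linear fit to a toy I/O function. The synthetic data, search strategy and function names are assumptions, not the study's procedure.

```python
# Sketch: estimate breakpoints and segment slopes of a three-segment linear fit.
import numpy as np

def fit_three_segments(L2, dpoae):
    best = None
    # Try all pairs of interior breakpoints and keep the fit with the lowest SSE.
    for i in range(2, len(L2) - 4):
        for j in range(i + 2, len(L2) - 2):
            sse, slopes = 0.0, []
            for lo, hi in ((0, i + 1), (i, j + 1), (j, len(L2))):
                coef = np.polyfit(L2[lo:hi], dpoae[lo:hi], 1)
                sse += np.sum((np.polyval(coef, L2[lo:hi]) - dpoae[lo:hi]) ** 2)
                slopes.append(coef[0])
            if best is None or sse < best[0]:
                best = (sse, L2[i], L2[j], slopes)
    _, bp1, bp2, slopes = best
    return bp1, bp2, slopes   # breakpoints (dB SPL) and per-segment slopes

if __name__ == "__main__":
    L2 = np.arange(45, 70.1, 2.5)                            # stimulus levels
    io = np.where(L2 < 55, L2 - 40, 0.3 * (L2 - 55) + 15)    # toy I/O function
    print(fit_three_segments(L2, io + np.random.normal(0, 0.2, L2.size)))
```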

  4. Splanchnic Compression Improves the Efficacy of Compression Stockings to Prevent Orthostatic Intolerance

    Science.gov (United States)

    Platts, Steven H.; Brown, A. K.; Lee, S. M.; Stenger, M. B.

    2009-01-01

    Purpose: Post-spaceflight orthostatic intolerance (OI) is observed in 20-30% of astronauts. Previous data from our laboratory suggest that this is largely a result of decreased venous return. Currently, NASA astronauts wear an anti-gravity suit (AGS), which consists of inflatable air bladders over the calves, thighs and abdomen, typically pressurized from 26 to 78 mmHg. We recently determined that thigh-high graded compression stockings (JOBST(R), 55 mmHg at the ankle, 6 mmHg at the top of the thigh) were effective, though to a lesser degree than the AGS. The purpose of this study was to evaluate the addition of splanchnic compression to prevent orthostatic intolerance. Methods: Ten healthy volunteers (6M, 4F) participated in three 80° head-up tilts on separate days while (1) normovolemic, (2) hypovolemic w/ breast-high compression stockings (BS) (JOBST(R), 55 mmHg at the ankle, 6 mmHg at the top of the thigh, 12 mmHg over the abdomen), and (3) hypovolemic w/o stockings. Hypovolemia was induced by IV infusion of furosemide (0.5 mg/kg) and 48 hrs of a low salt diet to simulate the plasma volume loss following space flight. Hypovolemic testing occurred 24 and 48 hrs after furosemide. One-way repeated measures ANOVA, with Bonferroni corrections, was used to test for differences in blood pressure and heart rate responses to head-up tilt; stand times were compared using a Kaplan-Meier survival analysis. Results: BS were effective in preventing OI and presyncope in hypovolemic test subjects (p = 0.015). BS prevented the decrease in systolic blood pressure seen during tilt in normovolemia (p < 0.001) and hypovolemia w/o countermeasure (p = 0.005). BS also prevented the decrease in diastolic blood pressure seen during tilt in normovolemia (p = 0.006) and hypovolemia w/o countermeasure (p = 0.041). Hypovolemia w/o countermeasure showed a higher tilt-induced heart rate increase (p = 0.022) than seen in normovolemia; heart rate while wearing BS was not different from normovolemia (p = 0.353). Conclusion: BS may

  5. Multiband and Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Raffaele Pizzolante

    2016-02-01

    Full Text Available Hyperspectral images are widely used in several real-life applications. In this paper, we investigate the compression of hyperspectral images by considering different aspects, including the optimization of the computational complexity in order to allow implementations on limited hardware (i.e., hyperspectral sensors, etc.). We present an approach that relies on a three-dimensional predictive structure. Our predictive structure, 3D-MBLP, uses one or more previous bands as references to exploit the redundancies along the third dimension. The achieved results are comparable to, and often better than, other state-of-the-art lossless compression techniques for hyperspectral images.
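
    A minimal sketch of the general idea of using a previous band as a reference: each band is predicted from its predecessor with a least-squares gain and offset, leaving small integer residuals for entropy coding. The predictor, the synthetic cube and the function name are illustrative assumptions; 3D-MBLP's actual prediction and context modelling are more elaborate.

```python
# Sketch: inter-band prediction residuals for a hyperspectral cube.
import numpy as np

def interband_residuals(cube):
    """cube: (bands, rows, cols) integer array."""
    residuals = [cube[0].astype(np.int32)]          # first band stored as-is
    for b in range(1, cube.shape[0]):
        ref = cube[b - 1].astype(float).ravel()
        cur = cube[b].astype(float).ravel()
        # Least-squares fit cur ~ gain * ref + offset using the previous band.
        A = np.column_stack([ref, np.ones_like(ref)])
        (gain, offset), *_ = np.linalg.lstsq(A, cur, rcond=None)
        pred = np.rint(gain * ref + offset)
        residuals.append((cur - pred).astype(np.int32).reshape(cube[b].shape))
    return residuals

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.integers(0, 4096, size=(64, 64))
    cube = np.stack([base + b * 3 + rng.integers(-2, 3, size=base.shape)
                     for b in range(8)])
    res = interband_residuals(cube)
    print("residual std per predicted band:", [int(r.std()) for r in res[1:]])
```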

  6. A stable penalty method for the compressible Navier-Stokes equations: II: One-dimensional domain decomposition schemes

    DEFF Research Database (Denmark)

    Hesthaven, Jan

    1997-01-01

    This paper presents asymptotically stable schemes for patching of nonoverlapping subdomains when approximating the compressible Navier-Stokes equations given on conservation form. The scheme is a natural extension of a previously proposed scheme for enforcing open boundary conditions, and as a result the patching of subdomains is local in space. The scheme is studied in detail for Burgers's equation and developed for the compressible Navier-Stokes equations in general curvilinear coordinates. The versatility of the proposed scheme for the compressible Navier-Stokes equations is illustrated...

  7. Digital storage and analysis of color Doppler echocardiograms

    Science.gov (United States)

    Chandra, S.; Thomas, J. D.

    1997-01-01

    Color Doppler flow mapping has played an important role in clinical echocardiography. Most of the clinical work, however, has been primarily qualitative. Although qualitative information is very valuable, there is considerable quantitative information stored within the velocity map that has not been extensively exploited so far. Recently, many researchers have shown interest in using the encoded velocities to address clinical problems such as quantification of valvular regurgitation, calculation of cardiac output, and characterization of ventricular filling. In this article, we review some basic physics and engineering aspects of color Doppler echocardiography, as well as drawbacks of trying to retrieve velocities from video tape data. Digital storage, which plays a critical role in performing quantitative analysis, is discussed in some detail with special attention to velocity encoding in DICOM 3.0 (medical image storage standard) and the use of digital compression. Lossy compression can considerably reduce file size with minimal loss of information (mostly redundant); this is critical for digital storage because of the enormous amount of data generated (a 10 minute study could require 18 Gigabytes of storage capacity). Lossy JPEG compression and its impact on quantitative analysis have been studied, showing that images compressed at 27:1 using the JPEG algorithm compare favorably with directly digitized video images, the current gold standard. Some potential applications of these velocities in analyzing the proximal convergence zones, mitral inflow, and some areas of future development are also discussed in the article.

  8. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of from 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512 have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the reconstructed image from a given compression ratio, is used as a global measurement on the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.

  9. Compressed Subsequence Matching and Packed Tree Coloring

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2017-01-01

    We present a new algorithm for subsequence matching in grammar compressed strings. Given a grammar of size n compressing a string of size N and a pattern string of size m over an alphabet of size σ, our algorithm uses O(n + nσ/w) space and O(n + nσ/w + m log N log w · occ) or O(n + (nσ/w) log w + m log N · occ) time. Here w is the word size and occ is the number of minimal occurrences of the pattern. Our algorithm uses less space than previous algorithms and is also faster for occ = o(n / log N) occurrences. The algorithm uses a new data structure that allows us to efficiently find the next occurrence of a given character after a given position in a compressed string. This data structure in turn is based on a new data structure for the tree color problem, where the node colors are packed in bit strings.

  10. Semi-confined compression of microfabricated polymerized biomaterial constructs

    International Nuclear Information System (INIS)

    Moraes, Christopher; Likhitpanichkul, Morakot; Simmons, Craig A; Sun, Yu; Zhao, Ruogang

    2011-01-01

    Mechanical forces are critical parameters in engineering functional tissue because of their established influence on cellular behaviour. However, identifying ideal combinations of mechanical, biomaterial and chemical stimuli to obtain a desired cellular response requires high-throughput screening technologies, which may be realized through microfabricated systems. This paper reports on the development and characterization of a MEMS device for semi-confined biomaterial compression. An array of these devices would enable studies involving mechanical deformation of three-dimensional biomaterials, an important parameter in creating physiologically relevant microenvironments in vitro. The described device has the ability to simultaneously apply a range of compressive mechanical stimuli to multiple polymerized hydrogel microconstructs. Local micromechanical strains generated within the semi-confined hydrogel cylinders are characterized and compared with those produced in current micro- and macroscale technologies. In contrast to previous work generating unconfined compression in microfabricated devices, the semi-confined compression model used in this work generates uniform regions of strain within the central portion of each hydrogel, demonstrated here to range from 20% to 45% across the array. The uniform strains achieved simplify experimental analysis and improve the utility of the compression platform. Furthermore, the system is compatible with a wide variety of polymerizable biomaterials, enhancing device versatility and usability in tissue engineering and fundamental cell biology studies

  11. Optically compressed sensing by under sampling the polar Fourier plane

    International Nuclear Information System (INIS)

    Stern, A; Levi, O; Rivenson, Y

    2010-01-01

    In a previous work we presented a compressed imaging approach that uses a row of rotating sensors to capture indirectly polar strips of the Fourier transform of the image. Here we present further developments of this technique and present new results. The advantages of our technique, compared to other optically compressed imaging techniques, are that its optical implementation is relatively easy, it does not require complicated calibrations, and it can be implemented in near-real time.

  12. Bacterial survival following shock compression in the GigaPascal range

    Science.gov (United States)

    Hazael, Rachael; Fitzmaurice, Brianna C.; Foglia, Fabrizia; Appleby-Thomas, Gareth J.; McMillan, Paul F.

    2017-09-01

    The possibility that life can exist within previously unconsidered habitats is causing us to expand our understanding of potential planetary biospheres. Significant populations of living organisms have been identified at depths extending up to several km below the Earth's surface; whereas laboratory experiments have shown that microbial species can survive following exposure to GigaPascal (GPa) pressures. Understanding the degree to which simple organisms such as microbes survive such extreme pressurization under static compression conditions is being actively investigated. The survival of bacteria under dynamic shock compression is also of interest. Such studies are being partly driven to test the hypothesis of potential transport of biological organisms between planetary systems. Shock compression is also of interest for the potential modification and sterilization of foodstuffs and agricultural products. Here we report the survival of Shewanella oneidensis bacteria exposed to dynamic (shock) compression. The samples examined included: (a) a "wild type" (WT) strain and (b) a "pressure adapted" (PA) population obtained by culturing survivors from static compression experiments to 750 MPa. Following exposure to peak shock pressures of 1.5 and 2.5 GPa the proportion of survivors was established as the number of colony forming units (CFU) present after recovery to ambient conditions. The data were compared with previous results in which the same bacterial samples were exposed to static pressurization to the same pressures, for 15 minutes each. The results indicate that shock compression leads to survival of a significantly greater proportion of both WT and PA organisms. The significantly shorter duration of the pressure pulse during the shock experiments (2-3 μs) likely contributes to the increased survival of the microbial species. One reason for this can involve the crossover from deformable to rigid solid-like mechanical relaxational behavior that occurs for

  13. Two-way shape memory effect induced by repetitive compressive loading cycles

    International Nuclear Information System (INIS)

    Kim, Hyun-Chul; Yoo, Young-Ik; Lee, Jung-Ju

    2009-01-01

    The NiTi alloy can be trained by repetitive loading or heating cycles. As a result of the training, a two-way shape memory effect (TWSME) can be induced. Considerable research has been reported regarding the TWSME trained by tensile loading. However, the TWSME trained by compressive loading has not been investigated nearly as much. In this paper, the TWSME is induced by compressive loading cycles and the two-way shape memory strain is evaluated by using two types of specimen: a solid cylinder type and a tube type. The TWSME trained by compressive loading is different from that trained by tensile loading owing to the severe tension/compression asymmetry as described in previous research. After repetitive compressive loading cycles, strain variation upon cooling is observed, and this result proves that the TWSME is induced by compressive loading cycles. By performing compressive loading cycles, plastic deformation in NiTi alloy occurs more than for tensile loading cycles, which brings about the appearance of TWSME. It can be said that the TWSME is induced by compressive loading cycles more easily. The two-way shape memory strain increases linearly as the maximum strain of compressive loading cycles increases, regardless of the shape and the size of the NiTi alloy; this two-way shape memory strain then shows a tendency towards saturation after some repeated cycles

  14. Composition-Structure-Property Relations of Compressed Borosilicate Glasses

    Science.gov (United States)

    Svenson, Mouritz N.; Bechgaard, Tobias K.; Fuglsang, Søren D.; Pedersen, Rune H.; Tjell, Anders Ø.; Østergaard, Martin B.; Youngman, Randall E.; Mauro, John C.; Rzoska, Sylwester J.; Bockowski, Michal; Smedskjaer, Morten M.

    2014-08-01

    Hot isostatic compression is an interesting method for modifying the structure and properties of bulk inorganic glasses. However, the structural and topological origins of the pressure-induced changes in macroscopic properties are not yet well understood. In this study, we report on the pressure and composition dependences of density and micromechanical properties (hardness, crack resistance, and brittleness) of five soda-lime borosilicate glasses with constant modifier content, covering the extremes from Na-Ca borate to Na-Ca silicate end members. Compression experiments are performed at pressures ≤1.0 GPa at the glass transition temperature in order to allow processing of large samples with relevance for industrial applications. In line with previous reports, we find an increasing fraction of tetrahedral boron, density, and hardness but a decreasing crack resistance and brittleness upon isostatic compression. Interestingly, a strong linear correlation between plastic (irreversible) compressibility and initial trigonal boron content is demonstrated, as the trigonal boron units are the ones most disposed for structural and topological rearrangements upon network compaction. A linear correlation is also found between plastic compressibility and the relative change in hardness with pressure, which could indicate that the overall network densification is responsible for the increase in hardness. Finally, we find that the micromechanical properties exhibit significantly different composition dependences before and after pressurization. The findings have important implications for tailoring microscopic and macroscopic structures of glassy materials and thus their properties through the hot isostatic compression method.

  15. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Zhouzhou Liu

    2015-01-01

    Full Text Available To address the problems of low compression ratio and high communication energy consumption in wireless-network microseismic monitoring, this paper proposes a segmented compression algorithm based on the characteristics of microseismic signals and on compressed sensing (CS) theory, applied during the transmission process. The algorithm segments the collected data according to the number of nonzero elements; by reducing the number of combinations of nonzero elements within each segment, it improves the accuracy of signal reconstruction, while the compressed sensing framework is exploited to achieve a high compression ratio. Experimental results show that, with the quantum chaos immune clone reconstruction (Q-CSDR) algorithm used for reconstruction, when the signal sparsity is higher than 40 and the compression ratio is more than 0.4, the mean square error is less than 0.01 and the network lifetime is prolonged by a factor of 2.
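
    The record above builds on compressed sensing; the sketch below only demonstrates that underlying principle (random measurements of a sparse signal recovered by a greedy solver from scikit-learn), not the paper's segmentation scheme or its Q-CSDR reconstruction algorithm. Signal length, measurement count and sparsity are arbitrary choices.

```python
# Sketch: compressed sensing of a sparse signal with OMP recovery.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n, m, k = 256, 96, 10                     # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)     # sparse toy signal

Phi = rng.normal(0, 1.0 / np.sqrt(m), size=(m, n))            # random sensing matrix
y = Phi @ x                                                    # compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi, y)
x_hat = omp.coef_

print("compression ratio:", m / n)
print("reconstruction MSE:", float(np.mean((x - x_hat) ** 2)))
```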

  16. Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery

    Science.gov (United States)

    Xie, Hua; Klimesh, Matthew A.

    2009-01-01

    This work extends the lossless data compression technique described in Fast Lossless Compression of Multispectral- Image Data, (NPO-42517) NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26. The original technique was extended to include a near-lossless compression option, allowing substantially smaller compressed file sizes when a small amount of distortion can be tolerated. Near-lossless compression is obtained by including a quantization step prior to encoding of prediction residuals. The original technique uses lossless predictive compression and is designed for use on multispectral imagery. A lossless predictive data compression algorithm compresses a digitized signal one sample at a time as follows: First, a sample value is predicted from previously encoded samples. The difference between the actual sample value and the prediction is called the prediction residual. The prediction residual is encoded into the compressed file. The decompressor can form the same predicted sample and can decode the prediction residual from the compressed file, and so can reconstruct the original sample. A lossless predictive compression algorithm can generally be converted to a near-lossless compression algorithm by quantizing the prediction residuals prior to encoding them. In this case, since the reconstructed sample values will not be identical to the original sample values, the encoder must determine the values that will be reconstructed and use these values for predicting later sample values. The technique described here uses this method, starting with the original technique, to allow near-lossless compression. The extension to allow near-lossless compression adds the ability to achieve much more compression when small amounts of distortion are tolerable, while retaining the low complexity and good overall compression effectiveness of the original algorithm.
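
    A minimal sketch of the predict / quantize-residual / reconstruct loop the abstract walks through, for a 1-D signal with a previous-sample predictor. The actual technique predicts multispectral samples from spatial and spectral neighbours and entropy-codes the residuals; both are omitted here, and the quantizer step size is the standard 2E+1 choice for a maximum error E.

```python
# Sketch: near-lossless predictive coding with quantized prediction residuals.
import numpy as np

def near_lossless_encode(samples, max_error=2):
    step = 2 * max_error + 1
    recon_prev = 0
    quantized = []
    for s in samples:
        pred = recon_prev                      # predict from reconstructed past
        residual = int(s) - pred
        q = int(np.round(residual / step))     # quantized prediction residual
        quantized.append(q)                    # this is what would be entropy coded
        recon_prev = pred + q * step           # encoder tracks decoder reconstruction
    return quantized

def near_lossless_decode(quantized, max_error=2):
    step = 2 * max_error + 1
    recon, prev = [], 0
    for q in quantized:
        prev = prev + q * step
        recon.append(prev)
    return recon

if __name__ == "__main__":
    data = (100 + 20 * np.sin(np.arange(50) / 5)).astype(int)
    q = near_lossless_encode(data, max_error=2)
    rec = near_lossless_decode(q, max_error=2)
    print("max abs error:", int(np.max(np.abs(np.array(rec) - data))))  # <= 2
```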

  17. The VELOCE pulsed power generator for isentropic compression experiments

    Energy Technology Data Exchange (ETDEWEB)

    Ao, Tommy [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Dynamic Material Properties; Asay, James Russell [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Dynamic Material Properties; Chantrenne, Sophie J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Dynamic Material Properties; Hickman, Randall John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Dynamic Material Properties; Willis, Michael David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Dynamic Material Properties; Shay, Andrew W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Dynamic Material Properties; Grine-Jones, Suzi A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Dynamic Material Properties; Hall, Clint Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Dynamic Material Properties; Baer, Melvin R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Engineering Sciences Center

    2007-12-01

    Veloce is a medium-voltage, high-current, compact pulsed power generator developed for isentropic and shock compression experiments. Because of its increased availability and ease of operation, Veloce is well suited for studying isentropic compression experiments (ICE) in much greater detail than previously allowed with larger pulsed power machines such as the Z accelerator. Since the compact pulsed power technology used for dynamic material experiments has not been previously used, it is necessary to examine several key issues to ensure that accurate results are obtained. In the present experiments, issues such as panel and sample preparation, uniformity of loading, and edge effects were extensively examined. In addition, magnetohydrodynamic (MHD) simulations using the ALEGRA code were performed to interpret the experimental results and to design improved sample/panel configurations. Examples of recent ICE studies on aluminum are presented.

  18. Modelling and simulation of the compressible turbulence in supersonic shear flows

    International Nuclear Information System (INIS)

    Guezengar, Dominique

    1997-02-01

    This research thesis addresses the modelling of some specific physical problems of fluid mechanics: compressibility (issue of mixing layers), large variations of volumetric mass (boundary layers), and anisotropy (compression ramps). After a presentation of the chosen physical modelling and numerical approximation, the author pays attention to flows at the vicinity of a wall, and to boundary conditions. The next part addresses existing compressibility models and their application to the calculation of supersonic mixing layers. A critical assessment is also performed through calculations of boundary layers and of compression ramps. The next part addresses problems related to large variations of volumetric mass which are not taken by compressibility models into account. A modification is thus proposed for the diffusion term, and is tested for the case of supersonic boundary layers and of mixing layers with high density rates. Finally, anisotropy effects are addressed through the implementation of Explicit Algebraic Stress k-omega Turbulence models (EARSM), and their tests on previously studied cases [fr

  19. comparative analysis of the compressive strength of hollow

    African Journals Online (AJOL)

    user

    2016-04-02

    Apr 2, 2016 ... Previous analysis showed that cavity size and number on one hand and combinations thickness affect the compressive strength of hollow sandcrete blocks. Series arrangement of the cavities is common but parallel arrangement has been recommended. This research performed a comparative analysis of ...

  20. Dual compression is not an uncommon type of iliac vein compression syndrome.

    Science.gov (United States)

    Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu

    2017-09-01

    Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We described an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used for evaluating the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the maximum compression point, pressure gradient across the stenosis, and the percentage of compression level. On CT and venography, sole compression commonly presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented as two types: one had a lengthy stenosis along the upper side of the LCIV and the other manifested as a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression seemed significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can thus present with dual compression. This type of compression has typical manifestations on late venography and CT.

  1. Effective radiation attenuation calibration for breast density: compression thickness influences and correction

    OpenAIRE

    Thomas Jerry A; Cao Ke; Heine John J

    2010-01-01

    Abstract Background Calibrating mammograms to produce a standardized breast density measurement for breast cancer risk analysis requires an accurate spatial measure of the compressed breast thickness. Thickness inaccuracies due to the nominal system readout value and compression paddle orientation induce unacceptable errors in the calibration. Method A thickness correction was developed and evaluated using a fully specified two-component surrogate breast model. A previously developed calibrat...

  2. Image Quality Assessment for Different Wavelet Compression Techniques in a Visual Communication Framework

    Directory of Open Access Journals (Sweden)

    Nuha A. S. Alwan

    2013-01-01

    Full Text Available Images with subband coding and threshold wavelet compression are transmitted over a Rayleigh communication channel with additive white Gaussian noise (AWGN), after quantization and 16-QAM modulation. A comparison is made between these two types of compression using both mean square error (MSE) and structural similarity (SSIM) image quality assessment (IQA) criteria applied to the reconstructed image at the receiver. The two methods yielded comparable SSIM but different MSE measures. In this work, we justify our results which support previous findings in the literature that the MSE between two images is not indicative of structural similarity or the visibility of errors. It is found that it is difficult to reduce the pointwise errors in subband-compressed images (higher MSE). However, the compressed images provide comparable SSIM or perceived quality for both types of compression provided that the retained energy after compression is the same.
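
    A small sketch of the two quality measures the paper compares, computed with scikit-image on a synthetic image: additive noise and a small uniform offset can give similar MSE but very different SSIM, which is the kind of divergence the abstract refers to. The test images and parameter values are assumptions, not the paper's experiment.

```python
# Sketch: MSE vs SSIM can rank two distortions differently.
import numpy as np
from skimage.metrics import structural_similarity, mean_squared_error

rng = np.random.default_rng(0)
ref = np.tile(np.linspace(0, 1, 128), (128, 1))                 # smooth reference image

noisy = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)     # additive noise
shifted = np.clip(ref + 0.05, 0, 1)                             # small uniform offset

for name, img in [("noise", noisy), ("offset", shifted)]:
    mse = mean_squared_error(ref, img)
    ssim = structural_similarity(ref, img, data_range=1.0)
    print(f"{name}: MSE={mse:.5f}  SSIM={ssim:.3f}")
```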

  3. An efficient and extensible approach for compressing phylogenetic trees.

    Science.gov (United States)

    Matthews, Suzanne J; Williams, Tiffani L

    2011-10-18

    Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm by handling trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92%(weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases. Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease of which set operations can be performed on TRZ files, at speeds quicker than those performed on Newick or 7zip compressed Newick files, and without loss of space savings. TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allow it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community.

  4. Dynamic compressibility of air in porous structures at audible frequencies

    DEFF Research Database (Denmark)

    Lafarge, Denis; Lemarinier, Pavel; Allard, Jean F.

    1997-01-01

    Measurements of dynamic compressibility of air-filled porous sound-absorbing materials are compared with predictions involving two parameters, the static thermal permeability k'_0 and the thermal characteristic dimension Λ'. Emphasis is placed on the notion of dynamic and static thermal permeability ... of the viscous forces. Using both parameters, a simple model is constructed for the dynamic thermal permeability k', which is completely analogous to the Johnson et al. [J. Fluid Mech. vol. 176, 379 (1987)] model of dynamic viscous permeability k. The resultant modeling of dynamic compressibility provides predictions which are closer to the experimental results than the previously used simpler model where the compressibility is the same as in identical circular cross-sectional shaped pores, or distributions of slits, related to a given Λ'.

  5. Video segmentation for post-production

    Science.gov (United States)

    Wills, Ciaran

    2001-12-01

    Specialist post-production is an industry that has much to gain from the application of content-based video analysis techniques. However the types of material handled in specialist post-production, such as television commercials, pop music videos and special effects are quite different in nature from the typical broadcast material which many video analysis techniques are designed to work with; shots are short and highly dynamic, and the transitions are often novel or ambiguous. We address the problem of scene change detection and develop a new algorithm which tackles some of the common aspects of post-production material that cause difficulties for past algorithms, such as illumination changes and jump cuts. Operating in the compressed domain on Motion JPEG compressed video, our algorithm detects cuts and fades by analyzing each JPEG macroblock in the context of its temporal and spatial neighbors. Analyzing the DCT coefficients directly we can extract the mean color of a block and an approximate detail level. We can also perform an approximated cross-correlation between two blocks. The algorithm is part of a set of tools being developed to work with an automated asset management system designed specifically for use in post-production facilities.
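
    A minimal sketch of the compressed-domain features mentioned above: for an 8x8 block, the DC coefficient of the DCT gives the block mean and the AC energy gives an approximate detail level. The scaling follows scipy's orthonormal DCT-II rather than the JPEG bitstream itself, and the function name is illustrative.

```python
# Sketch: block mean and detail level read directly from DCT coefficients.
import numpy as np
from scipy.fft import dctn

def block_features(block):
    coeffs = dctn(block.astype(float), norm="ortho")          # 2-D DCT of the 8x8 block
    mean = coeffs[0, 0] / 8.0                                  # DC term -> block mean
    detail = float(np.sum(coeffs ** 2) - coeffs[0, 0] ** 2)    # AC energy -> detail level
    return mean, detail

if __name__ == "__main__":
    flat = np.full((8, 8), 120.0)
    textured = flat + np.random.default_rng(0).normal(0, 25, (8, 8))
    print("flat:    ", block_features(flat))
    print("textured:", block_features(textured))
```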

  6. High Compressive Stresses Near the Surface of the Sierra Nevada, California

    Science.gov (United States)

    Martel, S. J.; Logan, J. M.; Stock, G. M.

    2012-12-01

    Observations and stress measurements in granitic rocks of the Sierra Nevada, California reveal strong compressive stresses parallel to the surface of the range at shallow depths. New overcoring measurements show high compressive stresses at three locations along an east-west transect through Yosemite National Park. At the westernmost site (west end of Tenaya Lake), the mean compressive stress is 1.9 MPa. At the middle site (north shore of Tenaya Lake) the mean compressive stress is 6.8 MPa. At the easternmost site (south side of Lembert Dome) the mean compressive stress is 3.0 MPa. The trend of the most compressive stress at these sites is within ~30° of the strike of the local topographic surface. Previously published hydraulic fracturing measurements by others elsewhere in the Sierra Nevada indicate surface-parallel compressive stresses of several MPa within several tens of meters of the surface, with the stress magnitudes generally diminishing to the west. Both the new and the previously published compressive stress magnitudes are consistent with the presence of sheeting joints (i.e., "exfoliation joints") in the Sierra Nevada, which require lateral compressive stresses of several MPa to form. These fractures are widespread: they are distributed in granitic rocks from the north end of the range to its southern tip and across the width of the range. Uplift along the normal faults of the eastern escarpment, recently measured by others at ~1-2 mm/yr, probably contributes to these stresses substantially. Geodetic surveys reveal that normal faulting flexes a range concave upwards in response to fault slip, and this flexure is predicted by elastic dislocation models. The topographic relief of the eastern escarpment of the Sierra Nevada is 2-4 km, and since alluvial fill generally buries the bedrock east of the faults, the offset of granitic rocks is at least that much. Compressive stresses of several MPa are predicted by elastic dislocation models of the range front

  7. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    Energy Technology Data Exchange (ETDEWEB)

    Di, Sheng; Cappello, Franck

    2018-01-01

    Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
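
    A small sketch of one ingredient named in the abstract, the XOR-leading-zero length between two consecutive double-precision values; nearby values share high-order bits, which is what the paper's offset-shifting optimization (not reproduced here) tries to maximize.

```python
# Sketch: shared leading bits between consecutive doubles via XOR of raw bit patterns.
import struct

def xor_leading_zeros(a: float, b: float) -> int:
    ia = struct.unpack("<Q", struct.pack("<d", a))[0]    # raw 64-bit patterns
    ib = struct.unpack("<Q", struct.pack("<d", b))[0]
    x = ia ^ ib
    return 64 if x == 0 else 64 - x.bit_length()         # number of shared leading bits

if __name__ == "__main__":
    print(xor_leading_zeros(1.0000001, 1.0000002))   # many shared leading bits
    print(xor_leading_zeros(1.0, -250.7))            # few shared leading bits
```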

  8. The impact of chest compression rates on quality of chest compressions - a manikin study.

    Science.gov (United States)

    Field, Richard A; Soar, Jasmeet; Davies, Robin P; Akhtar, Naheed; Perkins, Gavin D

    2012-03-01

    Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Twenty healthcare professionals performed 2 min of continuous compressions on an instrumented manikin at rates of 80, 100, 120, 140 and 160 min(-1) in a random order. An electronic metronome was used to guide compression rate. Compression data were analysed by repeated measures ANOVA and are presented as mean (SD). Non-parametric data was analysed by Friedman test. At faster compression rates there were significant improvements in the number of compressions delivered (160(2) at 80 min(-1) vs. 312(13) compressions at 160 min(-1), P<0.001); and compression duty-cycle (43(6)% at 80 min(-1) vs. 50(7)% at 160 min(-1), P<0.001). This was at the cost of a significant reduction in compression depth (39.5(10)mm at 80 min(-1) vs. 34.5(11)mm at 160 min(-1), P<0.001); and earlier decay in compression quality (median decay point 120 s at 80 min(-1) vs. 40s at 160 min(-1), P<0.001). Additionally not all participants achieved the target rate (100% at 80 min(-1) vs. 70% at 160 min(-1)). Rates above 120 min(-1) had the greatest impact on reducing chest compression quality. For Guidelines 2005 trained rescuers, a chest compression rate of 100-120 min(-1) for 2 min is feasible whilst maintaining adequate chest compression quality in terms of depth, duty-cycle, leaning, and decay in compression performance. Further studies are needed to assess the impact of the Guidelines 2010 recommendation for deeper and faster chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  9. Effects of dynamic range compression on spatial selective auditory attention in normal-hearing listeners.

    Science.gov (United States)

    Schwartz, Andrew H; Shinn-Cunningham, Barbara G

    2013-04-01

    Many hearing aids introduce compressive gain to accommodate the reduced dynamic range that often accompanies hearing loss. However, natural sounds produce complicated temporal dynamics in hearing aid compression, as gain is driven by whichever source dominates at a given moment. Moreover, independent compression at the two ears can introduce fluctuations in interaural level differences (ILDs) important for spatial perception. While independent compression can interfere with spatial perception of sound, it does not always interfere with localization accuracy or speech identification. Here, normal-hearing listeners reported a target message played simultaneously with two spatially separated masker messages. We measured the amount of spatial separation required between the target and maskers for subjects to perform at threshold in this task. Fast, syllabic compression that was independent at the two ears increased the required spatial separation, but linking the compressors to provide identical gain to both ears (preserving ILDs) restored much of the deficit caused by fast, independent compression. Effects were less clear for slower compression. Percent-correct performance was lower with independent compression, but only for small spatial separations. These results may help explain differences in previous reports of the effect of compression on spatial perception of sound.
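
    A toy illustration of why independent compression at the two ears can distort interaural level differences (ILDs) while linked compression preserves them, using a simple static compression curve; the threshold, ratio and sound levels are illustrative assumptions, not the study's hearing-aid processing.

```python
# Sketch: ILD under independent vs linked dynamic range compression.
import numpy as np

def compressor_gain_db(level_db, threshold_db=50.0, ratio=3.0):
    over = max(0.0, level_db - threshold_db)
    return -over * (1.0 - 1.0 / ratio)          # simple static compression curve

left_db, right_db = 70.0, 58.0                  # a source off to the left: ILD = 12 dB

# Independent compression: each ear gets its own gain, shrinking the ILD.
ind_left = left_db + compressor_gain_db(left_db)
ind_right = right_db + compressor_gain_db(right_db)

# Linked compression: both ears get the gain of the louder ear, preserving the ILD.
linked_gain = compressor_gain_db(max(left_db, right_db))
lnk_left, lnk_right = left_db + linked_gain, right_db + linked_gain

print("original ILD:   ", left_db - right_db)
print("independent ILD:", ind_left - ind_right)
print("linked ILD:     ", lnk_left - lnk_right)
```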

  10. A higher chest compression rate may be necessary for metronome-guided cardiopulmonary resuscitation.

    Science.gov (United States)

    Chung, Tae Nyoung; Kim, Sun Wook; You, Je Sung; Cho, Young Soon; Chung, Sung Phil; Park, Incheol

    2012-01-01

    Metronome guidance is a simple and economical feedback system for guiding cardiopulmonary resuscitation (CPR). However, a recent study showed that metronome guidance reduced the depth of chest compression. The results of previous studies suggest that a higher chest compression rate is associated with a better CPR outcome than a lower chest compression rate, irrespective of metronome use. Based on this finding, we hypothesized that the lower chest compression rate, rather than metronome use itself, promoted the reduction in chest compression depth observed in the recent study. One minute of chest compression-only CPR was performed following a metronome sound played at 1 of 4 different rates: 80, 100, 120, and 140 ticks/min. Average compression depths (ACDs) and duty cycles were compared using repeated measures analysis of variance, and the values in the absence and presence of metronome guidance were compared. Both the ACD and duty cycle increased as the metronome rate increased (P = .017). The values for metronome rates of 80 and 100 ticks/min were significantly lower than those for the procedures without metronome guidance. The ACD and duty cycle for chest compression increase as the metronome rate increases during metronome-guided CPR. A higher rate of chest compression is necessary for metronome-guided CPR to prevent suboptimal quality of chest compression. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. The effect of breast compression on mass conspicuity in digital mammography

    International Nuclear Information System (INIS)

    Saunders, Robert S. Jr; Samei, Ehsan

    2008-01-01

    This study analyzed how the inherent quality of diagnostic information in digital mammography could be affected by breast compression. A digital mammography system was modeled using a Monte Carlo algorithm based on the Penelope program, which has been successfully used to model several medical imaging systems. First, the Monte Carlo program was validated against previous measurements and simulations. Once validated, the Monte Carlo software modeled a digital mammography system by tracking photons through a voxelized software breast phantom, containing anatomical structures and breast masses, and following photons until they were absorbed by a selenium-based flat-panel detector. Simulations were performed for two compression conditions (standard compression and 12.5% reduced compression) and three photon flux conditions (constant flux, constant detector signal, and constant glandular dose). The results showed that reduced compression led to higher scatter fractions, as expected. For the constant photon flux condition, decreased compression also reduced glandular dose. For constant glandular dose, the SdNR for a 4 cm breast was 0.60±0.11 and 0.62±0.11 under standard and reduced compressions, respectively. For the 6 cm case with constant glandular dose, the SdNR was 0.50±0.11 and 0.49±0.10 under standard and reduced compressions, respectively. The results suggest that if a particular imaging system can handle an approximately 10% increase in total tube output and 10% decrease in detector signal, breast compression can be reduced by about 12% in terms of breast thickness with little impact on image quality or dose.

  12. JPEG digital watermarking for copyright protection

    Directory of Open Access Journals (Sweden)

    Vitaliy G. Ivanenko

    2018-05-01

    Full Text Available With the rapid growth of multimedia technology, copyright protection has become a very important issue, especially for images. The advantages of easy photo distribution are undermined by possible theft and unauthorized usage on different websites. Therefore, there is a need to secure information with technical methods, for example digital watermarks. This paper reviews digital watermark embedding methods for image copyright protection, and the advantages and disadvantages of digital watermark usage are presented. Different watermarking algorithms are analyzed. Based on the analysis results, the most effective algorithm is chosen – differential energy watermarking (DEW). It is noted that the method excels at providing image integrity. A digital watermark embedding system should prevent illegal access to the digital watermark and its container. Requirements for the digital watermark are formulated. Possible image attacks are reviewed. Modern modifications of embedding algorithms are studied. The robustness of the differential energy watermark is investigated; robustness is a specific measure whose formula is given further in the article. A modification of the DEW method is proposed, and its advantages over the original algorithm are described. A digital watermark serves as an additional layer of defense, which is in most cases unknown to the violator. The scope of studied image attacks includes compression, filtering, and scaling. In conclusion, DEW watermarking can be used for copyright protection, and a violator can easily be detected if images with embedded information are exchanged.
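
    A rough sketch of the differential-energy idea singled out in the review: a bit is embedded in a pair of 8x8 blocks by discarding high-frequency DCT coefficients on one side, so the sign of the high-frequency energy difference carries the bit. Block pairing, the cut-off index and the use of scipy's DCT are simplifying assumptions, not the original DEW specification.

```python
# Sketch: embed and extract one bit via a high-frequency DCT energy difference.
import numpy as np
from scipy.fft import dctn, idctn

CUTOFF = 4  # coefficients with u + v >= CUTOFF count as "high frequency"

def hf_energy(coeffs):
    u, v = np.indices(coeffs.shape)
    return float(np.sum(coeffs[u + v >= CUTOFF] ** 2))

def embed_bit(block_a, block_b, bit):
    ca = dctn(block_a.astype(float), norm="ortho")
    cb = dctn(block_b.astype(float), norm="ortho")
    u, v = np.indices(ca.shape)
    target = ca if bit == 0 else cb           # zero high-frequency energy on one side
    target[u + v >= CUTOFF] = 0.0
    return idctn(ca, norm="ortho"), idctn(cb, norm="ortho")

def extract_bit(block_a, block_b):
    ea = hf_energy(dctn(block_a.astype(float), norm="ortho"))
    eb = hf_energy(dctn(block_b.astype(float), norm="ortho"))
    return 0 if ea < eb else 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.normal(128, 20, (8, 8)), rng.normal(128, 20, (8, 8))
    wa, wb = embed_bit(a, b, bit=1)
    print("recovered bit:", extract_bit(wa, wb))
```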

  13. Bronchoscopic guidance of endovascular stenting limits airway compression.

    Science.gov (United States)

    Ebrahim, Mohammad; Hagood, James; Moore, John; El-Said, Howaida

    2015-04-01

    Bronchial compression as a result of pulmonary artery and aortic arch stenting may cause significant respiratory distress. We set out to limit airway narrowing by endovascular stenting, by using simultaneous flexible bronchoscopy and graduated balloon stent dilatation, or balloon angioplasty to determine maximum safe stent diameter. Between August 2010 and August 2013, patients with suspected airway compression by adjacent vascular structures, underwent CT or a 3D rotational angiogram to evaluate the relationship between the airway and the blood vessels. If these studies showed close proximity of the stenosed vessel and the airway, simultaneous bronchoscopy and graduated stent re-dilation or graduated balloon angioplasty were performed. Five simultaneous bronchoscopy and interventional catheterization procedures were performed in four patients. Median age/weight was 33 (range 9-49) months and 14 (range 7.6-24) kg, respectively. Three had hypoplastic left heart syndrome, and one had coarctation of the aorta (CoA). All had confirmed or suspected left main stem bronchial compression. In three procedures, serial balloon dilatation of a previously placed stent in the CoA was performed and bronchoscopy was used to determine the safest largest diameter. In the other two procedures, balloon testing with simultaneous bronchoscopy was performed to determine the stent size that would limit compression of the adjacent airway. In all cases, simultaneous bronchoscopy allowed selection of an ideal caliber of the stent that optimized vessel diameter while minimizing compression of the adjacent airway. In cases at risk for airway compromise, flexible bronchoscopy is a useful tool to guide endovascular stenting. Maximum safe stent diameter can be determined without risking catastrophic airway compression. © 2014 Wiley Periodicals, Inc.

  14. A 6-year study of mammographic compression force: Practitioner variability within and between screening sites

    International Nuclear Information System (INIS)

    Mercer, Claire E.; Szczepura, Katy; Kelly, Judith; Millington, Sara R.; Denton, Erika R.E.; Borgen, Rita; Hilton, Beverley; Hogg, Peter

    2015-01-01

    Background: The application of compression force in mammography is more heavily influenced by the practitioner than by the client. This can affect client experience, radiation dose and image quality. This research investigates practitioner compression force variation over a six year screening cycle in three different screening units. Methods: Data were collected from three consecutive screening events in three breast screening sites. Recorded data included: practitioner code, applied compression force (N), breast thickness (mm) and BI-RADS® density category. Exclusion criteria included: previous breast surgery, previous/ongoing assessment and breast implants. 975 clients (2925 client visits, 11,700 mammogram images) met the inclusion criteria across the three sites. Data analysis assessed practitioner and site variation of compression force and breast thickness. Results: Practitioners across the three breast screening sites behave differently in the application of compression force. Two of the three sites demonstrate variability within themselves, though they demonstrated no significant difference between themselves in mean, first and third quartile compression force and breast thickness values for the CC (p > 0.5) and MLO (p > 0.1) projections. However, in the third site, where a mandate dictates that a minimum compression force is applied, greater consistency was demonstrated between practitioners and clients; a significant difference in mean, first and third quartile compression force and breast thickness values (p < 0.001) was demonstrated between this site and the other two sites. Conclusion: Variability within these two sites and between the three sites could result in inconsistent applied compression force. Stabilisation of these variations may have a positive impact on image quality, radiation dose reduction, re-attendance levels and potentially cancer detection. The large variation in compression forces could negatively impact on client experience between the units and within a unit. Further research is required to

  15. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest.

    Science.gov (United States)

    Monsieurs, Koenraad G; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F; Calle, Paul A

    2012-11-01

    BACKGROUND AND GOAL OF STUDY: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with decreased depth. In patients undergoing prehospital cardiopulmonary resuscitation by health care professionals, chest compression rate and depth were recorded using an accelerometer (E-series monitor-defibrillator, Zoll, U.S.A.). Compression depth was compared for rates below 80/min, between 80 and 120/min, and above 120/min. A difference in compression depth ≥0.5 cm was considered clinically significant. Mixed models with repeated measurements of chest compression depth and rate (level 1) nested within patients (level 2) were used with compression rate as a continuous and as a categorical predictor of depth. Results are reported as means and standard error (SE). One hundred and thirty-three consecutive patients were analysed (213,409 compressions). Of all compressions, 2% were delivered at rates below 80/min, and 36% fell into the shallowest depth category. In 77 out of 133 (58%) patients a statistically significant lower depth was observed for rates >120/min compared to rates 80-120/min, and in 40 out of 133 (30%) this difference was also clinically significant. The mixed models predicted that the deepest compression (4.5 cm) occurred at a rate of 86/min, with progressively lower compression depths at higher rates; rates >145/min were predicted to result in still shallower compressions. The compression depth for rates 80-120/min was on average 4.5 cm (SE 0.06) compared to 4.1 cm (SE 0.06) for compressions >120/min (mean difference 0.4 cm, P < 0.001). Higher compression rates were thus associated with lower compression depths. Avoiding excessive compression rates may lead to more compressions of sufficient depth. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  16. Milky Way Past Was More Turbulent Than Previously Known

    Science.gov (United States)

    2004-04-01

    Results of 1001 observing nights shed new light on our Galaxy [1] Summary A team of astronomers from Denmark, Switzerland and Sweden [2] has achieved a major breakthrough in our understanding of the Milky Way, the galaxy in which we live. After more than 1,000 nights of observations spread over 15 years, they have determined the spatial motions of more than 14,000 solar-like stars residing in the neighbourhood of the Sun. For the first time, the changing dynamics of the Milky Way since its birth can now be studied in detail and with a stellar sample sufficiently large to allow a sound analysis. The astronomers find that our home galaxy has led a much more turbulent and chaotic life than previously assumed. PR Photo 10a/04: Distribution on the sky of the observed stars. PR Photo 10b/04: Stars in the solar neighbourhood and the Milky Way galaxy (artist's view). PR Video Clip 04/04: The motions of the observed stars during the past 250 million years. Unknown history Home is the place we know best. But not so in the Milky Way - the galaxy in which we live. Our knowledge of our nearest stellar neighbours has long been seriously incomplete and - worse - skewed by prejudice concerning their behaviour. Stars were generally selected for observation because they were thought to be "interesting" in some sense, not because they were typical. This has resulted in a biased view of the evolution of our Galaxy. The Milky Way started out just after the Big Bang as one or more diffuse blobs of gas of almost pure hydrogen and helium. With time, it assembled into the flattened spiral galaxy which we inhabit today. Meanwhile, generation after generation of stars were formed, including our Sun some 4,700 million years ago. But how did all this really happen? Was it a rapid process? Was it violent or calm? When were all the heavier elements formed? How did the Milky Way change its composition and shape with time? Answers to these and many other questions are 'hot' topics for the

  17. An efficient and extensible approach for compressing phylogenetic trees

    KAUST Repository

    Matthews, Suzanne J

    2011-01-01

    Background: Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm by handling trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. Results: On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92% (weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases. Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease with which set operations can be performed on TRZ files, at speeds quicker than those performed on Newick or 7zip compressed Newick files, and without loss of space savings. Conclusions: TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allows it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community. © 2011 Matthews and Williams; licensee BioMed Central Ltd.
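
    TreeZip's central idea is that a collection of trees can be represented by its unique bipartitions together with the trees containing each of them, so consensus and set operations can act on that representation directly. The sketch below illustrates only that idea on pre-computed bipartitions; it does not parse Newick strings and is not the TRZ file format.

```python
# Trees are represented as frozensets of bipartitions; a bipartition is a
# frozenset of the taxa on one side of an edge. Newick parsing is omitted.
from collections import Counter

def union(a: set, b: set) -> set:          # distinct trees in either collection
    return a | b

def intersection(a: set, b: set) -> set:   # trees common to both collections
    return a & b

def difference(a: set, b: set) -> set:     # trees in a but not in b
    return a - b

def majority_bipartitions(trees: list) -> set:
    """Bipartitions present in more than half of the trees (majority consensus edges)."""
    counts = Counter(bp for t in trees for bp in t)
    return {bp for bp, n in counts.items() if n > len(trees) / 2.0}

t1 = frozenset({frozenset({"A", "B"}), frozenset({"A", "B", "C"})})
t2 = frozenset({frozenset({"A", "B"}), frozenset({"A", "B", "D"})})
print(len(union({t1}, {t2})), majority_bipartitions([t1, t2]))
```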

  18. ℓ2 Optimized predictive image coding with ℓ∞ bound.

    Science.gov (United States)

    Chuah, Sceuchin; Dumitrescu, Sorina; Wu, Xiaolin

    2013-12-01

    In many scientific, medical, and defense applications of image/video compression, an ℓ∞ error bound is required. However, pure ℓ∞-optimized image coding, colloquially known as near-lossless image coding, is prone to structured errors such as contours and speckles if the bit rate is not sufficiently high; moreover, most of the previous ℓ∞-based image coding methods suffer from poor rate control. In contrast, the ℓ2 error metric aims for average fidelity and hence preserves the subtlety of smooth waveforms better than the ℓ∞ error metric, and it offers fine granularity in rate control, but pure ℓ2-based image coding methods (e.g., JPEG 2000) cannot bound individual errors as the ℓ∞-based methods can. This paper presents a new compression approach to retain the benefits and circumvent the pitfalls of the two error metrics. A common approach of near-lossless image coding is to embed into a DPCM prediction loop a uniform scalar quantizer of residual errors. The said uniform scalar quantizer is replaced, in the proposed new approach, by a set of context-based ℓ2-optimized quantizers. The optimization criterion is to minimize a weighted sum of the ℓ2 distortion and the entropy while maintaining a strict ℓ∞ error bound. The resulting method obtains good rate-distortion performance in both ℓ2 and ℓ∞ metrics and also increases the rate granularity. Compared with JPEG 2000, the new method not only guarantees lower ℓ∞ error for all bit rates, but also achieves higher PSNR for relatively high bit rates.
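
    The near-lossless building block the paper starts from is a DPCM prediction loop with a uniform scalar quantizer of prediction residuals, which caps the per-sample reconstruction error at a chosen delta. A minimal sketch of that baseline follows; the paper's context-based ℓ2-optimized quantizers are not reproduced here.

```python
# DPCM along one scan line with a uniform residual quantizer of step 2*delta+1,
# guaranteeing |x - x_reconstructed| <= delta for every sample.
import numpy as np

def dpcm_near_lossless(line, delta: int):
    step = 2 * delta + 1
    recon = np.empty(len(line), dtype=np.int64)
    prev = 0                                     # fixed initial predictor
    for i, x in enumerate(np.asarray(line, dtype=np.int64)):
        residual = int(x - prev)
        q = (abs(residual) + delta) // step      # quantizer index magnitude
        q = q if residual >= 0 else -q
        recon[i] = prev + q * step               # dequantized sample
        prev = recon[i]                          # closed-loop prediction
    return recon

line = np.array([100, 103, 107, 120, 118, 90])
recon = dpcm_near_lossless(line, delta=2)
assert np.max(np.abs(line - recon)) <= 2         # the l-infinity bound holds
```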

  19. Adiabatic compression and radiative compression of magnetic fields

    International Nuclear Information System (INIS)

    Woods, C.H.

    1980-01-01

    Flux is conserved during mechanical compression of magnetic fields for both nonrelativistic and relativistic compressors. However, the relativistic compressor generates radiation, which can carry up to twice the energy content of the magnetic field compressed adiabatically. The radiation may be either confined or allowed to escape

  20. Improving the Bearing Strength of Sandy Loam Soil Compressed Earth Block Bricks Using Sugarcane Bagasse Ash

    Directory of Open Access Journals (Sweden)

    Ramadhan W. Salim

    2014-06-01

    The need for affordable and sustainable alternative construction materials to cement in developing countries cannot be overemphasized. Compressed Earth Bricks have gained acceptability as an affordable and sustainable construction material. There is, however, a need to boost their bearing capacity. Previous research shows that Sugarcane Bagasse Ash as a soil stabilizer has yielded positive results. However, there is limited research on its effect on the mechanical properties of Compressed Earth Bricks. This current research investigated the effect of adding 3%, 5%, 8% and 10% Sugarcane Bagasse Ash on the compressive strength of compressed earth bricks. The result showed an improvement in compressive strength of 65% with the addition of 10% Sugarcane Bagasse Ash.

  1. The impact of chest compression rates on quality of chest compressions : a manikin study

    OpenAIRE

    Field, Richard A.; Soar, Jasmeet; Davies, Robin P.; Akhtar, Naheed; Perkins, Gavin D.

    2012-01-01

    Purpose: Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Methods: Twenty healthcare professionals performed two minutes of co...

  2. The Helioviewer Project: Browsing, Visualizing and Accessing Petabytes of Solar Data

    Science.gov (United States)

    Mueller, Daniel; Hughitt, V. K.; Langenberg, M.; Ireland, J.; Pagel, S.; Schmidt, L.; Garcia Ortiz, J. P.; Dimitoglou, G.; Fleck, B.

    2010-05-01

    After its successful launch, NASA's Solar Dynamics Observatory (SDO) will soon return more than 1 Terabyte worth of images per day. This unprecedented torrent of data will pose an entirely new set of challenges with respect to data access, data browsing and searching for interesting data while avoiding the proverbial search for "a needle in a haystack". In order to fully exploit SDO's wealth of data and connect it to data from other solar missions like SOHO, scientists need to be able to interactively browse and visualize many different data products spanning a large range of physical length and time scales. So far, all tools available to the scientific community either require downloading all potentially relevant data sets beforehand in their entirety or provide only movies with a fixed resolution and cadence. The Helioviewer project offers a solution to these challenges by providing a suite of tools that are based on the new JPEG 2000 compression standard and enable scientists and the general public alike to intuitively browse, visualize and access petabytes of image data remotely: - JHelioviewer, a cross-platform application that offers movie streaming and real-time processing using the JPEG 2000 Interactive Protocol (JPIP) and OpenGL, as well as feature/event overlays. - helioviewer.org, a web-based image and feature/event browser. - Server-side services to stream movies of arbitrary spatial and temporal resolution in a region-of-interest and quality-progressive form, a JPEG 2000 image database and a feature/event server. All the services can be accessed through well-documented interfaces (APIs). - Code to convert images into JPEG 2000 format. This presentation will give an overview of the Helioviewer Project, illustrate new features and highlight the advantages of JPEG 2000 as a data format for solar physics that has the potential to revolutionize the way high-resolution image data are disseminated and analyzed.

  3. Compressed-air and backup nitrogen systems in nuclear power plants

    International Nuclear Information System (INIS)

    Hagen, E.W.

    1982-07-01

    This report reviews and evaluates the performance of the compressed-air and pressurized-nitrogen gas systems in commercial nuclear power units. The information was collected from readily available operating experiences, licensee event reports, system designs in safety analysis reports, and regulatory documents. The results are collated and analyzed for significance and impact on power plant safety performance. Under certain circumstances, the fail-safe philosophy for a piece of equipment or subsystem of the compressed-air systems initiated a series of actions culminating in a reactor transient or unit scram. However, based on this study of prevailing operating experiences, reclassifying the compressed-gas systems to a higher safety level will neither prevent nor mitigate the recurrence of such events, nor will it alleviate nuclear power plant problems caused by inadequate maintenance, operating procedures, and/or practices. Conversely, because most of the problems were derived from the sources listed previously, upgrading both maintenance and operating procedures will result not only in substantial improvement in the performance and availability of the compressed-air (and backup nitrogen) systems but also in improved overall plant performance

  4. Compression stockings

    Science.gov (United States)

    Call your health insurance or prescription plan: Find out if they pay for compression stockings. Ask if your durable medical equipment benefit pays for compression stockings. Get a prescription from your doctor. Find a medical equipment store where they can ...

  5. Brief compression-only cardiopulmonary resuscitation training video and simulation with homemade mannequin improves CPR skills.

    Science.gov (United States)

    Wanner, Gregory K; Osborne, Arayel; Greene, Charlotte H

    2016-11-29

    Cardiopulmonary resuscitation (CPR) training has traditionally involved classroom-based courses or, more recently, home-based video self-instruction. These methods typically require preparation and a purchase fee, which can dissuade many potential bystanders from receiving training. This study aimed to evaluate the effectiveness of teaching compression-only CPR to previously untrained individuals using our 6-min online CPR training video and skills practice on a homemade mannequin, reproduced by viewers with commonly available items (towel, toilet paper roll, t-shirt). Participants viewed the training video and practiced with the homemade mannequin. This was a parallel-design study with pre- and post-training evaluations of CPR skills (compression rate, depth, hand position, release), and hands-off time (time without compressions). CPR skills were evaluated using a sensor-equipped mannequin and two blinded CPR experts observed testing of participants. Twenty-four participants were included: 12 never-trained and 12 currently certified in CPR. Comparing pre- and post-training, the never-trained group had improvements in average compression rate per minute (64.3 to 103.9, p = 0.006), compressions with correct hand position in 1 min (8.3 to 54.3, p = 0.002), and correct compression release in 1 min (21.2 to 76.3). The CPR-certified group maintained an adequate compression rate (>100/min), but showed an improved number of compressions with correct release (53.5 to 94.7). Achieving sufficient compression depth (>50 mm) remained problematic in both groups. Comparisons made between groups indicated significant improvements in compression depth, hand position, and hands-off time in never-trained compared to CPR-certified participants. Inter-rater agreement values were also calculated between the CPR experts and sensor-equipped mannequin. A brief internet-based video coupled with skill practice on a homemade mannequin improved compression-only CPR skills, especially in the previously untrained participants. This training method allows for widespread compression-only CPR

  6. Using gasoline in an advanced compression ignition engine

    Energy Technology Data Exchange (ETDEWEB)

    Cracknell, R.F.; Ariztegui, J.; Dubois, T.; Hamje, H.D.C.; Pellegrini, L.; Rickeard, D.J.; Rose, K.D. [CONCAWE, Brussels (Belgium); Heuser, B. [RWTH Aachen Univ. (Germany). Inst. for Combustion Engines; Schnorbus, T.; Kolbeck, A.F. [FEV GmbH, Aachen (Germany)

    2013-06-01

    Future vehicles will be required to improve their efficiency, reduce both regulated and CO₂ emissions, and maintain acceptable driveability, safety, and noise. To achieve this overall performance, they will be configured with more advanced hardware, sensors, and control technologies that will also enable their operation on a broader range of fuel properties. Fuel flexibility has already been demonstrated in previous studies on a compression ignition bench engine and a demonstration vehicle equipped with an advanced engine management system, closed-loop combustion control, and air-path control strategies. An unresolved question is whether engines of this sort can also operate on market gasoline while achieving diesel-like efficiency and acceptable emissions and noise levels. In this study, a compression ignition bench engine having a higher compression ratio, optimised valve timing, advanced engine management system, and flexible fuel injection could be operated on a European gasoline over full to medium part loads. The combustion was sensitive to EGR rates, however, and optimising all emissions and combustion noise was a considerable challenge at lower loads. (orig.)

  7. New procedures to evaluate visually lossless compression for display systems

    Science.gov (United States)

    Stolitzka, Dale F.; Schelkens, Peter; Bruylants, Tim

    2017-09-01

    Visually lossless image coding in isochronous display streaming or plesiochronous networks reduces link complexity and power consumption and increases available link bandwidth. A new set of codecs developed within the last four years promises a new level of coding quality, but requires new techniques that are sufficiently sensitive to the small artifacts or color variations induced by this new breed of codecs. This paper begins with a summary of the new ISO/IEC 29170-2, a procedure for evaluation of lossless coding, and reports the new work by JPEG to extend the procedure in two important ways: for HDR content and for evaluating the differences between still images, panning images and image sequences. ISO/IEC 29170-2 relies on processing test images through a well-defined process chain for subjective, forced-choice psychophysical experiments. The procedure sets an acceptable quality level equal to one just noticeable difference. Traditional image and video coding evaluation techniques, such as those used for television evaluation, have not proven sufficiently sensitive to the small artifacts that may be induced by this breed of codecs. In 2015, JPEG received new requirements to expand evaluation of visually lossless coding for high dynamic range images, slowly moving images, i.e., panning, and image sequences. These requirements are the basis for new amendments of the ISO/IEC 29170-2 procedures described in this paper. These amendments promise to be highly useful for the new content in television and cinema mezzanine networks. The amendments passed the final ballot in April 2017 and are on track to be published in 2018.
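
    The forced-choice protocol sketched above boils down to asking whether observers identify the coded image more often than chance. A small, self-contained illustration of that decision follows; the chance level, alpha and trial counts are illustrative, not values mandated by ISO/IEC 29170-2.

```python
# One-sided binomial test against chance for a two-alternative forced-choice
# experiment; thresholds here are illustrative only.
from math import comb

def p_value_above_chance(correct: int, trials: int, chance: float = 0.5) -> float:
    """P(X >= correct) for X ~ Binomial(trials, chance)."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

def indistinguishable(correct: int, trials: int, alpha: float = 0.05) -> bool:
    # If we cannot reject "observers respond at chance", the artifact is
    # treated as below one just-noticeable difference for this test set.
    return p_value_above_chance(correct, trials) >= alpha

print(indistinguishable(18, 30))   # True: 18/30 correct is consistent with guessing
print(indistinguishable(25, 30))   # False: observers clearly see a difference
```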

  8. Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Duarte, Marco F.; Jensen, Søren Holdt

    2015-01-01

    We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non...... to attain good estimation precision and keep the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super...... interpolation increases the estimation precision....

  9. Optimal image resolution for digital storage of radiotherapy-planning images

    International Nuclear Information System (INIS)

    Baba, Yuji; Furusawa, Mitsuhiro; Murakami, Ryuji; Baba, Takashi; Yokoyama, Toshimi; Nishimura, Ryuichi; Takahashi, Mutsumasa

    1998-01-01

    Purpose: To evaluate the quality of digitized radiation-planning images at different resolutions and to determine the optimal resolution for digital storage. Methods and Materials: Twenty-five planning films were scanned and digitized using a film scanner at a resolution of 72 dots per inch (dpi) with 8-bit depth. The resolution of the scanned images was reduced to 48, 36, 24, and 18 dpi using computer software. Image qualities of these five images (72, 48, 36, 24, and 18 dpi) were evaluated and given scores (4 = excellent; 3 = good; 2 = fair; and 1 = poor) by three radiation oncologists. An image data compression algorithm by the Joint Photographic Experts Group (JPEG) (not reversible and some information will be lost) was also evaluated. Results: The scores of digitized images with 72, 48, 36, 24, and 18 dpi resolution were 3.8 ± 0.3, 3.5 ± 0.3, 3.3 ± 0.5, 2.7 ± 0.5, and 1.6 ± 0.3, respectively. The quality of 36-dpi images was definitely worse compared to 72-dpi images, but was good enough as planning films. Digitized planning images with 72- and 36-dpi resolution require about 800 and 200 KBytes, respectively. The JPEG compression algorithm produces little degradation in 36-dpi images at compression ratios of 5:1. Conclusion: The quality of digitized images with 36-dpi resolution was good enough as radiation-planning images and required 200 KBytes/image
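
    The processing chain above (digitize at 72 dpi, reduce resolution, optionally apply JPEG at around 5:1) can be sketched with Pillow. The file names and the JPEG quality setting are placeholders; quality maps to a compression ratio only indirectly and must be tuned per image.

```python
# Sketch: resample a 72 dpi scan to 36 dpi (half the linear resolution) and
# save a JPEG version; paths and the quality value are placeholders.
import os
from PIL import Image

def downsample_and_compress(src_path: str, dpi_in: int = 72, dpi_out: int = 36,
                            jpeg_quality: int = 85) -> None:
    img = Image.open(src_path)
    scale = dpi_out / dpi_in
    small = img.resize((int(img.width * scale), int(img.height * scale)))
    small.save("plan_36dpi.png")                      # lossless reference
    small.convert("L").save("plan_36dpi.jpg", quality=jpeg_quality)
    ratio = os.path.getsize("plan_36dpi.png") / os.path.getsize("plan_36dpi.jpg")
    print(f"approximate compression ratio vs. PNG: {ratio:.1f}:1")

# downsample_and_compress("planning_film_scan.tif")
```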

  10. Numerically stable fluid–structure interactions between compressible flow and solid structures

    KAUST Repository

    Grétarsson, Jón Tómas; Kwatra, Nipun; Fedkiw, Ronald

    2011-01-01

    ... which solves compressible fluid in a semi-implicit manner, solving for the advection part explicitly and then correcting the intermediate state to time t^(n+1) using an implicit pressure, obtained by solving a modified Poisson system. Similar to previous

  11. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    Science.gov (United States)

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Moving Picture Experts Group 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared with the interpretation of the original series. A Universal Trauma Window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95 % confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1 with combined sensitivities of 90 % (95 % confidence interval, 79-95), 94 % (87-97), and 100 % (93-100), respectively. Combined specificities were 100 % (85-100), 100 % (85-100), and 96 % (78-99), respectively. The introduction of CT in combat hospitals with increasing detectors and image data in recent military operations has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.
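
    The conversion step can be approximated with pydicom and OpenCV: apply the "Universal Trauma Window" (level -200 HU, width 1,500 HU) to each slice and write the stack as an MPEG-4 stream. Paths, frame rate and codec tag are assumptions; the study's compression-ratio control is not reproduced.

```python
# Sketch: window a CT series to level -200 HU / width 1500 HU and write the
# slices as an MPEG-4 movie. Paths, fps and codec tag are placeholders.
import glob
import numpy as np
import pydicom
import cv2

def ct_series_to_mp4(dicom_dir: str, out_path: str = "series.mp4",
                     level: float = -200.0, width: float = 1500.0, fps: int = 10):
    datasets = [pydicom.dcmread(f) for f in sorted(glob.glob(f"{dicom_dir}/*.dcm"))]
    datasets.sort(key=lambda d: float(d.ImagePositionPatient[2]))   # slice order
    lo, hi = level - width / 2.0, level + width / 2.0
    writer = None
    for ds in datasets:
        hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
        frame8 = (np.clip((hu - lo) / (hi - lo), 0.0, 1.0) * 255).astype(np.uint8)
        if writer is None:
            h, w = frame8.shape
            writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                                     fps, (w, h))
        writer.write(cv2.cvtColor(frame8, cv2.COLOR_GRAY2BGR))
    if writer is not None:
        writer.release()
```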

  12. Laser driven single shock compression of fluid deuterium from 45 to 220 GPa

    Energy Technology Data Exchange (ETDEWEB)

    Hicks, D; Boehly, T; Celliers, P; Eggert, J; Moon, S; Meyerhofer, D; Collins, G

    2008-03-23

    The compression η of liquid deuterium between 45 and 220 GPa under laser-driven shock loading has been measured using impedance matching to an aluminum (Al) standard. An Al impedance match model derived from a best fit to absolute Hugoniot data has been used to quantify and minimize the systematic errors caused by uncertainties in the high-pressure Al equation of state. In deuterium below 100 GPa results show that η ≈ 4.2, in agreement with previous impedance match data from magnetically-driven flyer and convergent-explosive shock wave experiments; between 100 and 220 GPa η reaches a maximum of ≈5.0, less than the 6-fold compression observed on the earliest laser-shock experiments but greater than expected from simple extrapolations of lower pressure data. Previous laser-driven double-shock results are found to be in good agreement with these single-shock measurements over the entire range under study. Both sets of laser-shock data indicate that deuterium undergoes an abrupt increase in compression at around 110 GPa.

  13. Magnetic field compression using pinch-plasma

    International Nuclear Information System (INIS)

    Koyama, K.; Tanimoto, M.; Matsumoto, Y.; Veno, I.

    1987-01-01

    In a previous report, the method for ultra-high magnetic field compression by using the pinch plasma was discussed. It is summarized as follows. The experiment is performed with the Mather-type plasma focus device (τ1/4 = 2 μs, I = 880 kA at V = 20 kV). An initial DC magnetic field is fed by an electromagnet embedded in the inner electrode. The axial component of the magnetic field diverges from the maximum field of 1 kG on the surface of the inner electrode. The density profile deduced from a Mach-Zehnder interferogram with a 2-ns N₂ laser shows a density dip lasting for 30 ns along the axes. Using the measured density of 8 × 10¹⁸ cm⁻³, the temperature of 1.5 keV and the pressure balance relation, the magnitude of the trapped magnetic field is estimated to be 1.0 MG. The magnitude of the compressed magnetic field is also measured by Faraday rotation in a single-mode quartz fiber and a magnetic pickup coil. A protective polyethylene tube (3-mm o.d.) is used along the central axis through the inner electrode and the discharge chamber. The peak value of the compressed field ranges from 150 to 190 kG. No signal of the magnetic field appears up to the instant of the maximum pinch

  14. Permeability and compression characteristics of municipal solid waste samples

    Science.gov (United States)

    Durmusoglu, Ertan; Sanchez, Itza M.; Corapcioglu, M. Yavuz

    2006-08-01

    Four series of laboratory tests were conducted to evaluate the permeability and compression characteristics of municipal solid waste (MSW) samples. While two series of tests were conducted using a conventional small-scale consolidometer, the two others were conducted in a large-scale consolidometer specially constructed for this study. In each consolidometer, the MSW samples were tested at two different moisture contents, i.e., original moisture content and field capacity. A scale effect between the two consolidometers with different sizes was investigated. The tests were carried out on samples reconsolidated to pressures of 123, 246, and 369 kPa. Time-settlement data gathered from each load increment were employed to plot strain versus log-time graphs. The data acquired from the compression tests were used to back-calculate primary and secondary compression indices. The consolidometers were later adapted for permeability experiments. The values of the indices and the coefficient of compressibility for the MSW samples tested were within a relatively narrow range despite the size of the consolidometer and the different moisture contents of the specimens tested. The values of the coefficient of permeability were within a band of two orders of magnitude (10⁻⁶-10⁻⁴ m/s). The data presented in this paper agreed very well with the data reported by previous researchers. It was concluded that the scale effect in the compression behavior was significant. However, there was usually no linear relationship between the results obtained in the tests.

  15. Effectiveness of feedback with a smartwatch for high-quality chest compressions during adult cardiac arrest: A randomized controlled simulation study.

    Science.gov (United States)

    Ahn, Chiwon; Lee, Juncheol; Oh, Jaehoon; Song, Yeongtak; Chee, Youngjoon; Lim, Tae Ho; Kang, Hyunggoo; Shin, Hyungoo

    2017-01-01

    Previous studies have demonstrated the potential for using smartwatches with a built-in accelerometer as feedback devices for high-quality chest compression during cardiopulmonary resuscitation. However, to the best of our knowledge, no previous study has reported the effects of this feedback on chest compressions in action. A randomized, parallel controlled study of 40 senior medical students was conducted to examine the effect of chest compression feedback via a smartwatch during cardiopulmonary resuscitation of manikins. A feedback application was developed for the smartwatch, in which visual feedback was provided for chest compression depth and rate. Vibrations from smartwatch were used to indicate the chest compression rate. The participants were randomly allocated to the intervention and control groups, and they performed chest compressions on manikins for 2 min continuously with or without feedback, respectively. The proportion of accurate chest compression depth (≥5 cm and ≤6 cm) was assessed as the primary outcome, and the chest compression depth, chest compression rate, and the proportion of complete chest decompression (≤1 cm of residual leaning) were recorded as secondary outcomes. The proportion of accurate chest compression depth in the intervention group was significantly higher than that in the control group (64.6±7.8% versus 43.1±28.3%; p = 0.02). The mean compression depth and rate and the proportion of complete chest decompressions did not differ significantly between the two groups (all p>0.05). Cardiopulmonary resuscitation-related feedback via a smartwatch could provide assistance with respect to the ideal range of chest compression depth, and this can easily be applied to patients with out-of-hospital arrest by rescuers who wear smartwatches.

  16. Study on Compression Induced Contrast in X-ray Mammograms Using Breast Mimicking Phantoms

    Directory of Open Access Journals (Sweden)

    A. B. M. Aowlad Hossain

    2015-09-01

    X-ray mammography is commonly used to screen for cancers or tumors in the breast using low-dose x-rays. But mammograms suffer from a low contrast problem. The breast is compressed in mammography to reduce x-ray scattering effects. As tumors are stiffer than normal tissues, they undergo smaller deformation under compression. Therefore, image intensity at the tumor region may change less than in the background tissues. In this study, we try to extract compression-induced contrast from multiple mammographic images of tumorous breast phantoms taken at different compressions. This work extends our previous simulation study with experiments and further analysis. We have used FEM models for a synthetic phantom and constructed a phantom using agar and n-propanol for simulation and experiment. The x-ray images of the deformed phantoms have been obtained under three compression steps and a non-rigid registration technique has been applied to register these images. It is noticeably observed that the image intensity changes at the tumor are smaller than those in the surrounding tissue, which induces a detectable contrast. Addition of this compression-induced contrast to the simulated and experimental images has improved their original contrast by a factor of about 1.4

  17. THE EFFECT OF A PELVIC COMPRESSION BELT ON FUNCTIONAL HAMSTRING MUSCLE ACTIVITY IN SPORTSMEN WITH AND WITHOUT PREVIOUS HAMSTRING INJURY.

    Science.gov (United States)

    Arumugam, Ashokan; Milosavljevic, Stephan; Woodley, Stephanie; Sole, Gisela

    2015-06-01

    There is evidence that applying a pelvic compression belt (PCB) can decrease hamstring and lumbar muscle electromyographic activity and increase gluteus maximus activity in healthy women during walking. Increased isokinetic eccentric hamstring strength in the terminal range (25°-5°) of knee extension has been reported with the use of such a belt in sportsmen with and without hamstring injuries. However, it is unknown whether wearing a pelvic belt alters activity of the hamstrings in sportsmen during walking. To examine the effects of wearing a PCB on electromyographic activity of the hamstring and lumbopelvic muscles during walking in sportsmen with and without hamstring injuries. Randomised crossover, cross-sectional study. Thirty uninjured sportsmen (23.53 ± 3.68 years) and 20 sportsmen with hamstring injuries (22.00 ± 1.45 years) sustained within the previous 12 months participated in this study. Electromyographic amplitudes of the hamstrings, gluteus maximus, gluteus medius and lumbar multifidus were monitored during defined phases of walking and normalised to maximum voluntary isometric contraction. Within-group comparisons [PCB vs. no PCB] for the normalised electromyographic amplitudes were performed for each muscle group using paired t tests. Electromyographic change scores [belt - no belt] were calculated and compared between the two groups with independent t tests. No significant change was evident in hamstring activity for either group while walking with the PCB (p > 0.050). However, with the PCB, gluteus medius activity increased in both groups (p ≤ 0.028), while gluteus maximus activity increased (p = 0.025) and multifidus activity decreased. The PCB did not alter activity of the hamstrings during walking, resulting in no significant changes within or between the two groups. Future studies investigating effects of the PCB on hamstring activity in participants with acute injury and during a more demanding functional activity such as running are warranted

  18. Soliton compression to few-cycle pulses with a high quality factor by engineering cascaded quadratic nonlinearities

    DEFF Research Database (Denmark)

    Zeng, Xianglong; Guo, Hairun; Zhou, Binbin

    2012-01-01

    We propose an efficient approach to improve few-cycle soliton compression with cascaded quadratic nonlinearities by using an engineered multi-section structure of the nonlinear crystal. By exploiting engineering of the cascaded quadratic nonlinearities, in each section soliton compression...... with a low effective order is realized, and high-quality few-cycle pulses with large compression factors are feasible. Each subsequent section is designed so that the compressed pulse exiting the previous section experiences an overall effective self-defocusing cubic nonlinearity corresponding to a modest...... soliton order, which is kept larger than unity to ensure further compression. This is done by increasing the cascaded quadratic nonlinearity in the new section with an engineered reduced residual phase mismatch. The low soliton orders in each section ensure excellent pulse quality and high efficiency...

  19. Mammographic compression in Asian women.

    Science.gov (United States)

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area. For Asian women, the median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.

  20. Magnetic compression/magnetized target fusion (MAGO/MTF)

    International Nuclear Information System (INIS)

    Kirkpatrick, R.C.; Lindemuth, I.R.

    1997-03-01

    Magnetized Target Fusion (MTF) was reported in two papers at the First Symposium on Current Trends in International Fusion Research. MTF is intermediate between two very different mainline approaches to fusion: Inertial Confinement Fusion (ICF) and magnetic confinement fusion (MCF). The only US MTF experiments in which a target plasma was compressed were the Sandia National Laboratory ''Phi targets''. Despite the very interesting results from that series of experiments, the research was not pursued, and other embodiments of MTF concept such as the Fast Liner were unable to attract the financial support needed for a firm proof of principle. A mapping of the parameter space for MTF showed the significant features of this approach. The All-Russian Scientific Research Institute of Experimental Physics (VNIIEF) has an on-going interest in this approach to thermonuclear fusion, and Los Alamos National Laboratory (LANL) and VNIIEF have done joint target plasma generation experiments relevant to MTF referred to as MAGO (transliteration of the Russian acronym for magnetic compression). The MAGO II experiment appears to have achieved on the order of 200 eV and over 100 KG, so that adiabatic compression with a relatively small convergence could bring the plasma to fusion temperatures. In addition, there are other experiments being pursued for target plasma generation and proof of principle. This paper summarizes the previous reports on MTF and MAGO and presents the progress that has been made over the past three years in creating a target plasma that is suitable for compression to provide a scientific proof of principle experiment for MAGO/MTF

  1. Final Report 02-ERD-033: Rapid Resolidification of Metals using Dynamic Compression

    International Nuclear Information System (INIS)

    Streitz, F H; Nguyen, J H; Orlikowski, D; Minich, R; Moriarty, J A; Holmes, N C

    2005-01-01

    The purpose of this project is to develop a greater understanding of the kinetics involved during a liquid-solid phase transition occurring at high pressure and temperature. Kinetic limitations are known to play a large role in the dynamics of solidification at low temperatures, determining, e.g., whether a material crystallizes upon freezing or becomes an amorphous solid. The role of kinetics is not at all understood in transitions at high temperature when extreme pressures are involved. In order to investigate time scales during a dynamic compression experiment we needed to create an ability to alter the length of time spent by the sample in the transition region. Traditionally, the extreme high-pressure phase diagram is studied through a few static and dynamic techniques: static compression involving diamond anvil cells (DAC) [1], shock compression [2, 3], and quasi-isentropic compression [4, 5, 6, 7, 8, 9, 10]. Static DAC experiments explore equilibrium material properties along an isotherm or an isobar [1]. Dynamic material properties can be explored with shock compression [2, 3], probing single states on the Hugoniot, or with quasi-isentropic compression [4, 5, 6, 7, 8, 9, 10]. In the case of shocks, pressure variation typically occurs on a sub-nanosecond time scale or faster [11]. Previous quasi-isentropic techniques have yielded pressure ramps on the 10-100 nanosecond time-scale for samples that are several hundred microns thick [4, 5, 6, 7]. In order to understand kinetic effects at high temperatures and high pressures, we need to span a large dynamic range (strain rates, relaxation times, etc.) as well as control the thermodynamic path that the material experiences. Compression rates, for instance, need to bridge those of static experiments (seconds to hours) and those of the Z-accelerator (10⁶ s⁻¹) [4] or even laser ablation techniques (10⁶ s⁻¹ to 10⁸ s⁻¹) [7]. Here, we present a new technique that both extends the compression time to several

  2. Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine

    Science.gov (United States)

    Moura, A. F.; Wheatley, V.; Jahn, I.

    2017-12-01

    The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. The heat release from combustion provided thermal compression in the combustor, further

  3. Investigation of the Radial Compression of Carbon Nanotubes with a Scanning Probe Microscope

    Science.gov (United States)

    Shen, Weidian; Jiang, Bin; Han, Bao Shan; Xie, Si-Shen

    2001-03-01

    Carbon nanotubes have attracted great interest since they were first synthesized. The tubes have substantial promise in a variety of applications due to their unique properties. Efforts have been made to characterize the mechanical properties of the tubes. However, previous work has concentrated on the tubes’ longitudinal properties, and studies of their radial properties lag behind. We have operated a scanning probe microscope, NanoScope™ IIIa, in the indentation/scratching mode to carry out a nanoindentation test on the top of multiwalled carbon nanotubes. We measured the correlation between the radial stress and the tube compression, and thereby determined the radial compressive elastic modulus at different compressive forces. The measurements also allowed us to estimate the radial compressive strength of the tubes. Support of this work by an Eastern Michigan University Faculty Research Fellowship and by the K. C. Wong Education Foundation, Hong Kong is gratefully acknowledged.

  4. Bit-Grooming: Shave Your Bits with Razor-sharp Precision

    Science.gov (United States)

    Zender, C. S.; Silver, J.

    2017-12-01

    Lossless compression can reduce climate data storage by 30-40%. Further reduction requires lossy compression that also reduces precision. Fortunately, geoscientific models and measurements generate false precision (scientifically meaningless data bits) that can be eliminated without sacrificing scientifically meaningful data. We introduce Bit Grooming, a lossy compression algorithm that removes the bloat due to false precision: those bits and bytes beyond the meaningful precision of the data. Bit Grooming is statistically unbiased, applies to all floating point numbers, and is easy to use. Bit Grooming reduces geoscience data storage requirements by 40-80%. We compared Bit Grooming to competitors Linear Packing, Layer Packing, and GRIB2/JPEG2000. The other compression methods have the edge in terms of compression, but Bit Grooming is the most accurate and certainly the most usable and portable. Bit Grooming provides flexible and well-balanced solutions to the trade-offs among compression, accuracy, and usability required by lossy compression. Geoscientists could reduce their long-term storage costs, and show leadership in the elimination of false precision, by adopting Bit Grooming.
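
    The "shave" half of Bit Grooming can be sketched in a few lines of NumPy: keep only as many mantissa bits as the requested number of significant decimal digits needs and zero the rest. The real algorithm alternates shaving (toward zero) with setting (away from zero) to stay statistically unbiased; only the shave direction is shown, and the one-bit safety margin is an arbitrary choice.

```python
# Bit-shave sketch for float32 arrays: zero mantissa bits beyond those needed
# for 'nsd' significant decimal digits (~3.32 bits per digit, plus one spare).
import numpy as np

def bit_shave(x: np.ndarray, nsd: int = 3) -> np.ndarray:
    assert x.dtype == np.float32
    mantissa_bits = 23
    keep = min(mantissa_bits, int(np.ceil(nsd * np.log2(10))) + 1)
    tail = (1 << (mantissa_bits - keep)) - 1          # low-order bits to discard
    mask = np.uint32(0xFFFFFFFF ^ tail)               # sign/exponent untouched
    return (x.view(np.uint32) & mask).view(np.float32)

data = np.array([273.15012, 0.00123456, -9.81], dtype=np.float32)
groomed = bit_shave(data, nsd=3)
print(groomed)
print(np.max(np.abs((groomed - data) / data)))        # bounded near 2**-11 for nsd=3
```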

  5. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  6. Rupture of sigmoid colon caused by compressed air.

    Science.gov (United States)

    Yin, Wan-Bin; Hu, Ji-Lin; Gao, Yuan; Zhang, Xian-Xiang; Zhang, Mao-Shen; Liu, Guang-Wei; Zheng, Xue-Feng; Lu, Yun

    2016-03-14

    Compressed air has been generally used since the beginning of the 20th century for various applications. However, rupture of the colon caused by compressed air is uncommon. We report a case of pneumatic rupture of the sigmoid colon. The patient was admitted to the emergency room complaining of abdominal pain and distention. His colleague had triggered a compressed air nozzle against his anus as a practical joke 2 h previously. On arrival, his pulse rate was 126 beats/min, respiratory rate was 42 breaths/min and blood pressure was 86/54 mmHg. Physical examination revealed peritoneal irritation and the abdomen was markedly distended. Computed tomography of the abdomen showed a large volume of air in the abdominal cavity. Peritoneocentesis was performed to relieve the tension pneumoperitoneum. Emergency laparotomy was done after controlling shock. Laparotomy revealed a 2-cm perforation in the sigmoid colon. The perforation was sutured and a temporary ileostomy was performed, as well as thorough drainage and irrigation of the abdominopelvic cavity. Reversal of the ileostomy was performed successfully after 3 months. Follow-up was uneventful. We also present a brief literature review.

  7. Microbunching and RF Compression

    International Nuclear Information System (INIS)

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-01-01

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  8. Optical pulse compression

    International Nuclear Information System (INIS)

    Glass, A.J.

    1975-01-01

    The interest in using large lasers to achieve a very short and intense pulse for generating fusion plasma has provided a strong impetus to reexamine the possibilities of optical pulse compression at high energy. Pulse compression allows one to generate pulses of long duration (minimizing damage problems) and subsequently compress optical pulses to achieve the short pulse duration required for specific applications. The ideal device for carrying out this program has not been developed. Of the two approaches considered, the Gires--Tournois approach is limited by the fact that the bandwidth and compression are intimately related, so that the group delay dispersion times the square of the bandwidth is about unity for all simple Gires--Tournois interferometers. The Treacy grating pair does not suffer from this limitation, but is inefficient because diffraction generally occurs in several orders and is limited by the problem of optical damage to the grating surfaces themselves. Nonlinear and parametric processes were explored. Some pulse compression was achieved by these techniques; however, they are generally difficult to control and are not very efficient. (U.S.)

  9. Less is More: Bigger Data from Compressive Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, Andrew; Browning, Nigel D.

    2017-07-01

    Compressive sensing approaches are beginning to take hold in (scanning) transmission electron microscopy (S/TEM) [1,2,3]. Compressive sensing is a mathematical theory about acquiring signals in a compressed form (measurements) and the probability of recovering the original signal by solving an inverse problem [4]. The inverse problem is underdetermined (more unknowns than measurements), so it is not obvious that recovery is possible. Compression is achieved by taking inner products of the signal with measurement weight vectors. Both Gaussian random weights and Bernoulli (0,1) random weights form a large class of measurement vectors for which recovery is possible. The measurements can also be designed through an optimization process. The key insight for electron microscopists is that compressive sensing can be used to increase acquisition speed and reduce dose. Building on work initially developed for optical cameras, this new paradigm will allow electron microscopists to solve more problems in the engineering and life sciences. We will be collecting orders of magnitude more data than previously possible. The reason that we will have more data is that we will have increased temporal/spatial/spectral sampling rates, and we will have the ability to interrogate larger classes of samples that were previously too beam-sensitive to survive the experiment. For example, consider an in-situ experiment that takes 1 minute. With traditional sensing, we might collect 5 images per second for a total of 300 images. With compressive sensing, each of those 300 images can be expanded into 10 more images, making the collection rate 50 images per second, and the decompressed data a total of 3000 images [3]. But, what are the implications, in terms of data, for this new methodology? Acquisition of compressed data will require downstream reconstruction to be useful. The reconstructed data will be much larger than traditional data, so we will need space to store the reconstructions during
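
    The measurement model described here (inner products with Gaussian or Bernoulli weight vectors, then an underdetermined inverse problem) can be shown with a toy example. The recovery below uses a basic orthogonal matching pursuit, which stands in for whatever reconstruction an actual S/TEM pipeline would use; the sizes and sparsity are arbitrary.

```python
# Toy compressive sensing: m random inner products of a k-sparse length-n
# signal, recovered with a plain orthogonal matching pursuit (OMP).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)   # sparse signal
Phi = rng.normal(size=(m, n)) / np.sqrt(m)                     # Gaussian weights
y = Phi @ x                                                    # compressed measurements

def omp(Phi, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

print(np.linalg.norm(x - omp(Phi, y, k)) / np.linalg.norm(x))  # ~0: signal recovered
```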

  10. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    Science.gov (United States)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a proposed hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
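
    One way to read the hybrid structure is: a single DWT level splits the image into subbands, and the DCT (with quantization) is then applied to the low-frequency band instead of adding further DWT levels. The round trip below shows that structure with PyWavelets and SciPy; the wavelet, quantization step and untouched detail bands are illustrative choices, not the published method's parameters.

```python
# Minimal hybrid DWT-DCT round trip: one DWT level, DCT plus uniform
# quantization of the LL band, then the inverse path.
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def hybrid_roundtrip(img: np.ndarray, q_step: float = 20.0) -> np.ndarray:
    LL, (LH, HL, HH) = pywt.dwt2(img.astype(float), "haar")
    coeffs = dctn(LL, norm="ortho")                  # DCT on the low-pass band
    LL_rec = idctn(np.round(coeffs / q_step) * q_step, norm="ortho")
    return pywt.idwt2((LL_rec, (LH, HL, HH)), "haar")

img = np.random.default_rng(1).integers(0, 255, size=(128, 128)).astype(float)
rec = hybrid_roundtrip(img)
mse = np.mean((img - rec) ** 2)
print(f"PSNR: {10 * np.log10(255**2 / mse):.1f} dB")
```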

  11. On LSB Spatial Domain Steganography and Channel Capacity

    Science.gov (United States)

    2008-03-21

    reveal the hidden information should not be taken as proof that the image is now clean. The survivability of LSB type spatial domain steganography ...the mindset that JPEG compressing an image is sufficient to destroy the steganography for spatial domain LSB type stego. We agree that JPEGing...modeling of 2 bit LSB steganography shows that theoretically there is non-zero stego payload possible even though the image has been JPEGed. We wish to
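
    For context, spatial-domain LSB embedding simply overwrites the lowest bit of selected pixel values with message bits. A minimal 1-bit sketch follows; real stego tools add keying, permutation and error control, and nothing here models survivability under JPEG compression.

```python
# Minimal 1-bit LSB embedding/extraction on a grayscale uint8 image array.
import numpy as np

def embed_lsb(pixels: np.ndarray, message: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = pixels.flatten()                       # copy of the cover image
    if bits.size > flat.size:
        raise ValueError("message too long for cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits    # overwrite LSBs
    return flat.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n_bytes: int) -> bytes:
    return np.packbits(pixels.flatten()[:n_bytes * 8] & 1).tobytes()

cover = np.random.default_rng(2).integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed_lsb(cover, b"hidden")
assert extract_lsb(stego, 6) == b"hidden"
```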

  12. Practical implementation of a methodology for digital images authentication using forensics techniques

    OpenAIRE

    Francisco Rodríguez-Santos; Guillermo Delgado-Gutierréz; Leonardo Palacios-Luengas; Rubén Vázquez Medina

    2015-01-01

    This work presents a forensics analysis methodology implemented to detect modifications in JPEG digital images by analyzing the image’s metadata, thumbnail, camera traces and compression signatures. Best practices related to digital evidence and forensics analysis are considered to determine if the technical attributes and the qualities of an image are consistent with each other. This methodology is defined according to the recommendations of the Good Practice Guide for Computer-Based Elect...
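
    A small illustration of the metadata-consistency step: read the EXIF tags with Pillow and compare a few fields that should agree with one another. The chosen tags and the decision rule are simplified assumptions for illustration, not the methodology's actual checks.

```python
# Read EXIF metadata from a JPEG and flag two simple inconsistencies: an
# editing-software tag, or creation/modification timestamps that disagree.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path: str) -> dict:
    exif = Image.open(path).getexif()
    tags = dict(exif)
    tags.update(exif.get_ifd(0x8769))    # merge the Exif sub-IFD (DateTimeOriginal, ...)
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in tags.items()}

def looks_edited(meta: dict) -> bool:
    dt, dt_orig = meta.get("DateTime"), meta.get("DateTimeOriginal")
    timestamps_differ = dt is not None and dt_orig is not None and dt != dt_orig
    editing_software = "Software" in meta and "photoshop" in str(meta["Software"]).lower()
    return timestamps_differ or editing_software

# meta = exif_summary("evidence.jpg")
# print(meta.get("Make"), meta.get("Model"), looks_edited(meta))
```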

  13. A novel method for fabrication of biodegradable scaffolds with high compression moduli

    NARCIS (Netherlands)

    DeGroot, JH; Kuijper, HW; Pennings, AJ

    1997-01-01

    It has been previously shown that, when used for meniscal reconstruction, porous copoly(L-lactide/epsilon-caprolactone) implants enhanced healing of meniscal lesions owing to their excellent adhesive properties. However, it appeared that the materials had an insufficient compression modulus to

  14. Block-based wavelet transform coding of mammograms with region-adaptive quantization

    Science.gov (United States)

    Moon, Nam Su; Song, Jun S.; Kwon, Musik; Kim, JongHyo; Lee, ChoongWoong

    1998-06-01

    Combining segmentation with a lossy compression scheme is an efficient way to achieve both a high compression ratio and preservation of diagnostic information. Microcalcification in a mammogram is one of the most significant signs of an early stage of breast cancer. Therefore, in coding, detection and segmentation of microcalcifications enable us to preserve them well by allocating more bits to them than to other regions. Segmentation of microcalcifications is performed both in the spatial domain and in the wavelet transform domain. A peak-error-controllable quantization step, which is designed off-line, is suitable for medical image compression. For region-adaptive quantization, block-based wavelet transform coding is adopted and different peak-error-constrained quantizers are applied to blocks according to the segmentation result. In view of the preservation of microcalcifications, the proposed coding scheme shows better performance than JPEG.
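
    The region-adaptive idea can be sketched as: wavelet-transform each block and pick a finer quantization step for blocks flagged by a (precomputed) microcalcification mask, so those regions keep more bits. The block size, wavelet and step values below are illustrative placeholders, and no peak-error control or entropy coding is included.

```python
# Region-adaptive quantization sketch: per-block wavelet transform with a
# finer uniform step inside blocks that intersect the segmentation mask.
import numpy as np
import pywt

def block_roundtrip(tile: np.ndarray, step: float) -> np.ndarray:
    coeffs = pywt.wavedec2(tile.astype(float), "haar", level=2)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr = np.round(arr / step) * step                       # uniform quantization
    return pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                         "haar")

def region_adaptive(img, mask, block=32, fine=2.0, coarse=16.0):
    out = np.zeros_like(img, dtype=float)
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            tile = img[r:r + block, c:c + block]
            step = fine if mask[r:r + block, c:c + block].any() else coarse
            rec = block_roundtrip(tile, step)
            out[r:r + block, c:c + block] = rec[:tile.shape[0], :tile.shape[1]]
    return out
```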

  15. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app

  16. LZ-Compressed String Dictionaries

    OpenAIRE

    Arz, Julian; Fischer, Johannes

    2013-01-01

    We show how to compress string dictionaries using the Lempel-Ziv (LZ78) data compression algorithm. Our approach is validated experimentally on dictionaries of up to 1.5 GB of uncompressed text. We achieve compression ratios often outperforming the existing alternatives, especially on dictionaries containing many repeated substrings. Our query times remain competitive.
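
    For context, LZ78 grows a phrase dictionary on the fly and emits (phrase index, next character) pairs; the decompressor rebuilds the identical dictionary. A compact sketch:

```python
# Minimal LZ78 compressor/decompressor over Python strings.
def lz78_compress(text: str):
    dictionary, output, phrase = {}, [], ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch                                  # extend current phrase
        else:
            output.append((dictionary.get(phrase, 0), ch))
            dictionary[phrase + ch] = len(dictionary) + 1
            phrase = ""
    if phrase:                                            # flush trailing phrase
        output.append((dictionary[phrase], ""))
    return output

def lz78_decompress(pairs):
    phrases, out = [""], []
    for index, ch in pairs:
        phrases.append(phrases[index] + ch)
        out.append(phrases[-1])
    return "".join(out)

data = "abababcabababc"
assert lz78_decompress(lz78_compress(data)) == data
```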

  17. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    Science.gov (United States)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.
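
    Since the low-rate coder is described only as a simple variation on standard DCT coding, the generic building block can be illustrated as follows: an 8x8 block DCT with uniform quantization, using SciPy, with an illustrative step size. This is a hedged sketch, not the report's exact scheme.

    ```python
    import numpy as np
    from scipy.fftpack import dct, idct  # SciPy assumed available

    def dct2(block):
        """2-D type-II DCT with orthonormal scaling (rows, then columns)."""
        return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

    def idct2(block):
        return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

    def quantize_block(block, step=16.0):
        """Uniformly quantize the DCT coefficients of one 8x8 block."""
        return np.round(dct2(block) / step)

    def dequantize_block(q, step=16.0):
        return idct2(q * step)

    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, (8, 8)).astype(float)
    q = quantize_block(block)
    rec = dequantize_block(q)
    print("nonzero coefficients:", int(np.count_nonzero(q)),
          "max abs error:", float(np.abs(block - rec).max()))
    ```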

  18. Moving image compression and generalization capability of constructive neural networks

    Science.gov (United States)

    Ma, Liying; Khorasani, Khashayar

    2001-03-01

    To date numerous techniques have been proposed to compress digital images to ease their storage and transmission over communication channels. Recently, a number of image compression algorithms using neural networks (NNs) have been developed. Particularly, several constructive feed-forward neural networks (FNNs) have been proposed by researchers for image compression, and promising results have been reported. At the previous SPIE AeroSense conference 2000, we proposed to use a constructive one-hidden-layer feedforward neural network (OHL-FNN) for compressing digital images. In this paper, we first investigate the generalization capability of the proposed OHL-FNN in the presence of additive noise for network training and/or generalization. Extensive experimental results for different scenarios are presented. It is revealed that the constructive OHL-FNN is not as robust to additive noise in the input image as expected. Next, the constructive OHL-FNN is applied to moving images (video sequences). The first, or other specified, frame in a moving image sequence is used to train the network. The remaining moving images that follow are then generalized/compressed by this trained network. Three types of correlation-like criteria measuring the similarity of any two images are introduced. The relationship between the generalization capability of the constructed net and the similarity of images is investigated in some detail. It is shown that the constructive OHL-FNN is promising even for changing images such as those extracted from a football game.
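
    One simple correlation-like similarity measure of the kind mentioned above is plain normalized cross-correlation between two frames; the sketch below is a generic stand-in and is not claimed to be one of the paper's three criteria.

    ```python
    import numpy as np

    def normalized_cross_correlation(frame_a, frame_b):
        """Similarity of two equally sized grayscale frames, in [-1, 1]."""
        a = frame_a.astype(float).ravel()
        b = frame_b.astype(float).ravel()
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom > 0 else 0.0

    # Frames that correlate strongly with the training frame are the ones the
    # trained network is expected to generalize to (and hence compress) well.
    rng = np.random.default_rng(5)
    frame1 = rng.random((64, 64))
    frame2 = frame1 + 0.1 * rng.random((64, 64))   # a slightly changed frame
    print("similarity:", normalized_cross_correlation(frame1, frame2))
    ```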

  19. The Effect of Al on the Compressibility of Silicate Perovskite

    Science.gov (United States)

    Walter, M. J.; Kubo, A.; Yoshino, T.; Koga, K. T.; Ohishi, Y.

    2003-12-01

    Experimental data on the compressibility of aluminous silicate perovskite show widely disparate results. Several studies show that Al causes a dramatic increase in compressibility [1-3], while another study indicates a mild decrease in compressibility [4]. Here we report new results for the effect of Al on the room-temperature compressibility of perovskite using in situ X-ray diffraction in the diamond anvil cell from 30 to 100 GPa. We studied the compressibility of perovskite in the system MgSiO3-Al2O3 in compositions with 0 to 25 mol% Al. Perovskite was synthesized from starting glasses using laser-heating in the DAC, with KBr as a pressure medium. Diffraction patterns were obtained using monochromatic radiation and an imaging plate detector at beamline BL10XU, SPring8, Japan. Addition of Al into the perovskite structure causes systematic increases in orthorhombic distortion and unit cell volume at ambient conditions (V0). Compression of the perovskite unit cell is anisotropic, with the a axis about 25% and 3% more compressible than the b and c axes, respectively. The magnitude of orthorhombic distortion increases with pressure, but aluminous perovskite remains stable to at least 100 GPa. Our results show that Al causes only a mild increase in compressibility, with the bulk modulus (K0) decreasing at a rate of 0.7 GPa per 0.01 X_Al. This increase in compressibility is consistent with recent ab initio calculations if Al mixes into both the 6- and 8-coordinated sites by coupled substitution [5], where 2 Al^(3+) = Mg^(2+) + Si^(4+). Our results together with those of [4] indicate that this substitution mechanism predominates throughout the lower mantle. Previous mineralogic models indicating that the upper and lower mantle are compositionally similar in terms of major elements remain effectively unchanged because solution of 5 mol% Al into perovskite has a minor effect on density. 1. Zhang & Weidner (1999). Science 284, 782-784. 2. Kubo et al. (2000) Proc. Jap. Acad. 76B, 103-107. 3. Daniel et al
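
    For reference, the standard definitions linking the bulk modulus quoted above to compressibility are given below; a decreasing K_0 with Al content therefore corresponds to the reported mild increase in compressibility.

    ```latex
    % Isothermal bulk modulus and compressibility (standard definitions); a smaller
    % K_0 means a (mildly) more compressible perovskite.
    K_T = -V \left( \frac{\partial P}{\partial V} \right)_T ,
    \qquad
    \beta_T = \frac{1}{K_T}.
    ```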

  20. Structure and Properties of Silica Glass Densified in Cold Compression and Hot Compression

    Science.gov (United States)

    Guerette, Michael; Ackerson, Michael R.; Thomas, Jay; Yuan, Fenglin; Bruce Watson, E.; Walker, David; Huang, Liping

    2015-10-01

    Silica glass has been shown in numerous studies to possess significant capacity for permanent densification under pressure at different temperatures to form high density amorphous (HDA) silica. However, it is unknown to what extent the processes leading to irreversible densification of silica glass in cold-compression at room temperature and in hot-compression (e.g., near the glass transition temperature) share a common nature. In this work, a hot-compression technique was used to quench silica glass from high temperature (1100 °C) and high pressure (up to 8 GPa) conditions, which leads to a density increase of ~25% and a Young’s modulus increase of ~71% relative to that of pristine silica glass at ambient conditions. Our experiments and molecular dynamics (MD) simulations provide solid evidence that the intermediate-range order of the hot-compressed HDA silica is distinct from that of the counterpart cold-compressed at room temperature. This explains the much higher thermal and mechanical stability of the former than of the latter upon heating and compression, as revealed in our in-situ Brillouin light scattering (BLS) experiments. Our studies demonstrate the limitation of the resulting density as a structural indicator of polyamorphism, and point out the importance of temperature during compression in order to fundamentally understand HDA silica.

  1. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Full Text Available Abstract Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.

  2. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest

    NARCIS (Netherlands)

    Monsieurs, Koenraad G.; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F.; Calle, Paul A.

    2012-01-01

    Background and goal of study: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with

  3. Compressive sensing in medical imaging.

    Science.gov (United States)

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.
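
    A minimal, self-contained example of the sparsity-exploiting reconstruction idea is the iterative soft-thresholding algorithm (ISTA) applied to undersampled linear measurements; the problem size and regularization weight below are illustrative only and are not tied to any system discussed in the article.

    ```python
    import numpy as np

    def ista(A, y, lam=0.01, iters=2000):
        """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = squared largest singular value
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            z = x - step * (A.T @ (A @ x - y))      # gradient step on the data term
            x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
        return x

    # Toy problem: 60 random measurements of a 5-sparse signal of length 200
    rng = np.random.default_rng(1)
    A = rng.standard_normal((60, 200)) / np.sqrt(60)
    x_true = np.zeros(200)
    x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
    x_hat = ista(A, A @ x_true)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```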

  4. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

    Image compression plays an important role in many applications like medical imaging, televideo conferencing, remote sensing, document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray scale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image size or image stream size is too large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless data compression. The wavelet method used in this project is a lossless compression method. In this method, the exact original mammography image data can be recovered. In this project, mammography images are digitized by using a Vider Sierra Plus digitizer. The digitized images are compressed by using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software is used to perform all of the calculations, and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)

  5. Effects of texture on shear band formation in plane strain tension/compression and bending

    DEFF Research Database (Denmark)

    Kuroda, M.; Tvergaard, Viggo

    2007-01-01

    In this study, effects of typical texture components observed in rolled aluminum alloy sheets on shear band formation in plane strain tension/compression and bending are systematically studied. The material response is described by a generalized Taylor-type polycrystal model, in which each grain ...... shear band formation in bent specimens is compared to that in the tension/compression problem. Finally, the present results are compared to previous related studies, and the efficiency of the present method for materials design in future is discussed....

  6. Effective radiation attenuation calibration for breast density: compression thickness influences and correction

    Directory of Open Access Journals (Sweden)

    Thomas Jerry A

    2010-11-01

    Full Text Available Abstract Background Calibrating mammograms to produce a standardized breast density measurement for breast cancer risk analysis requires an accurate spatial measure of the compressed breast thickness. Thickness inaccuracies due to the nominal system readout value and compression paddle orientation induce unacceptable errors in the calibration. Method A thickness correction was developed and evaluated using a fully specified two-component surrogate breast model. A previously developed calibration approach based on effective radiation attenuation coefficient measurements was used in the analysis. Water and oil were used to construct phantoms to replicate the deformable properties of the breast. Phantoms consisting of measured proportions of water and oil were used to estimate calibration errors without correction, evaluate the thickness correction, and investigate the reproducibility of the various calibration representations under compression thickness variations. Results The average thickness uncertainty due to compression paddle warp was characterized to within 0.5 mm. The relative calibration error was reduced to 7% from 48-68% with the correction. The normalized effective radiation attenuation coefficient (planar representation was reproducible under intra-sample compression thickness variations compared with calibrated volume measures. Conclusion Incorporating this thickness correction into the rigid breast tissue equivalent calibration method should improve the calibration accuracy of mammograms for risk assessments using the reproducible planar calibration measure.

  7. Streaming Compression of Hexahedral Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  8. Crystal structure of actinide metals at high compression

    International Nuclear Information System (INIS)

    Fast, L.; Soederlind, P.

    1995-08-01

    The crystal structures of some light actinide metals are studied theoretically as a function of applied pressure. The first-principles electronic structure theory is formulated in the framework of density functional theory, with the gradient-corrected local density approximation of the exchange-correlation functional. The light actinide metals are shown to be well described as itinerant (metallic) f-electron metals and, in agreement with previous theoretical suggestions, they generally display crystal structures with an increasing degree of symmetry and close-packing upon compression. The theoretical calculations agree well with available experimental data. At very high compression, the theory predicts close-packed structures such as fcc or hcp, or the nearly close-packed bcc structure, for the light actinide metals. A simple canonical band picture is presented to explain in which particular close-packed form these metals will crystallize at ultra-high pressure

  9. Hydrostatic compression of Fe(1-x)O wuestite

    Science.gov (United States)

    Jeanloz, R.; Sato-Sorensen, Y.

    1986-01-01

    Hydrostatic compression measurements on Fe(0.95)O wuestite up to 12 GPa yield a room temperature value for the isothermal bulk modulus of K(ot) = 157 (+ or - 10) GPa at zero pressure. This result is in accord with previous hydrostatic and nonhydrostatic measurements of K(ot) for wuestites of composition 0.89 ≤ Fe/O ≤ 0.95. Dynamic measurements of the bulk modulus by ultrasonic, shock-wave and neutron-scattering experiments tend to yield a larger value: K(ot) approximately 180 GPa. The discrepancy between static and dynamic values cannot be explained by the variation of K(ot) with composition, as has been proposed. This conclusion is based on high-precision compression data and on theoretical models of the effects of defects on elastic constants. Barring serious errors in the published measurements, the available data suggest that wuestite exhibits a volume relaxation under pressure.

  10. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Science.gov (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data are very sensitive and cannot tolerate any illegal change. Analysis based on an illegally changed image could result in a wrong medical decision. Digital watermarking techniques can be used to authenticate images and to detect as well as recover illegal changes made to teleradiology images. Watermarking of medical images with heavy-payload watermarks causes perceptual degradation of the image, which directly affects medical diagnosis. To maintain the perceptual and diagnostic quality of the image during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression of the watermark reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and the image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio. LZW was found to be better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits, and image watermarking with effective tamper detection and lossless recovery.

  11. Comparison of the effectiveness of compression stockings and layer compression systems in venous ulceration treatment

    Science.gov (United States)

    Jawień, Arkadiusz; Cierzniakowska, Katarzyna; Cwajda-Białasik, Justyna; Mościcka, Paulina

    2010-01-01

    Introduction The aim of the research was to compare the dynamics of venous ulcer healing when treated with the use of compression stockings as well as original two- and four-layer bandage systems. Material and methods A group of 46 patients suffering from venous ulcers was studied. This group consisted of 36 (78.3%) women and 10 (21.7%) men aged between 41 and 88 years (the average age was 66.6 years and the median was 67). Patients were randomized into three groups, for treatment with the ProGuide two-layer system, Profore four-layer compression, and with the use of compression stockings class II. In the case of multi-layer compression, compression ensuring 40 mmHg pressure at ankle level was used. Results In all patients, independently of the type of compression therapy, statistically significant changes of ulceration area over time were observed (Student’s t test for matched pairs, p < 0.05). The largest loss of ulceration area in each of the successive measurements was observed in patients treated with the four-layer system – on average 0.63 cm2 per week. The smallest loss of ulceration area was observed in patients using compression stockings – on average 0.44 cm2 per week. However, the observed differences were not statistically significant (Kruskal-Wallis test H = 4.45, p > 0.05). Conclusions A systematic compression therapy, applied with a preliminary pressure of 40 mmHg, is an effective method of conservative treatment of venous ulcers. Compression stockings and prepared systems of multi-layer compression were characterized by similar clinical effectiveness. PMID:22419941

  12. Correlations between quality indexes of chest compression.

    Science.gov (United States)

    Zhang, Feng-Ling; Yan, Li; Huang, Su-Fang; Bai, Xiang-Jun

    2013-01-01

    Cardiopulmonary resuscitation (CPR) is a kind of emergency treatment for cardiopulmonary arrest, and chest compression is the most important and necessary part of CPR. The American Heart Association published the new Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care in 2010 and called for better performance of chest compression practice, especially in compression depth and rate. The current study aimed to explore the relationships between quality indexes of chest compression and to identify the key points in chest compression training and practice. In total, 219 healthcare workers received chest compression training using the Laerdal ACLS advanced life support resuscitation model. The quality indexes of chest compression, including compression hand placement, compression rate, compression depth, and chest wall recoil, as well as self-reported fatigue time, were monitored by the Laerdal Computer Skills and Reporting System. The quality of chest compression was related to the gender of the compressor. The indexes in males, including self-reported fatigue time, the accuracy of compression depth and the compression rate, and the accuracy of compression rate, were higher than those in females. However, the accuracy of chest recoil was higher in females than in males. The quality indexes of chest compression were correlated with each other. The self-reported fatigue time was related to all the indexes except the compression rate. It is necessary to offer CPR training courses regularly. In clinical practice, it might be better to change the practitioner before fatigue, especially for female or weaker practitioners. In training projects, more attention should be paid to the control of compression rate, in order to delay fatigue, guarantee sufficient compression depth and improve the quality of chest compression.

  13. Does the quality of chest compressions deteriorate when the chest compression rate is above 120/min?

    Science.gov (United States)

    Lee, Soo Hoon; Kim, Kyuseok; Lee, Jae Hyuk; Kim, Taeyun; Kang, Changwoo; Park, Chanjong; Kim, Joonghee; Jo, You Hwan; Rhee, Joong Eui; Kim, Dong Hoon

    2014-08-01

    The quality of chest compressions along with defibrillation is the cornerstone of cardiopulmonary resuscitation (CPR), which is known to improve the outcome of cardiac arrest. We aimed to investigate the relationship between the compression rate and other CPR quality parameters including compression depth and recoil. Conventional CPR training for lay rescuers was performed 2 weeks before the 'CPR contest'. CPR Anytime training kits were distributed to respective participants for self-training on their own in their own time. The participants were tested for two-person CPR in pairs. The quantitative and qualitative data regarding the quality of CPR were collected from a standardised check list and SkillReporter, and compared by compression rate. A total of 161 teams consisting of 322 students, including 116 men and 206 women, participated in the CPR contest. The mean depth and rate for chest compression were 49.0±8.2 mm and 110.2±10.2/min. Significantly deeper chest compression depths were noted at rates over 120/min than those at any other rates (47.0±7.4, 48.8±8.4, 52.3±6.7, p=0.008). Chest compression depth was proportional to chest compression rate (r=0.206, p < 0.05). We compared the quality of chest compression, including chest compression depth and chest recoil, by chest compression rate. Further evaluation regarding the upper limit of the chest compression rate is needed to ensure complete full chest wall recoil while maintaining an adequate chest compression depth. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  14. Hardware Implementation of Lossless Adaptive Compression of Data From a Hyperspectral Imager

    Science.gov (United States)

    Keymeulen, Didlier; Aranki, Nazeeh I.; Klimesh, Matthew A.; Bakhshi, Alireza

    2012-01-01

    Efficient onboard data compression can reduce the data volume from hyperspectral imagers on NASA and DoD spacecraft in order to return as much imagery as possible through constrained downlink channels. Lossless compression is important for signature extraction, object recognition, and feature classification capabilities. To provide onboard data compression, a hardware implementation of a lossless hyperspectral compression algorithm was developed using a field programmable gate array (FPGA). The underlying algorithm is the Fast Lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), p. 26 with the modification reported in Lossless, Multi-Spectral Data Compressor for Improved Compression for Pushbroom-Type Instruments (NPO-45473), NASA Tech Briefs, Vol. 32, No. 7 (July 2008) p. 63, which provides improved compression performance for data from pushbroom-type imagers. An FPGA implementation of the unmodified FL algorithm was previously developed and reported in Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System (NPO-46867), NASA Tech Briefs, Vol. 36, No. 5 (May 2012) p. 42. The essence of the FL algorithm is adaptive linear predictive compression using the sign algorithm for filter adaptation. The FL compressor achieves a combination of low complexity and compression effectiveness that exceeds that of state-of-the-art techniques currently in use. The modification changes the predictor structure to tolerate differences in sensitivity of different detector elements, as occurs in pushbroom-type imagers, which are suitable for spacecraft use. The FPGA implementation offers a low-cost, flexible solution compared to a traditional ASIC (application-specific integrated circuit) and can be integrated as an intellectual property (IP) for part of, e.g., a design that manages the instrument interface. The FPGA implementation was benchmarked on the Xilinx
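
    The essence of the FL algorithm, adaptive linear prediction with sign-algorithm adaptation, can be sketched schematically in one dimension as below; the real compressor operates on multispectral samples in integer arithmetic and entropy-codes the residuals, so this is only an illustration of the adaptation rule, with made-up parameters.

    ```python
    import numpy as np

    def sign_lms_residuals(samples, order=3, mu=0.05):
        """Predict each sample from the previous `order` samples, adapting the
        weights with the sign algorithm; the residuals would be entropy coded."""
        w = np.zeros(order)
        residuals = []
        for i in range(order, len(samples)):
            context = samples[i - order:i][::-1]    # most recent sample first
            error = samples[i] - float(w @ context)  # prediction residual
            residuals.append(error)
            w += mu * np.sign(error) * context       # sign-algorithm update
        return np.asarray(residuals)

    # On smooth data the residuals shrink as the weights adapt; the print compares spreads.
    signal = np.sin(np.linspace(0.0, 20.0, 500))
    res = sign_lms_residuals(signal)
    print("signal std:", float(signal.std()), "residual std:", float(res.std()))
    ```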

  15. Subjective evaluation of compressed image quality

    Science.gov (United States)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears different depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we subjectively evaluated the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The evaluated raw data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Also, analysis of variance was used to identify which compression method is statistically better, and at what compression ratio the quality of a compressed image is evaluated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different compression ratios: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  16. Compressibility effects in packed and open tubular gas and supercritical fluid chromatography

    NARCIS (Netherlands)

    Janssen, J.G.M.; Snijders, H.M.J.; Cramers, C.A.; Schoenmakers, P.J.

    1992-01-01

    The influence of the pressure drop on the efficiency and speed of analysis in packed and open tubular supercritical fluid chromatography (SFC) is described: methods previously developed to describe the effects of mobile phase compressibility on the performance of open tubular columns in SFC have been extended

  17. Dependence of compressive strength of green compacts on pressure, density and contact area of powder particles

    International Nuclear Information System (INIS)

    Salam, A.; Akram, M.; Shahid, K.A.; Javed, M.; Zaidi, S.M.

    1994-08-01

    The relationship between green compressive strength and compacting pressure as well as green density has been investigated for uniaxially pressed aluminium powder compacts in the range 0 - 520 MPa. Two linear relationships occurred between compacting pressure and green compressive strength, corresponding to powder compaction stages II and III respectively, the increase in strength with increasing pressure being large during stage II and quite small in stage III. On the basis of both the experimental results and a previous model of cold compaction of powder particles, relationships between green compressive strength and the green density and interparticle contact area of the compacts have been established. (author) 9 figs

  18. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    Science.gov (United States)

    Park, Sang-Sub

    2014-01-01

    The purpose of this study is to determine the difference in chest compression accuracy between the modified chest compression method using a smartphone application and the standardized traditional chest compression method. Of the 70 people who agreed to participate after completing the CPR curriculum, 64 took part (6 were absent). Participants using the modified chest compression method formed the smartphone group (33 people), and those using the standardized chest compression method formed the traditional group (31 people). Both groups used the same practice manikin and evaluation manikin. The smartphone group used applications on two smartphone products (G, i) running the Android and iOS operating systems (OS). The measurements were conducted from September 25th to 26th, 2012. Data were analyzed with the SPSS WIN 12.0 program. As a result, compression depth was significantly greater (p < 0.01) in the traditional group (53.77 mm) than in the smartphone group (48.35 mm). The proportion of proper chest compressions was also higher (p < 0.05) in the traditional group (73.96%) than in the smartphone group (60.51%). As for the awareness of chest compression accuracy, the traditional group (3.83 points) had higher awareness of chest compression accuracy (p < 0.001) than the smartphone group (2.32 points). In an additional single-question survey administered only to the smartphone group, the main reasons given for rating the modified chest compression method negatively were hand-back pain in the rescuer (48.5%) and unstable posture (21.2%).

  19. Modeling fibrous biological tissues with a general invariant that excludes compressed fibers

    Science.gov (United States)

    Li, Kewei; Ogden, Ray W.; Holzapfel, Gerhard A.

    2018-01-01

    Dispersed collagen fibers in fibrous soft biological tissues have a significant effect on the overall mechanical behavior of the tissues. Constitutive modeling of the detailed structure obtained by using advanced imaging modalities has been investigated extensively in the last decade. In particular, our group has previously proposed a fiber dispersion model based on a generalized structure tensor. However, the fiber tension-compression switch described in that study is unable to exclude compressed fibers within a dispersion and the model requires modification so as to avoid some unphysical effects. In a recent paper we have proposed a method which avoids such problems, but in this present study we introduce an alternative approach by using a new general invariant that only depends on the fibers under tension so that compressed fibers within a dispersion do not contribute to the strain-energy function. We then provide expressions for the associated Cauchy stress and elasticity tensors in a decoupled form. We have also implemented the proposed model in a finite element analysis program and illustrated the implementation with three representative examples: simple tension and compression, simple shear, and unconfined compression on articular cartilage. We have obtained very good agreement with the analytical solutions that are available for the first two examples. The third example shows the efficacy of the fibrous tissue model in a larger scale simulation. For comparison we also provide results for the three examples with the compressed fibers included, and the results are completely different. If the distribution of collagen fibers is such that it is appropriate to exclude compressed fibers then such a model should be adopted.
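
    For context, the usual tension-compression switch in invariant-based fiber models has the form below; the paper's contribution is a new general invariant that refines this idea, so the expressions here are only the familiar textbook version, not the authors' new invariant.

    ```latex
    % Squared stretch of a fiber direction m under the right Cauchy-Green tensor C,
    % and the usual switch that lets only stretched fibers store energy:
    I_4(\mathbf{m}) = \mathbf{m} \cdot \mathbf{C}\,\mathbf{m},
    \qquad
    \Psi_{\mathrm{fib}} =
    \begin{cases}
    \psi(I_4), & I_4 > 1 \quad \text{(fiber in tension)},\\
    0, & I_4 \le 1 \quad \text{(compressed fiber excluded)}.
    \end{cases}
    ```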

  20. Thermal analysis of near-isothermal compressed gas energy storage system

    International Nuclear Information System (INIS)

    Odukomaiya, Adewale; Abu-Heiba, Ahmad; Gluesenkamp, Kyle R.; Abdelaziz, Omar; Jackson, Roderick K.; Daniel, Claus; Graham, Samuel; Momen, Ayyoub M.

    2016-01-01

    Highlights: • A novel, high-efficiency, scalable, near-isothermal, energy storage system is introduced. • A comprehensive analytical physics-based model for the system is presented. • Efficiency improvement is achieved via heat transfer enhancement and use of waste heat. • Energy storage roundtrip efficiency (RTE) of 82% and energy density of 3.59 MJ/m^3 is shown. - Abstract: Due to the increasing generation capacity of intermittent renewable electricity sources and an electrical grid ill-equipped to handle the mismatch between electricity generation and use, the need for advanced energy storage technologies will continue to grow. Currently, pumped-storage hydroelectricity and compressed air energy storage are used for grid-scale energy storage, and batteries are used at smaller scales. However, prospects for expansion of these technologies suffer from geographic limitations (pumped-storage hydroelectricity and compressed air energy storage), low roundtrip efficiency (compressed air energy storage), and high cost (batteries). Furthermore, pumped-storage hydroelectricity and compressed air energy storage are challenging to scale-down, while batteries are challenging to scale-up. In 2015, a novel compressed gas energy storage prototype system was developed at Oak Ridge National Laboratory. In this paper, a near-isothermal modification to the system is proposed. In common with compressed air energy storage, the novel storage technology described in this paper is based on air compression/expansion. However, several novel features lead to near-isothermal processes, higher efficiency, greater system scalability, and the ability to site a system anywhere. The enabling features are utilization of hydraulic machines for expansion/compression, above-ground pressure vessels as the storage medium, spray cooling/heating, and waste-heat utilization. The base configuration of the novel storage system was introduced in a previous paper. This paper describes the results
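
    For orientation, the thermodynamic motivation for near-isothermal operation can be summarized by the ideal-gas isothermal compression work together with a generic roundtrip-efficiency definition; this is a textbook sketch, not the paper's analytical model.

    ```latex
    % Ideal-gas work of isothermal compression from (P_1, V_1) to P_2, the limit
    % that spray cooling/heating pushes the real process toward, and a generic
    % roundtrip-efficiency definition:
    W_{\mathrm{iso}} = P_1 V_1 \ln\frac{P_2}{P_1},
    \qquad
    \mathrm{RTE} = \frac{E_{\mathrm{discharge}}}{E_{\mathrm{charge}}}.
    ```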

  1. Wellhead compression

    Energy Technology Data Exchange (ETDEWEB)

    Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)

    2012-07-01

    Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure resulting in gas velocities above the critical velocity needed to surface water, oil and condensate regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation and operation of the equipment. Key challenges and suggested equipment features designed to combat those challenges and successful case histories throughout Latin America are discussed below.(author)

  2. Effects of tensile and compressive stresses on irradiation-induced swelling in AISI 316

    International Nuclear Information System (INIS)

    Lauritzen, T.; Bell, W.L.; Konze, G.M.; Rosa, J.M.; Vaidyanathan, S.; Garner, F.A.

    1985-05-01

    The results of two recent experiments indicate that the current perception of stress-affected swelling needs revision. It appears that compressive stresses do not delay swelling as previously modeled but actually accelerate swelling at a rate comparable to that induced by tensile stresses

  3. A Novel Approach to Discovery and Access to Solar Data in the Petabyte Age

    Science.gov (United States)

    Mueller, Daniel; Dimitoglou, G.; Hughitt, V. K.; Ireland, J.; Wamsler, B.; Fleck, B.

    2009-05-01

    Space missions generate an ever-growing amount of data, as impressively highlighted by SDO's expected data rate of 1.4 TByte/day. In order to fully exploit their data, scientists need to be able to browse and visualize many different data products spanning a large range of physical length and time scales. So far, the tools available to the scientific community either require downloading all potentially relevant data sets beforehand in their entirety or provide only movies with a fixed resolution and cadence. To facilitate browsing and analysis of complex time-dependent data sets from multiple sources, we are developing JHelioviewer, a JPEG 2000-based visualization and discovery infrastructure for solar image data. Together with its web-based counterpart helioviewer.org, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and allows users to search related event data bases. The user interface for JHelioviewer is a multi-platform Java client that can both communicate with a remote server via the JPEG 2000 interactive protocol JPIP and open local data. The random code stream access of JPIP minimizes data transfer and can encapsulate meta data as well as multiple image channels in one data stream. This presentation will illustrate some of the features of JHelioviewer and the advantages of JPEG 2000 as a new data compression standard.

  4. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2013-01-01

    We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  5. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2015-01-01

    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  6. Generalized massive optimal data compression

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
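
    For reference, the central objects described in the abstract can be written compactly as below; the second expression is the familiar linear special case for Gaussian data with parameter-dependent mean and fixed covariance, quoted from standard likelihood theory rather than reproduced from the paper.

    ```latex
    % Compression of the data d to one summary per parameter: the score at a
    % fiducial point \theta_*
    \mathbf{t} = \nabla_{\theta} \ln \mathcal{L}(\mathbf{d}\mid\theta)\,\big|_{\theta_*}.
    % For Gaussian data with parameter-dependent mean \mu(\theta) and fixed
    % covariance C this reduces to the familiar linear compression
    t_{\alpha} = \frac{\partial \boldsymbol{\mu}^{T}}{\partial \theta_{\alpha}}\,
                 \mathbf{C}^{-1} \left( \mathbf{d} - \boldsymbol{\mu} \right).
    ```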

  7. Numerical study of the effects of carbon felt electrode compression in all-vanadium redox flow batteries

    International Nuclear Information System (INIS)

    Oh, Kyeongmin; Won, Seongyeon; Ju, Hyunchul

    2015-01-01

    Highlights: • The effects of electrode compression on VRFB are examined. • The electronic conductivity is improved when the compression is increased. • The kinetic losses are similar regardless of the electrode compression level. • The vanadium distribution is more uniform within highly compressed electrode. - Abstract: The porous carbon felt electrode is one of the major components of all-vanadium redox flow batteries (VRFBs). These electrodes are necessarily compressed during stack assembly to prevent liquid electrolyte leakage and diminish the interfacial contact resistance among VRFB stack components. The porous structure and properties of carbon felt electrodes have a considerable influence on the electrochemical reactions, transport features, and cell performance. Thus, a numerical study was performed herein to investigate the effects of electrode compression on the charge and discharge behavior of VRFBs. A three-dimensional, transient VRFB model developed in a previous study was employed to simulate VRFBs under two degrees of electrode compression (10% vs. 20%). The effects of electrode compression were precisely evaluated by analysis of the solid/electrolyte potential profiles, transfer current density, and vanadium concentration distributions, as well as the overall charge and discharge performance. The model predictions highlight the beneficial impact of electrode compression; the electronic conductivity of the carbon felt electrode is the main parameter improved by electrode compression, leading to reduction in ohmic loss through the electrodes. In contrast, the kinetics of the redox reactions and transport of vanadium species are not significantly altered by the degree of electrode compression (10% to 20%). This study enhances the understanding of electrode compression effects and demonstrates that the present VRFB model is a valuable tool for determining the optimal design and compression of carbon felt electrodes in VRFBs.

  8. Soil Compressibility Models for a Wide Stress Range

    KAUST Repository

    Chong, Song-Hun

    2016-03-03

    Soil compressibility models with physically correct asymptotic void ratios are required to analyze situations that involve a wide stress range. Previously suggested models and other functions are adapted to satisfy asymptotic void ratios at low and high stress levels; all updated models involve four parameters. Compiled consolidation data for remolded and natural clays are used to test the models and to develop correlations between model parameters and index properties. Models can adequately fit soil compression data for a wide range of stresses and soil types; in particular, models that involve a power of the effective stress (σ′)^β display higher flexibility to capture the brittle response of some natural soils. The use of a single continuous function avoids numerical discontinuities or the need for ad hoc procedures to determine the yield stress. The tangent stiffness (readily computed for all models) should not be mistaken for the small-strain constant-fabric stiffness. © 2016 American Society of Civil Engineers.

  9. Buckling a Semiflexible Polymer Chain under Compression

    Directory of Open Access Journals (Sweden)

    Ekaterina Pilyugina

    2017-03-01

    Full Text Available Instability and structural transitions arise in many important problems involving dynamics at molecular length scales. Buckling of an elastic rod under a compressive load offers a useful general picture of such a transition. However, the existing theoretical description of buckling is applicable in the load response of macroscopic structures, only when fluctuations can be neglected, whereas membranes, polymer brushes, filaments, and macromolecular chains undergo considerable Brownian fluctuations. We analyze here the buckling of a fluctuating semiflexible polymer experiencing a compressive load. Previous works rely on approximations to the polymer statistics, resulting in a range of predictions for the buckling transition that disagree on whether fluctuations elevate or depress the critical buckling force. In contrast, our theory exploits exact results for the statistical behavior of the worm-like chain model yielding unambiguous predictions about the buckling conditions and nature of the buckling transition. We find that a fluctuating polymer under compressive load requires a larger force to buckle than an elastic rod in the absence of fluctuations. The nature of the buckling transition exhibits a marked change from being distinctly second order in the absence of fluctuations to being a more gradual, compliant transition in the presence of fluctuations. We analyze the thermodynamic contributions throughout the buckling transition to demonstrate that the chain entropy favors the extended state over the buckled state, providing a thermodynamic justification of the elevated buckling force.
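
    The fluctuation-free benchmark referred to above is classical Euler buckling; for a rod of bending rigidity kappa and contour length L with hinged ends (an assumed boundary condition), the critical force is:

    ```latex
    % Critical compressive force for a fluctuation-free elastic rod with hinged ends
    % (the classical benchmark); \kappa is the bending rigidity, L the contour length.
    F_{c} = \frac{\pi^{2} \kappa}{L^{2}},
    \qquad
    \kappa = k_{B} T\, \ell_{p} \quad \text{(worm-like chain bending rigidity)}.
    ```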

  10. Compression Ratio Ion Mobility Programming (CRIMP) Accumulation and Compression of Billions of Ions for Ion Mobility-Mass Spectrometry Using Traveling Waves in Structures for Lossless Ion Manipulations (SLIM)

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Liulin; Garimella, Venkata BS; Hamid, Ahmed M.; Webb, Ian K.; Attah, Isaac K.; Norheim, Randolph V.; Prost, Spencer A.; Zheng, Xueyun; Sandoval, Jeremy A.; Baker, Erin M.; Ibrahim, Yehia M.; Smith, Richard D.

    2017-05-25

    We report on the implementation of a traveling wave (TW) based compression ratio ion mobility programming (CRIMP) approach within Structures for Lossless Ion Manipulations (SLIM) that enables both greatly enlarged trapped ion charge capacities and also their subsequent efficient compression for use in ion mobility (IM) separations. Ion accumulation is conducted in a long serpentine path TW SLIM region after which CRIMP allows the large ion populations to be ‘squeezed’. The compression process occurs at an interface between two SLIM regions, one operating conventionally and the second having an intermittently pausing or ‘stuttering’ TW, allowing the contents of multiple bins of ions from the first region to be merged into a single bin in the second region. In this initial work stationary voltages in the second region were used to block ions from exiting the first (trapping) region, and the resumption of TWs in the second region allows ions to exit, and the population to also be compressed if CRIMP is applied. In our initial evaluation we show that the number of charges trapped for a 40 s accumulation period was ~5×10^9, more than two orders of magnitude greater than the previously reported charge capacity using an ion funnel trap. We also show that over 1×10^9 ions can be accumulated with high efficiency in the present device, and that the extent of subsequent compression is only limited by the space charge capacity of the trapping region. Lower compression ratios allow increased IM peak heights without significant loss of signal, while excessively large compression ratios can lead to ion losses and other artifacts. Importantly, we show that extended ion accumulation in conjunction with CRIMP and multiple passes provides the basis for a highly desirable combination of ultra-high sensitivity and ultra-high resolution IM separations using SLIM.

  11. 29 CFR 1917.154 - Compressed air.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  12. Assessing diabetic retinopathy using two-field digital photography and the influence of JPEG-compression

    NARCIS (Netherlands)

    Stellingwerf, C; Hardus, PLLJ; Hooymans, JMM

    Objective: To study the effectiveness of two digital 50° photographic fields per eye, stored compressed or integrally, in the grading of diabetic retinopathy, in comparison to 35-mm colour slides. Subjects and methods: Two-field digital non-stereoscopic retinal photographs and two-field 35-mm

  13. Lossy compression of quality scores in genomic data.

    Science.gov (United States)

    Cánovas, Rodrigo; Moffat, Alistair; Turpin, Andrew

    2014-08-01

    Next-generation sequencing technologies are revolutionizing medicine. Data from sequencing technologies are typically represented as a string of bases, an associated sequence of per-base quality scores and other metadata, and in aggregate can require a large amount of space. The quality scores show how accurate the bases are with respect to the sequencing process, that is, how confident the sequencer is of having called them correctly, and are the largest component in datasets in which they are retained. Previous research has examined how to store sequences of bases effectively; here we add to that knowledge by examining methods for compressing quality scores. The quality values originate in a continuous domain, and so if a fidelity criterion is introduced, it is possible to introduce flexibility in the way these values are represented, allowing lossy compression over the quality score data. We present existing compression options for quality score data, and then introduce two new lossy techniques. Experiments measuring the trade-off between compression ratio and information loss are reported, including quantifying the effect of lossy representations on a downstream application that carries out single nucleotide polymorphism and insert/deletion detection. The new methods are demonstrably superior to other techniques when assessed against the spectrum of possible trade-offs between storage required and fidelity of representation. An implementation of the methods described here is available at https://github.com/rcanovas/libCSAM. rcanovas@student.unimelb.edu.au Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
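
    As a toy illustration of the lossy idea, quality scores can be binned coarsely before entropy coding; the fixed-width binning below is generic and is not one of the two techniques introduced in the paper.

    ```python
    import numpy as np

    def bin_quality_scores(phred, bin_width=8):
        """Map each Phred quality score to the midpoint of a fixed-width bin.
        Coarser bins -> fewer distinct symbols -> better compression, more loss."""
        phred = np.asarray(phred)
        bins = phred // bin_width
        return (bins * bin_width + bin_width // 2).astype(phred.dtype)

    scores = np.array([2, 11, 17, 25, 33, 38, 40])
    print(bin_quality_scores(scores))   # [ 4 12 20 28 36 36 44]
    ```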

  14. On use of image quality metrics for perceptual blur modeling: image/video compression case

    Science.gov (United States)

    Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn

    2018-02-01

    Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed from a nonlinear degradation process. Previous research relying on image quality metrics (IQM) methods, which heuristically estimate perceived MTF has supported that an average perceived MTF can be used to model some types of degradation such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x.264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.

  15. Application specific compression : final report.

    Energy Technology Data Exchange (ETDEWEB)

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data gathering sensors, comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high frequency, low amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
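
    The coefficient-zeroing step described above can be sketched with the PyWavelets package (assumed installed); this is a hedged, generic keep-the-largest-detail-coefficients variant with illustrative parameters, not the exact thresholding rule used in the study.

    ```python
    import numpy as np
    import pywt  # PyWavelets, assumed installed

    def wavelet_threshold(image, wavelet="db4", level=3, keep_fraction=0.05):
        """Keep only the largest detail coefficients and zero the rest."""
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        detail = np.concatenate([np.abs(band).ravel()
                                 for bands in coeffs[1:] for band in bands])
        cutoff = np.quantile(detail, 1.0 - keep_fraction)
        thresholded = [coeffs[0]]                  # approximation band kept intact
        for bands in coeffs[1:]:
            thresholded.append(tuple(pywt.threshold(b, cutoff, mode="hard")
                                     for b in bands))
        return pywt.waverec2(thresholded, wavelet)

    # Smooth test image: most detail coefficients are near zero, so little is lost.
    x = np.linspace(0.0, 1.0, 128)
    img = np.outer(np.sin(4 * np.pi * x), np.cos(3 * np.pi * x))
    rec = wavelet_threshold(img)[:128, :128]
    print("max abs reconstruction error:", float(np.abs(img - rec).max()))
    ```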

  16. Compressibility of the protein-water interface

    Science.gov (United States)

    Persson, Filip; Halle, Bertil

    2018-06-01

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (˜0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ˜45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than in
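
    The link between volume fluctuations and compressibility that underlies this decomposition is the standard constant-pressure ensemble relation, quoted here for reference:

    ```latex
    % Isothermal compressibility from volume fluctuations (NPT ensemble):
    \kappa_T = -\frac{1}{\langle V \rangle}
                \left( \frac{\partial \langle V \rangle}{\partial P} \right)_T
             = \frac{\langle \delta V^{2} \rangle}{k_{B} T\, \langle V \rangle},
    \qquad
    \delta V = V - \langle V \rangle .
    ```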

  17. Compressibility of the protein-water interface.

    Science.gov (United States)

    Persson, Filip; Halle, Bertil

    2018-06-07

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (∼0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ∼45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than

  18. Cosmological Particle Data Compression in Practice

    Science.gov (United States)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from focusing only on compression rates to including run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
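
    A hedged illustration of how such performance indicators can be gathered, using only standard-library codecs (zlib, LZMA) on a synthetic float32 particle array; the Blosc, FPZIP and ZFP codecs evaluated in the study would require additional packages:

```python
# Sketch of a compression benchmark (not the paper's harness): measure compression
# ratio and throughput for lossless codecs on synthetic "particle" positions.
import lzma
import time
import zlib
import numpy as np

def benchmark(name, compress, raw: bytes):
    t0 = time.perf_counter()
    packed = compress(raw)
    dt = time.perf_counter() - t0
    ratio = len(raw) / len(packed)
    throughput = len(raw) / dt / 2**20  # MiB/s
    print(f"{name:>6}: ratio {ratio:5.2f}, throughput {throughput:8.1f} MiB/s")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    particles = rng.normal(size=(1_000_000, 3)).astype(np.float32)  # x, y, z positions
    raw = particles.tobytes()
    benchmark("zlib", lambda b: zlib.compress(b, 6), raw)
    benchmark("lzma", lambda b: lzma.compress(b, preset=1), raw)
```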

  19. EFFECTIVENESS OF ADJUVANT USE OF POSTERIOR MANUAL COMPRESSION WITH GRADED COMPRESSION IN THE SONOGRAPHIC DIAGNOSIS OF ACUTE APPENDICITIS

    Directory of Open Access Journals (Sweden)

    Senthilnathan V

    2018-01-01

    Full Text Available BACKGROUND Diagnosing appendicitis by graded compression ultrasonogram is a difficult task because of limiting factors such as operator-dependent technique, retrocaecal location of the appendix and patient obesity. The posterior manual compression technique visualizes the appendix better in the grey-scale ultrasonogram. The aim of this study is to determine the accuracy of ultrasound in detecting or excluding acute appendicitis and to evaluate the usefulness of the adjuvant use of the posterior manual compression technique in visualization of the appendix and in the diagnosis of acute appendicitis. MATERIALS AND METHODS This prospective study involved a total of 240 patients of all age groups and both sexes. All these patients underwent USG for suspected appendicitis. Ultrasonography was performed with transverse and longitudinal graded compression sonography. If the appendix was not visualized on graded compression sonography, the posterior manual compression technique was used to further improve the detection of the appendix. RESULTS The vermiform appendix was visualized in 185 of 240 patients (77.1%) with graded compression alone. The 55 of 240 patients whose appendix could not be visualized by graded compression alone were examined with graded compression followed by the posterior manual compression technique; the appendix was visualized in 43 of these patients (78.2%) and could not be visualized in the remaining 12 patients (21.8%). CONCLUSION The combined method of graded compression with the posterior manual compression technique is better than the graded compression technique alone in diagnostic accuracy and in the detection rate of the vermiform appendix.

  20. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics
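
    To make the rate-distortion side of the analogy concrete, the following textbook example (not taken from the record) gives R(D) for a binary memoryless source under Hamming distortion; in the analogy above, the free-energy difference plays the role of R(D) and the contracting force that of its derivative:

```latex
% Textbook example (not from the paper): rate-distortion function of a binary
% memoryless source X ~ Bernoulli(p) under Hamming distortion.
\[
  R(D) =
  \begin{cases}
    h_b(p) - h_b(D), & 0 \le D \le \min(p,\,1-p),\\[2pt]
    0, & D > \min(p,\,1-p),
  \end{cases}
  \qquad
  h_b(x) = -x\log_2 x - (1-x)\log_2(1-x).
\]
% In the analogy described above, the free-energy difference of the contracted
% chain is proportional to R(D), and the contracting force to dR/dD.
```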

  1. Nonlinear viscoelasticity of pre-compressed layered polymeric composite under oscillatory compression

    KAUST Repository

    Xu, Yangguang

    2018-05-03

    Describing the nonlinear viscoelastic properties of polymeric composites subjected to dynamic loading is essential for the development of practical applications of such materials. An efficient and easy method to analyze nonlinear viscoelasticity remains elusive because the dynamic moduli (storage modulus and loss modulus) are not very convenient when the material falls into the nonlinear viscoelastic range. In this study, we utilize two methods, Fourier transform and geometrical nonlinear analysis, to quantitatively characterize the nonlinear viscoelasticity of a pre-compressed layered polymeric composite under oscillatory compression. We discuss the influences of pre-compression, dynamic loading, and the inner structure of the polymeric composite on the nonlinear viscoelasticity. Furthermore, we reveal the nonlinear viscoelastic mechanism by combining these results with other experimental results from quasi-static compressive tests and microstructural analysis. From a methodology standpoint, it is demonstrated that both Fourier transform and geometrical nonlinear analysis are efficient tools for analyzing the nonlinear viscoelasticity of a layered polymeric composite. From a material standpoint, we consequently posit that the dynamic nonlinear viscoelasticity of polymeric composites with complicated inner structures can also be well characterized using these methods.

  2. Effect of compressibility on the hypervelocity penetration

    Science.gov (United States)

    Song, W. J.; Chen, X. W.; Chen, P.

    2018-02-01

    We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. Meanwhile, we define different instances of penetration efficiency in various modified models and compare these penetration efficiencies to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility in different metallic rod-target combinations, we construct three cases, i.e., the penetrations by the more compressible rod into the less compressible target, rod into the analogously compressible target, and the less compressible rod into the more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. It indicates that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod/target has larger volumetric strain and higher internal energy. Both the larger volumetric strain and higher strength enhance the penetration or anti-penetration ability. On the other hand, the higher internal energy weakens the penetration or anti-penetration ability. The two trends conflict, but the volumetric strain dominates in the variation of the penetration efficiency, which would not approach the hydrodynamic limit if the rod and target are not analogously compressible. However, if the compressibility of the rod and target is analogous, it has little effect on the penetration efficiency.

  3. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    Science.gov (United States)

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs in both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementations. Specifically, two different approaches for the construction of sparse measurement matrices, i.e., the deterministic quasi-cyclic array code (QCAC) matrix and the sparse random binary matrix (SRBM), are exploited. We demonstrate that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architecture designs are proposed for the QCAC-CS and SRBM encoders with reduced area and total power consumption.
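
    A minimal sketch of compressive-sensing encoding with a sparse binary measurement matrix; the matrix below is a generic random construction for illustration, not the deterministic QCAC matrix proposed in the paper:

```python
# Illustrative sketch, not the paper's QCAC construction: compressed-sensing
# encoding y = Phi @ x with a sparse random binary measurement matrix, which
# keeps the encoder to a handful of additions per measurement.
import numpy as np

def sparse_binary_matrix(m, n, ones_per_column=3, seed=0):
    """Each column gets a fixed, small number of ones placed at random rows."""
    rng = np.random.default_rng(seed)
    phi = np.zeros((m, n), dtype=np.int8)
    for col in range(n):
        rows = rng.choice(m, size=ones_per_column, replace=False)
        phi[rows, col] = 1
    return phi

if __name__ == "__main__":
    n, m = 256, 64                       # signal length, number of measurements
    phi = sparse_binary_matrix(m, n)
    x = np.zeros(n)
    x[[10, 87, 200]] = [1.5, -2.0, 0.7]  # a sparse "neural" signal
    y = phi @ x                          # encoder output: only additions needed
    print("measurements:", y.shape, "nonzeros in Phi:", int(phi.sum()))
```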

  4. FRESCO: Referential compression of highly similar sequences.

    Science.gov (United States)

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios way beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
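
    A toy sketch of the referential idea (not the FRESCO implementation): the target sequence is stored as copy ranges from the reference plus literal insertions, here found with the standard-library difflib:

```python
# Minimal referential compression sketch: encode a target sequence as copy-ranges
# from a reference plus literals. difflib is used only as a convenient matcher.
import difflib

def ref_compress(reference: str, target: str):
    ops = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, reference, target).get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))           # take reference[i1:i2]
        elif tag in ("replace", "insert"):
            ops.append(("lit", target[j1:j2]))     # store the differing bases verbatim
        # 'delete' needs no output: the reference span is simply skipped
    return ops

def ref_decompress(reference: str, ops):
    out = []
    for op in ops:
        out.append(reference[op[1]:op[2]] if op[0] == "copy" else op[1])
    return "".join(out)

if __name__ == "__main__":
    ref = "ACGTACGTACGTTTGACCA"
    tgt = "ACGTACCTACGTTTGAACCA"   # one substitution, one insertion vs. the reference
    ops = ref_compress(ref, tgt)
    assert ref_decompress(ref, ops) == tgt
    print(ops)
```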

  5. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications

    Directory of Open Access Journals (Sweden)

    Khairi Nor Asilah

    2017-01-01

    Full Text Available An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is the primary cause of power consumption, some researchers have proposed compression algorithms with the purpose of overcoming this particular problem. Several data compression algorithms from previous reference papers are discussed in this paper. The descriptions of the compression algorithms in the reference papers were collected and summarized in a table. From the analysis, the MAS compression algorithm was selected as the project prototype due to its high potential for meeting the project requirements. It also produced better performance regarding energy saving, memory usage, and data transmission efficiency. This method is also suitable for implementation in WSN. The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.

  6. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications

    Science.gov (United States)

    Asilah Khairi, Nor; Bahari Jambek, Asral

    2017-11-01

    An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is the primary cause of power consumption, some researchers have proposed compression algorithms with the purpose of overcoming this particular problem. Several data compression algorithms from previous reference papers are discussed in this paper. The descriptions of the compression algorithms in the reference papers were collected and summarized in a table. From the analysis, the MAS compression algorithm was selected as the project prototype due to its high potential for meeting the project requirements. It also produced better performance regarding energy saving, memory usage, and data transmission efficiency. This method is also suitable for implementation in WSN. The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.

  7. Comparing biological networks via graph compression

    Directory of Open Access Journals (Sweden)

    Hayashida Morihiro

    2010-09-01

    Full Text Available Abstract Background Comparison of various kinds of biological data is one of the main problems in bioinformatics and systems biology. Data compression methods have been applied to comparison of large sequence data and protein structure data. Since it is still difficult to compare global structures of large biological networks, it is reasonable to try to apply data compression methods to comparison of biological networks. In existing compression methods, the uniqueness of compression results is not guaranteed because there is some ambiguity in selection of overlapping edges. Results This paper proposes novel efficient methods, CompressEdge and CompressVertices, for comparing large biological networks. In the proposed methods, an original network structure is compressed by iteratively contracting identical edges and sets of connected edges. Then, the similarity of two networks is measured by a compression ratio of the concatenated networks. The proposed methods are applied to comparison of metabolic networks of several organisms, H. sapiens, M. musculus, A. thaliana, D. melanogaster, C. elegans, E. coli, S. cerevisiae, and B. subtilis, and are compared with an existing method. These results suggest that our methods can efficiently measure the similarities between metabolic networks. Conclusions Our proposed algorithms, which compress node-labeled networks, are useful for measuring the similarity of large biological networks.
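
    A hedged sketch of the compression-based similarity idea using a generic codec (zlib) and a normalized-compression-distance-style score; it illustrates the principle only and is not the CompressEdge or CompressVertices algorithm:

```python
# Illustration of compression-based network comparison (not the paper's method):
# two edge lists are serialized and their similarity is estimated from how well
# their concatenation compresses relative to the individual networks.
import zlib

def serialize(edges):
    # Canonical form: sorted, newline-separated "u v" pairs.
    return "\n".join(f"{u} {v}" for u, v in sorted(edges)).encode()

def compression_distance(edges_a, edges_b):
    a, b = serialize(edges_a), serialize(edges_b)
    ca, cb = len(zlib.compress(a, 9)), len(zlib.compress(b, 9))
    cab = len(zlib.compress(a + b"\n" + b, 9))
    # Normalized compression distance: near 0 for very similar inputs, near 1 for unrelated ones.
    return (cab - min(ca, cb)) / max(ca, cb)

if __name__ == "__main__":
    net1 = {("glucose", "g6p"), ("g6p", "f6p"), ("f6p", "f16bp")}
    net2 = {("glucose", "g6p"), ("g6p", "f6p"), ("f6p", "pyruvate")}
    print("distance:", round(compression_distance(net1, net2), 3))
```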

  8. Fixed-Rate Compressed Floating-Point Arrays.

    Science.gov (United States)

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.

  9. Development of 1D Liner Compression Code for IDL

    Science.gov (United States)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed at each time step in sequence to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table lookup approach. A commercial low-frequency electromagnetic field solver, ANSYS Maxwell 3D, is used to solve the magnetic field profile for the static liner condition at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with the results from a commercial explicit dynamics solver, ANSYS Explicit Dynamics, and a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.

  10. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix; Gregson, James; Wetzstein, Gordon; Raskar, Ramesh; Heidrich, Wolfgang

    2014-01-01

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  11. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix

    2014-06-22

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  12. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    Science.gov (United States)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

    Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility that compression directly influences. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting and low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). As the main supporting assumption, it has been accepted that the content can be compressed as long as clinicians are not able to sense a loss of video diagnostic fidelity (a visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests toward their usability in medical video libraries. Subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For objective tests, two metrics (hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.

  13. Compression experiments on the TOSKA tokamak

    International Nuclear Information System (INIS)

    Cima, G.; McGuire, K.M.; Robinson, D.C.; Wootton, A.J.

    1980-10-01

    Results from minor radius compression experiments on a tokamak plasma in TOSCA are reported. The compression is achieved by increasing the toroidal field up to twice its initial value in 200μs. Measurements show that particles and magnetic flux are conserved. When the initial energy confinement time is comparable with the compression time, energy gains are greater than for an adiabatic change of state. The total beta value increases. Central beta values approximately 3% are measured when a small major radius compression is superimposed on a minor radius compression. Magnetic field fluctuations are affected: both the amplitude and period decrease. Starting from low energy confinement times, approximately 200μs, increases in confinement times up to approximately 1 ms are measured. The increase in plasma energy results from a large reduction in the power losses during the compression. When the initial energy confinement time is much longer than the compression time, the parameter changes are those expected for an adiabatic change of state. (author)

  14. Compressive Sensing in Communication Systems

    DEFF Research Database (Denmark)

    Fyhn, Karsten

    2013-01-01

    The need for cheaper, smarter and more energy efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal ... acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what ... with using compressive sensing in communication systems. The main contribution of this thesis is two-fold: 1) a new compressive sensing hardware structure for spread spectrum signals, which is simpler than the current state-of-the-art, and 2) a range of algorithms for parameter estimation for the class ...

  15. Building indifferentiable compression functions from the PGV compression functions

    DEFF Research Database (Denmark)

    Gauravaram, P.; Bagheri, Nasour; Knudsen, Lars Ramkilde

    2016-01-01

    Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block cipher based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black, Rogaway and Shrimpton formally proved this result in the ideal cipher model. However, in the indifferentiability security framework introduced by Maurer, Renner and Holenstein, all these 12 schemes are easily differentiable from a fixed input-length random oracle (FIL-RO) even when their underlying block ...

  16. CEPRAM: Compression for Endurance in PCM RAM

    OpenAIRE

    González Alberquilla, Rodrigo; Castro Rodríguez, Fernando; Piñuel Moreno, Luis; Tirado Fernández, Francisco

    2017-01-01

    We deal with the endurance problem of Phase Change Memories (PCM) by proposing Compression for Endurance in PCM RAM (CEPRAM), a technique to elongate the lifespan of PCM-based main memory through compression. We introduce a total of three compression schemes based on already existent schemes, but targeting compression for PCM-based systems. We do a two-level evaluation. First, we quantify the performance of the compression, in terms of compressed size, bit-flips and how they are affected by e...

  17. Evaluation of a new image compression technique

    International Nuclear Information System (INIS)

    Algra, P.R.; Kroon, H.M.; Noordveld, R.B.; DeValk, J.P.J.; Seeley, G.W.; Westerink, P.H.

    1988-01-01

    The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in the diagnostic contents between originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish compressed versions from original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen

  18. The Distinction of Hot Herbal Compress, Hot Compress, and Topical Diclofenac as Myofascial Pain Syndrome Treatment.

    Science.gov (United States)

    Boonruab, Jurairat; Nimpitakpong, Netraya; Damjuti, Watchara

    2018-01-01

    This randomized controlled trial aimed to investigate the differences in outcome after treatment with hot herbal compress, hot compress, and topical diclofenac. The participants were equally divided into three groups receiving hot herbal compress, hot compress, or topical diclofenac, with the topical diclofenac group serving as the control group. After the treatment courses, the Visual Analog Scale and the 36-Item Short Form Health Survey were used to establish the level of pain intensity and the quality of life, respectively. In addition, cervical range of motion and pressure pain threshold were also examined to identify the effects on motion. All treatments showed a significantly decreased level of pain intensity and an increased cervical range of motion, while the intervention groups outperformed the topical diclofenac group in pressure pain threshold and quality of life. In summary, hot herbal compress holds promise as an efficacious treatment comparable to hot compress and topical diclofenac.

  19. Compression of the digitized X-ray images

    International Nuclear Information System (INIS)

    Terae, Satoshi; Miyasaka, Kazuo; Fujita, Nobuyuki; Takamura, Akio; Irie, Goro; Inamura, Kiyonari.

    1987-01-01

    Medical images occupy an increasing amount of storage space in hospitals, yet they are not easily accessed. Thus, a suitable data filing system and precise data compression will be necessary. Image quality was evaluated before and after image data compression, using a local filing system (MediFile 1000, NEC Co.) and forty-seven modes of compression parameters. For this study, X-ray images of 10 plain radiographs and 7 contrast examinations were digitized using the CCD-sensor film reader of the MediFile 1000. Those images were compressed into forty-seven kinds of image data, saved on an optical disc, and then reconstructed. Each reconstructed image was compared with the non-compressed images with respect to several regions of interest by four radiologists. Compression and decompression of radiological images were performed promptly by employing the local filing system. Image quality was affected much more by the ratio of data compression than by the mode of the parameter itself. In other words, the higher the compression ratio became, the worse the image quality was. However, image quality was not significantly degraded until the compression ratio was about 15:1 on plain radiographs and about 8:1 on contrast studies. Image compression by this technique will be acceptable for diagnostic radiology. (author)

  20. Introduction to compressible fluid flow

    CERN Document Server

    Oosthuizen, Patrick H

    2013-01-01

    Introduction; The Equations of Steady One-Dimensional Compressible Flow; Some Fundamental Aspects of Compressible Flow; One-Dimensional Isentropic Flow; Normal Shock Waves; Oblique Shock Waves; Expansion Waves - Prandtl-Meyer Flow; Variable Area Flows; Adiabatic Flow with Friction; Flow with Heat Transfer; Linearized Analysis of Two-Dimensional Compressible Flows; Hypersonic and High-Temperature Flows; High-Temperature Gas Effects; Low-Density Flows; Bibliography; Appendices

  1. A comparative experimental study on engine operating on premixed charge compression ignition and compression ignition mode

    Directory of Open Access Journals (Sweden)

    Bhiogade Girish E.

    2017-01-01

    Full Text Available New combustion concepts have recently been developed with the purpose of tackling the problem of the high emission levels of traditional direct injection Diesel engines. A good example is premixed charge compression ignition combustion, a strategy in which early injection is used, causing a burning process in which the fuel burns in the premixed condition. In compression ignition engines, soot (particulate matter) and NOx emissions are a largely unsolved issue. Premixed charge compression ignition is one of the most promising solutions that combine the advantages of both spark ignition and compression ignition combustion modes. It gives a thermal efficiency close to that of compression ignition engines and simultaneously resolves the associated issues of high NOx and particulate matter. Preparing the premixed air-fuel charge is the challenging part of achieving premixed charge compression ignition combustion. In the present experimental study a diesel vaporizer is used to achieve premixed charge compression ignition combustion. Vaporized diesel fuel was mixed with air to form a premixed charge and inducted into the cylinder during the intake stroke. Low diesel volatility remains the main obstacle in preparing a premixed air-fuel mixture. Exhaust gas re-circulation can be used to control the rate of heat release. The objective of this study is to reduce exhaust emission levels while maintaining a thermal efficiency close to that of a compression ignition engine.

  2. New Regenerative Cycle for Vapor Compression Refrigeration

    Energy Technology Data Exchange (ETDEWEB)

    Mark J. Bergander

    2005-08-29

    The main objective of this project is to confirm, on a well-instrumented prototype, the theoretically derived claims of higher efficiency and coefficient of performance for geothermal heat pumps based on a new regenerative thermodynamic cycle, as compared to existing technology. In order to demonstrate the improved performance of the prototype, it will be compared to the published parameters of commercially available geothermal heat pumps manufactured by US and foreign companies. Other objectives are to optimize the design parameters and to determine the economic viability of the new technology. Background (as stated in the proposal): The proposed technology closely relates to the EERE mission by improving energy efficiency, bringing clean, reliable and affordable heating and cooling to residential and commercial buildings and reducing greenhouse gas emissions. It can provide the same amount of heating and cooling with considerably less use of electrical energy and consequently has the potential of reducing our nation's dependence on foreign oil. The theoretical basis for the proposed thermodynamic cycle was previously developed and was originally called a dynamic equilibrium method. This theory considers the dynamic equations of state of the working fluid and proposes methods for modifying the T-S trajectories of adiabatic transformation by changing dynamic properties of the gas, such as flow rate, speed and acceleration. The substance of this proposal is a thermodynamic cycle characterized by the regenerative use of the potential energy of two-phase flow expansion, which in traditional systems is lost in expansion valves. The essential new features of the process are: (1) the application of two-step throttling of the working fluid and two-step compression of its vapor phase; (2) the use of a compressor as the initial compression step and a jet device as the second step, where throttling and compression are combined; (3) a controlled ratio of working fluid at the first and

  3. Effects of compressibility and heating in magnetohydrodynamics simulations of a reversed field pinch

    International Nuclear Information System (INIS)

    Onofri, M.; Malara, F.; Veltri, P.

    2009-01-01

    The reversed field pinch is studied using numerical simulations of the compressible magnetohydrodynamics equations. Contrary to what has been done in previous works, the hypotheses of constant density and vanishing pressure are not used. Two cases are investigated. In the first case the pressure is derived from an adiabatic condition and in the second case the pressure equation includes heating terms due to resistivity and viscosity. The evolution of the reversal parameter and the production of single helicity or multiple helicity states are different in the two cases. The simulations show that the results are affected by compressibility and are very sensitive to hypotheses on heat production.

  4. Pulsed Compression Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Roestenberg, T. [University of Twente, Enschede (Netherlands)

    2012-06-07

    The advantages of the Pulsed Compression Reactor (PCR) over internal combustion engine-type chemical reactors are briefly discussed. Over the last four years a project concerning the fundamentals of the PCR technology has been performed by the University of Twente, Enschede, Netherlands. In order to assess the feasibility of applying the PCR principle to the conversion of methane to syngas, several fundamental questions needed to be answered. Two important questions that relate to the applicability of the PCR to any process are: how large is the heat transfer rate from a rapidly compressed and expanded volume of gas, and how does this heat transfer rate compare to the energy contained in the compressed gas? And: can stable operation with a completely free piston, as intended for the PCR, be achieved?

  5. High-resolution coded-aperture design for compressive X-ray tomography using low resolution detectors

    Science.gov (United States)

    Mojica, Edson; Pertuz, Said; Arguello, Henry

    2017-12-01

    One of the main challenges in Computed Tomography (CT) is obtaining accurate reconstructions of the imaged object while keeping a low radiation dose in the acquisition process. In order to solve this problem, several researchers have proposed the use of compressed sensing for reducing the amount of measurements required to perform CT. This paper tackles the problem of designing high-resolution coded apertures for compressed sensing computed tomography. In contrast to previous approaches, we aim at designing apertures to be used with low-resolution detectors in order to achieve super-resolution. The proposed method iteratively improves random coded apertures using a gradient descent algorithm subject to constraints in the coherence and homogeneity of the compressive sensing matrix induced by the coded aperture. Experiments with different test sets show consistent results for different transmittances, number of shots and super-resolution factors.

  6. Intractable vomiting caused by vertebral artery compressing the medulla: A case report

    Directory of Open Access Journals (Sweden)

    Lauren Gorton

    2015-01-01

    Full Text Available A vertebral artery compressing the medulla and causing intractable vomiting has only been reported once previously. We report the case of a 69-year-old woman with intractable nausea and vomiting causing a 50-pound weight loss, who failed medical management and whose symptoms were completely reversed following microvascular decompression (MVD).

  7. Fast H.264/AVC FRExt intra coding using belief propagation.

    Science.gov (United States)

    Milani, Simone

    2011-01-01

    In the H.264/AVC FRExt coder, the coding performance of Intra coding significantly surpasses that of previous still-image coding standards, such as JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. The paper presents a complexity reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated by adopting a belief-propagation procedure. Experimental results show that the proposed method permits saving up to 60% of the coding time required by an exhaustive rate-distortion optimization method with a negligible loss in performance. Moreover, it permits an accurate control of the computational complexity, unlike other methods where the computational complexity depends upon the coded sequence.

  8. A Huge Capital Drop with Compression of Femoral Vessels Associated with Hip Osteoarthritis

    Directory of Open Access Journals (Sweden)

    Tomoya Takasago

    2015-01-01

    Full Text Available A capital drop is a type of osteophyte at the inferomedial portion of the femoral head commonly observed in hip osteoarthritis (OA) secondary to developmental dysplasia. A capital drop itself is typically asymptomatic; however, symptoms can appear secondary to impingement against the acetabulum or to irritation of the surrounding tissues, such as nerves, vessels, and tendons. We present here a case of unilateral leg edema in a patient with hip OA, caused by a huge bone mass occurring at the inferomedial portion of the femoral head that compressed the femoral vessels. We diagnosed this bone mass as a capital drop secondary to hip OA after confirming that the mass had appeared only after the age of 63 years, based on a previous X-ray. We performed early resection and total hip arthroplasty since the patient's hip pain was due to both advanced hip OA and compression of the femoral vessels; moreover, we aimed to prevent venous thrombosis secondary to vascular compression, considering the patient's advanced age and potential risk of thrombosis. A large capital drop should be considered as a cause of vascular compression in cases of unilateral leg edema in OA patients.

  9. Compressing Data Cube in Parallel OLAP Systems

    Directory of Open Access Journals (Sweden)

    Frank Dehne

    2007-03-01

    Full Text Available This paper proposes an efficient algorithm to compress the cubes during parallel data cube generation. This low-overhead compression mechanism provides block-by-block and record-by-record compression by using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for the Hilbert Space Filling Curve, a mechanism widely used in multi-dimensional indexing.
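
    A small sketch of record-by-record tuple difference coding on sorted integer tuples (an assumption; the paper's parallel, block-level implementation is not reproduced here):

```python
# Illustration of tuple difference (delta) coding: successive sorted records are
# stored as differences from their predecessor, which compress far better than
# the raw tuples when fed to a generic entropy coder.
def delta_encode(records):
    encoded, prev = [], None
    for rec in records:
        if prev is None:
            encoded.append(tuple(rec))                      # first record stored verbatim
        else:
            encoded.append(tuple(a - b for a, b in zip(rec, prev)))
        prev = rec
    return encoded

def delta_decode(encoded):
    records, prev = [], None
    for rec in encoded:
        cur = rec if prev is None else tuple(a + b for a, b in zip(rec, prev))
        records.append(cur)
        prev = cur
    return records

if __name__ == "__main__":
    cube = sorted([(1, 2, 10), (1, 2, 11), (1, 3, 4), (2, 0, 0)])
    packed = delta_encode(cube)
    assert delta_decode(packed) == cube
    print(packed)   # mostly small numbers, ready for entropy coding
```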

  10. Compressive sampling by artificial neural networks for video

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Jenkins, Jeffrey; Reinhardt, Kitt

    2011-06-01

    We describe a smart surveillance strategy for handling novelty changes. Current sensors seem to keep everything, redundant or not. The Human Visual System's Hubel-Wiesel (wavelet) edge detection mechanism pays attention to changes in movement, which naturally produce organized sparseness because a stagnant edge is not reported to the brain's visual cortex by retinal neurons. Sparseness is defined as an ordered set of ones (movement or not) relative to zeros that could be pseudo-orthogonal among themselves and thus suited for fault-tolerant storage and retrieval by means of Associative Memory (AM). The firing is sparse at the change locations. Unlike the purely random sparse masks adopted in medical Compressive Sensing, these organized ones have the additional benefit of using the image changes to make retrievable graphical indexes. We coined this organized sparseness Compressive Sampling: sensing but skipping over redundancy without altering the original image. Thus, we illustrate with video the survival tactics that animals roaming the Earth use daily: they acquire nothing but the space-time changes that are important to satisfy specific prey-predator relationships. We have noticed a similarity between mathematical Compressive Sensing and this biological mechanism used for survival, and we have designed a hardware implementation of the Human Visual System's Compressive Sampling scheme. To speed up further, our mixed-signal circuit design builds frame differencing into the on-chip processing hardware. A CMOS transconductance amplifier is designed here to generate a linear current output using a pair of differential input voltages from two photon detectors for change detection, one for the previous value and the other for the subsequent value ("write" synaptic weight by Hebbian outer products; "read" by inner product & pt. NL threshold), to localize and track the threat targets.
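
    A minimal sketch of the frame-differencing step described above: only pixels that change beyond a threshold are kept, yielding an organized-sparse set of samples (8-bit grayscale frames and the threshold value are assumptions, not parameters from the paper):

```python
# Frame-differencing change mask: report only pixels whose value changes beyond a
# threshold, analogous to the sparse "firing" at change locations described above.
import numpy as np

def change_samples(prev_frame, curr_frame, threshold=15):
    diff = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
    mask = np.abs(diff) > threshold                            # sparse change locations
    rows, cols = np.nonzero(mask)
    return np.stack([rows, cols, curr_frame[mask]], axis=1)    # (row, col, new value)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    prev = rng.integers(0, 255, size=(120, 160), dtype=np.uint8)
    curr = prev.copy()
    curr[40:60, 70:90] = 255                                   # a moving bright object
    samples = change_samples(prev, curr)
    print(f"kept {samples.shape[0]} of {prev.size} pixels "
          f"({samples.shape[0] / prev.size:.1%})")
```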

  11. Simulating coupled dynamics of a rigid-flexible multibody system and compressible fluid

    Science.gov (United States)

    Hu, Wei; Tian, Qiang; Hu, HaiYan

    2018-04-01

    As a continuation of the authors' previous studies, a new parallel computation approach is proposed to simulate the coupled dynamics of a rigid-flexible multibody system and compressible fluid. In this approach, the smoothed particle hydrodynamics (SPH) method is used to model the compressible fluid, and the natural coordinate formulation (NCF) and absolute nodal coordinate formulation (ANCF) are used to model the rigid and flexible bodies, respectively. In order to model the compressible fluid properly and efficiently via the SPH method, three measures are taken as follows. The first is to use the Riemann solver to cope with the fluid compressibility, the second is to define virtual particles of SPH to model the dynamic interaction between the fluid and the multibody system, and the third is to impose the boundary conditions of periodical inflow and outflow to reduce the number of SPH particles involved in the computation process. Afterwards, a parallel computation strategy is proposed based on the graphics processing unit (GPU) to detect the neighboring SPH particles and to solve the dynamic equations of SPH particles in order to improve the computation efficiency. Meanwhile, the generalized-alpha algorithm is used to solve the dynamic equations of the multibody system. Finally, four case studies are given to validate the proposed parallel computation approach.

  12. VLSI ARCHITECTURE FOR IMAGE COMPRESSION THROUGH ADDER MINIMIZATION TECHNIQUE AT DCT STRUCTURE

    Directory of Open Access Journals (Sweden)

    N.R. Divya

    2014-08-01

    Full Text Available Data compression plays a vital role in multimedia devices to present information in a compact form. The DCT structure has been used for image compression because it has low complexity and is area efficient. The 2D DCT also provides reasonable data compression, but its implementation requires more multipliers and adders, which leads to larger area and higher power consumption. Taking all of this into account, this paper deals with a VLSI architecture for image compression using a ROM-free DA-based DCT (Discrete Cosine Transform) structure. This technique provides high throughput and is most suitable for real-time implementation. In order to achieve this, the image matrix is subdivided into odd and even terms and the multiplication functions are removed by a shift-and-add approach. Kogge-Stone adder techniques are proposed for obtaining bit-wise image quality, which determines new trade-off levels as compared to the previous techniques. Overall, the proposed architecture produces reduced memory, low power consumption and high throughput. MATLAB is used as a supporting tool for reading the input pixels and obtaining the output image. Verilog HDL is used for implementing the design, ModelSim for simulation, and Quartus II to synthesize and obtain details about power and area.
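
    A software-only illustration of block-DCT compression in which small coefficients are zeroed; it is not the ROM-free DA-based VLSI architecture of the paper, and the 8x8 block size and keep fraction are assumptions:

```python
# Block-DCT compression sketch using SciPy's DCT-II: transform 8x8 blocks, zero the
# smallest coefficients, and reconstruct, reporting the fraction of zeroed values.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):  return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(block): return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def compress_image(img, block=8, keep=0.25):
    h, w = (s - s % block for s in img.shape)       # crop to a multiple of the block size
    out = np.zeros((h, w))
    zeros = 0
    for r in range(0, h, block):
        for c in range(0, w, block):
            coeffs = dct2(img[r:r + block, c:c + block].astype(float))
            thresh = np.quantile(np.abs(coeffs), 1 - keep)
            coeffs[np.abs(coeffs) < thresh] = 0.0
            zeros += int(np.sum(coeffs == 0))
            out[r:r + block, c:c + block] = idct2(coeffs)
    return out, zeros / (h * w)

if __name__ == "__main__":
    img = np.tile(np.linspace(0, 255, 64), (64, 1))  # synthetic smooth ramp image
    recon, zero_frac = compress_image(img)
    rmse = np.sqrt(np.mean((recon - img[:recon.shape[0], :recon.shape[1]]) ** 2))
    print(f"zeroed {zero_frac:.0%} of DCT coefficients, RMSE = {rmse:.2f}")
```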

  13. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

    Full Text Available Compression of color images is now necessary for transmission and storage in databases, since color gives a pleasing and natural appearance to any object. Three composite-technique-based color image compression schemes are therefore implemented to achieve images with high compression, no loss of the original image, better performance and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W) and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the lowest values of bits per pixel (bpp), time (T) and rate distortion R(D). Also, the values of the compression parameters of the color image are nearly the same as the average values of the compression parameters of the three bands of the same image.

  14. Speech Data Compression using Vector Quantization

    OpenAIRE

    H. B. Kekre; Tanuja K. Sarode

    2008-01-01

    Mostly, transforms are used for speech data compression; these are lossy algorithms. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE and FCG. The results table s...
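
    A generic LBG/k-means vector quantization sketch (the paper's KPE and FCG variants are not reproduced): speech frames are mapped to the nearest codevector so that only codebook indices need to be stored or transmitted:

```python
# LBG / k-means style vector quantization: train a small codebook on speech frames
# and replace each frame by the index of its nearest codevector.
import numpy as np

def train_codebook(frames, codebook_size=16, iterations=20, seed=0):
    rng = np.random.default_rng(seed)
    codebook = frames[rng.choice(len(frames), codebook_size, replace=False)]
    for _ in range(iterations):
        # Assign each frame to its nearest codevector (Euclidean distance).
        d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
        labels = np.argmin(d, axis=1)
        # Update each codevector to the centroid of its assigned frames.
        for k in range(codebook_size):
            members = frames[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frames = rng.normal(size=(2000, 8)).astype(np.float32)  # stand-in for 8-sample speech frames
    codebook, labels = train_codebook(frames)
    # Compression: 8 float32 samples (256 bits) -> one 4-bit codebook index per frame.
    print("codebook shape:", codebook.shape, "example indices:", labels[:10])
```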

  15. Advances in compressible turbulent mixing

    International Nuclear Information System (INIS)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately

  16. Advances in compressible turbulent mixing

    Energy Technology Data Exchange (ETDEWEB)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E. [eds.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  17. Study of CSR longitudinal bunch compression cavity

    International Nuclear Information System (INIS)

    Yin Dayu; Li Peng; Liu Yong; Xie Qingchun

    2009-01-01

    The scheme of the longitudinal bunch compression cavity for the Cooling Storage Ring (CSR) is an important issue. Plasma physics experiments require a high-density heavy ion beam and a short pulsed bunch, which can be produced by non-adiabatic compression of the bunch, implemented by a fast compression with a 90 degree rotation in the longitudinal phase space. The phase space rotation in fast compression is initiated by a fast jump of the RF-voltage amplitude. For this purpose, the CSR longitudinal bunch compression cavity, loaded with FINEMET-FT-1M, is studied and simulated with the MAFIA code. In this paper, the CSR longitudinal bunch compression cavity is simulated and the initial bunch length of 238U72+ ions at 250 MeV/u is compressed from 200 ns to 50 ns. The construction and RF properties of the CSR longitudinal bunch compression cavity are also simulated and calculated with the MAFIA code. The operation frequency of the cavity is 1.15 MHz with a peak voltage of 80 kV, and the cavity can be used to compress heavy ions in the CSR. (authors)

  18. Flux compression generators as plasma compression power sources

    International Nuclear Information System (INIS)

    Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.; Thomson, D.B.; Garn, W.B.

    1979-01-01

    A survey is made of applications where explosive-driven magnetic flux compression generators have been or can be used to directly power devices that produce dense plasmas. Representative examples are discussed that are specific to the theta pinch, the plasma gun, the dense plasma focus and the Z pinch. These examples are used to illustrate the high energy and power capabilities of explosive generators. An application employing a rocket-borne, generator-powered plasma gun emphasizes the size and weight potential of flux compression power supplies. Recent results from a local effort to drive a dense plasma focus are provided. Imploding liners are discussed in the context of both the theta and Z pinches

  19. Fuel octane effects in the partially premixed combustion regime in compression ignition engines

    NARCIS (Netherlands)

    Hildingsson, L.; Kalghatgi, G.T.; Tait, N.; Johansson, B.H.; Harrison, A.

    2009-01-01

    Previous work has shown that it may be advantageous to use fuels of lower cetane numbers compared to today's diesel fuels in compression ignition engines. The benefits come from the longer ignition delays that these fuels have. There is more time available for the fuel and air to mix before

  20. Compression of Probabilistic XML Documents

    Science.gov (United States)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with as representative example the Probabilistic XML model (PXML) of [10,9]. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained with a combination of a PXML-specific technique with a rather simple generic DAG-compression technique.

  1. Anisotropic Concrete Compressive Strength

    DEFF Research Database (Denmark)

    Gustenhoff Hansen, Søren; Jørgensen, Henrik Brøner; Hoang, Linh Cao

    2017-01-01

    When the load carrying capacity of existing concrete structures is (re-)assessed it is often based on compressive strength of cores drilled out from the structure. Existing studies show that the core compressive strength is anisotropic; i.e. it depends on whether the cores are drilled parallel...

  2. Experiments with automata compression

    NARCIS (Netherlands)

    Daciuk, J.; Yu, S; Daley, M; Eramian, M G

    2001-01-01

    Several compression methods of finite-state automata are presented and evaluated. Most compression methods used here are already described in the literature. However, their impact on the size of automata has not been described yet. We fill that gap, presenting results of experiments carried out on

  3. Limiting density ratios in piston-driven compressions

    International Nuclear Information System (INIS)

    Lee, S.

    1985-07-01

    By applying global energy and pressure balance to a shock model, it is shown that for a piston-driven fast compression the maximum compression ratio does not depend on the absolute magnitude of the piston power, but rather on the shape of the power pulse. Specific cases are considered, and a maximum density compression ratio of 27 is obtained for square-pulse power compressing a spherical pellet with a specific heat ratio of 5/3. Double pulsing enhances the density compression ratio to 1750 in the case of linearly rising compression pulses. With this method, further enhancement by multiple pulsing becomes obvious. (author)

  4. Toward compression of small cell population: harnessing stress in passive regions of dielectric elastomer actuators

    Science.gov (United States)

    Poulin, Alexandre; Rosset, Samuel; Shea, Herbert

    2014-03-01

    We present a dielectric elastomer actuator (DEA) for in vitro analysis of mm²-scale biological samples under periodic compressive stress. Understanding how mechanical stimuli affect cell functions could lead to significant advances in disease diagnosis and drug development. We previously reported an array of 72 micro-DEAs on a chip to apply a periodic stretch to cells. To diversify our cell mechanotransduction toolkit we have developed an actuator for periodic compression of small cell populations. The device is based on a novel design which exploits the effects of non-equibiaxial pre-stretch and takes advantage of the stress induced in passive regions of DEAs. The device consists of two active regions separated by a 2 mm x 2 mm passive area. When connected to an AC high-voltage source, the two active regions periodically compress the passive region. Due to the non-equibiaxial pre-stretch, this induces a uniaxial compressive strain greater than 10%. Cells adsorbed on top of this passive gap would experience the same uniaxial compressive strain. The electrode configuration confines the electric field and prevents it from reaching the biological sample. A thin layer of silicone is cast on top of the device to ensure a biocompatible environment. This design provides several advantages over alternative technologies, such as high optical transparency of the area of interest (the passive region under compression) and its potential for miniaturization and parallelization.

  5. Compressibility, turbulence and high speed flow

    CERN Document Server

    Gatski, Thomas B

    2013-01-01

    Compressibility, Turbulence and High Speed Flow introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range, through a unique complementary treatment of both the theoretical foundations and the measurement and analysis tools currently used. The book provides the reader with the necessary background and current trends in the theoretical and experimental aspects of compressible turbulent flows and compressible turbulence. Detailed derivations of the pertinent equations describing the motion of such turbulent flows are provided, and an extensive discussion of the various approaches used in predicting both free shear and wall bounded flows is presented. Experimental measurement techniques common to the compressible flow regime are introduced with particular emphasis on the unique challenges presented by high speed flows. Both experimental and numerical simulation work is supplied throughout to provide the reader with an overall perspective of current tre...

  6. Compressed normalized block difference for object tracking

    Science.gov (United States)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust, real-time tracking. Compressive sensing provides technical support for real-time feature extraction. However, all existing compressive trackers are based on compressed Haar-like features, and how to compress other, richer high-dimensional features is worth investigating. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature replaces the two pixels in the original NPD formula with two blocks. A CNBD feature is then obtained by compressing the normalized block difference feature according to compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on compressed Haar-like features, in terms of AUC, SR and precision.
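
    As a rough illustration of the compression step described above, the following sketch (not from the paper; dimensions, sparsity and data are made up) projects a high-dimensional block-difference-style feature onto a few measurements with a sparse random Gaussian matrix, in the spirit of compressive-sensing-based feature compression:

```python
import numpy as np

def sparse_gaussian_matrix(m, n, density=0.1, seed=0):
    """Sparse random Gaussian measurement matrix: most entries are zero,
    the rest are drawn from a standard normal distribution."""
    rng = np.random.default_rng(seed)
    mask = rng.random((m, n)) < density
    return np.where(mask, rng.standard_normal((m, n)), 0.0)

def compress_feature(x, phi):
    """Project a high-dimensional feature vector onto a low-dimensional space."""
    return phi @ x

# Toy usage: a 10,000-D feature (standing in for a normalized block difference
# feature) compressed to 50 measurements.
n, m = 10_000, 50
phi = sparse_gaussian_matrix(m, n)
x = np.random.default_rng(1).standard_normal(n)
y = compress_feature(x, phi)
print(y.shape)  # (50,)
```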

  7. [A brief history of resuscitation - the influence of previous experience on modern techniques and methods].

    Science.gov (United States)

    Kucmin, Tomasz; Płowaś-Goral, Małgorzata; Nogalski, Adam

    2015-02-01

    Cardiopulmonary resuscitation (CPR) is a relatively novel branch of medical science; however, the first descriptions of mouth-to-mouth ventilation are found in the Bible, and the literature is full of descriptions of different resuscitation methods - from flagellation and ventilation with bellows, through hanging victims upside down and compressing the chest to stimulate ventilation, to rectal fumigation with tobacco smoke. The modern history of CPR starts with Kouwenhoven et al., who in 1960 published a paper on heart massage through chest compressions. Shortly after that, in 1961, Peter Safar presented a paradigm promoting opening the airway, performing rescue breaths and giving chest compressions. The first CPR guidelines were published in 1966. Since then the guidelines have been repeatedly modified and improved by the two leading world expert organizations, the ERC (European Resuscitation Council) and the AHA (American Heart Association), and published in a new version every 5 years; at the time of writing, the 2010 guidelines apply. In this paper the authors attempt to present the history of the development of resuscitation techniques and methods and to assess the influence of previous lifesaving methods on today's technologies, equipment and guidelines, which make it possible to help those whose lives are in danger due to sudden cardiac arrest. © 2015 MEDPRESS.

  8. 30 CFR 77.412 - Compressed air systems.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not be...

  9. Two divergent paths: compression vs. non-compression in deep venous thrombosis and post thrombotic syndrome

    Directory of Open Access Journals (Sweden)

    Eduardo Simões Da Matta

    Full Text Available Abstract Use of compression therapy to reduce the incidence of post-thrombotic syndrome among patients with deep venous thrombosis is a controversial subject, and there is no consensus on the use of elastic versus inelastic compression, or on the levels and duration of compression. Inelastic devices with a higher static stiffness index combine a relatively small and comfortable pressure at rest with a standing pressure strong enough to restore the “valve mechanism” generated by plantar flexion and dorsiflexion of the foot. Since the static stiffness index depends on the rigidity of the compression system and the muscle strength within the bandaged area, improvement of muscle mass with muscle-strengthening programs and endurance training should be encouraged. Therefore, in the acute phase of deep venous thrombosis, anticoagulation combined with inelastic compression therapy can reduce the extension of the thrombus. Nevertheless, prospective studies evaluating the effectiveness of inelastic therapy in deep venous thrombosis and post-thrombotic syndrome are needed.

  10. Superfluid compressibility and the inertial mass of a moving singularity

    International Nuclear Information System (INIS)

    Duan, J.

    1993-01-01

    The concept of finite compressibility of a Fermi superfluid is used to reconsider the problem of inertial mass of vortex lines in both neutral and charged superfluids at T=0. For the charged case, in contrast to previous works where perfect screening was assumed, we take proper account of electromagnetic screening and solve the bulk charge distribution caused by a moving vortex line. A similar problem for a superconducting thin film is also considered

  11. Theoretical models for describing longitudinal bunch compression in the neutralized drift compression experiment

    Directory of Open Access Journals (Sweden)

    Adam B. Sefkow

    2006-09-01

    Full Text Available Heavy ion drivers for warm dense matter and heavy ion fusion applications use intense charge bunches which must undergo transverse and longitudinal compression in order to meet the requisite high current densities and short pulse durations desired at the target. The neutralized drift compression experiment (NDCX at the Lawrence Berkeley National Laboratory is used to study the longitudinal neutralized drift compression of a space-charge-dominated ion beam, which occurs due to an imposed longitudinal velocity tilt and subsequent neutralization of the beam’s space charge by background plasma. Reduced theoretical models have been used in order to describe the realistic propagation of an intense charge bunch through the NDCX device. A warm-fluid model is presented as a tractable computational tool for investigating the nonideal effects associated with the experimental acceleration gap geometry and voltage waveform of the induction module, which acts as a means to pulse shape both the velocity and line density profiles. Self-similar drift compression solutions can be realized in order to transversely focus the entire charge bunch to the same focal plane in upcoming simultaneous transverse and longitudinal focusing experiments. A kinetic formalism based on the Vlasov equation has been employed in order to show that the peaks in the experimental current profiles are a result of the fact that only the central portion of the beam contributes effectively to the main compressed pulse. Significant portions of the charge bunch reside in the nonlinearly compressing part of the ion beam because of deviations between the experimental and ideal velocity tilts. Those regions form a pedestal of current around the central peak, thereby decreasing the amount of achievable longitudinal compression and increasing the pulse durations achieved at the focal plane. A hybrid fluid-Vlasov model which retains the advantages of both the fluid and kinetic approaches has been

  12. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  13. Evaluation of the Behavior of Technova Corporation Rod-Stiffened Stitched Compression Specimens

    Science.gov (United States)

    Jegley, Dawn C.

    2013-01-01

    Under Space Act Agreement 1347 between NASA and Technova Corporation, Technova designed and fabricated two carbon-epoxy crippling specimens and NASA loaded them to failure in axial compression. Each specimen contained a pultruded rod stiffener which was held to the specimen skin with through-the-thickness stitches. One of these specimens was designed to be nominally the same as pultruded rod stitched specimens fabricated by Boeing under previous programs. In the other specimen, the rod was prestressed in a Technova manufacturing process to increase its ability to carry compressive loading. Experimental results demonstrated that the specimen without prestressing carried approximately the same load as the similar Boeing specimens and that the specimen with prestressing carried significantly more load than the specimen without prestressing.

  14. Time-resolved shock compression of porous rutile: Wave dispersion in porous solids

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, M.U.; Graham, R.A.; Holman, G.T.

    1993-08-01

    Rutile (TiO₂) samples at 60% of solid density have been shock-loaded from 0.21 to 6.1 GPa with sample thickness of 4 mm and studied with the PVDF piezoelectric polymer stress-rate gauge. The technique uses a copper capsule to contain the sample which has PVDF gauge packages in direct contact with front and rear surfaces. A precise measure is made of the compressive stress wave velocity through the sample, as well as the input and propagated shock stress. Initial density is known from sample preparation, and the amount of shock-compression is calculated from the measurement of shock velocity and input stress. Shock states and re-shock states are measured. Observed data are consistent with previously published high pressure data. It is observed that rutile has a "crush strength" near 6 GPa. Propagated stress-pulse rise times vary from 234 to 916 nsec. Propagated stress-pulse rise times of shock-compressed HMX, 2Al + Fe₂O₃, 3Ni + Al, and 5Ti + 3Si are presented.

  15. Rate-independent dissipation and loading direction effects in compressed carbon nanotube arrays

    International Nuclear Information System (INIS)

    Raney, J R; Fraternali, F; Daraio, C

    2013-01-01

    Arrays of nominally-aligned carbon nanotubes (CNTs) under compression deform locally via buckling, exhibit a foam-like, dissipative response, and can often recover most of their original height. We synthesize millimeter-scale CNT arrays and report the results of compression experiments at different strain rates, from 10⁻⁴ to 10⁻¹ s⁻¹, and for multiple compressive cycles to different strains. We observe that the stress–strain response proceeds independently of the strain rate for all tests, but that it is highly dependent on loading history. Additionally, we examine the effect of loading direction on the mechanical response of the system. The mechanical behavior is modeled using a multiscale series of bistable springs. This model captures the rate independence of the constitutive response, the local deformation, and the history-dependent effects. We develop here a macroscopic formulation of the model to represent a continuum limit of the mesoscale elements developed previously. Utilizing the model and our experimental observations we discuss various possible physical mechanisms contributing to the system’s dissipative response. (paper)

  16. The compressed breast during mammography and breast tomosynthesis: in vivo shape characterization and modeling

    Science.gov (United States)

    Rodríguez-Ruiz, Alejandro; Agasthya, Greeshma A.; Sechopoulos, Ioannis

    2017-09-01

    To characterize and develop a patient-based 3D model of the compressed breast undergoing mammography and breast tomosynthesis. During this IRB-approved, HIPAA-compliant study, 50 women were recruited to undergo 3D breast surface imaging with structured light (SL) during breast compression, along with simultaneous acquisition of a tomosynthesis image. A pair of SL systems were used to acquire 3D surface images by projecting 24 different patterns onto the compressed breast and capturing their reflection off the breast surface in approximately 12-16 s. The 3D surface was characterized and modeled via principal component analysis. The resulting surface model was combined with a previously developed 2D model of projected compressed breast shapes to generate a full 3D model. Data from ten patients were discarded due to technical problems during image acquisition. The maximum breast thickness (found at the chest-wall) had an average value of 56 mm, and decreased 13% towards the nipple (breast tilt angle of 5.2°). The portion of the breast not in contact with the compression paddle or the support table extended on average 17 mm, 18% of the chest-wall to nipple distance. The outermost point along the breast surface lies below the midline of the total thickness. A complete 3D model of compressed breast shapes was created and implemented as a software application available for download, capable of generating new random realistic 3D shapes of breasts undergoing compression. Accurate characterization and modeling of the breast curvature and shape was achieved and will be used for various image processing and clinical tasks.
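
    For readers unfamiliar with the modeling step, the sketch below shows a generic principal component analysis of flattened surface-point vectors of the kind described above; it is not the authors' code, and the array sizes and random data are placeholders:

```python
import numpy as np

def fit_shape_model(shapes, n_components=5):
    """PCA of a set of surfaces. `shapes` is (n_subjects, n_points * 3):
    each row is a surface sampled at corresponding 3D points and flattened.
    Returns the mean shape, the leading modes and their variances."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = (s ** 2) / (len(shapes) - 1)
    return mean, vt[:n_components], variances[:n_components]

def sample_shape(mean, modes, variances, rng=None):
    """Generate a new random, plausible shape from the statistical model."""
    rng = rng or np.random.default_rng(0)
    coeffs = rng.standard_normal(len(variances)) * np.sqrt(variances)
    return mean + coeffs @ modes

# Placeholder data standing in for 40 digitized compressed-breast surfaces
# of 500 points each.
shapes = np.random.default_rng(1).standard_normal((40, 500 * 3))
mean, modes, variances = fit_shape_model(shapes)
new_shape = sample_shape(mean, modes, variances)
print(new_shape.shape)  # (1500,)
```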

  17. Influence of chest compression artefact on capnogram-based ventilation detection during out-of-hospital cardiopulmonary resuscitation.

    Science.gov (United States)

    Leturiondo, Mikel; Ruiz de Gauna, Sofía; Ruiz, Jesus M; Julio Gutiérrez, J; Leturiondo, Luis A; González-Otero, Digna M; Russell, James K; Zive, Dana; Daya, Mohamud

    2018-03-01

    Capnography has been proposed as a method for monitoring the ventilation rate during cardiopulmonary resuscitation (CPR). A high incidence (above 70%) of capnograms distorted by chest compression induced oscillations has been previously reported in out-of-hospital (OOH) CPR. The aim of the study was to better characterize the chest compression artefact and to evaluate its influence on the performance of a capnogram-based ventilation detector during OOH CPR. Data from the MRx monitor-defibrillator were extracted from OOH cardiac arrest episodes. For each episode, presence of chest compression artefact was annotated in the capnogram. Concurrent compression depth and transthoracic impedance signals were used to identify chest compressions and to annotate ventilations, respectively. We designed a capnogram-based ventilation detection algorithm and tested its performance with clean and distorted episodes. Data were collected from 232 episodes comprising 52 654 ventilations, with a mean (±SD) of 227 (±118) per episode. Overall, 42% of the capnograms were distorted. Presence of chest compression artefact degraded algorithm performance in terms of ventilation detection, estimation of ventilation rate, and the ability to detect hyperventilation. Capnogram-based ventilation detection during CPR using our algorithm was compromised by the presence of chest compression artefact. In particular, artefact spanning from the plateau to the baseline strongly degraded ventilation detection, and caused a high number of false hyperventilation alarms. Further research is needed to reduce the impact of chest compression artefact on capnographic ventilation monitoring. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Medullary compression syndrome

    International Nuclear Information System (INIS)

    Barriga T, L.; Echegaray, A.; Zaharia, M.; Pinillos A, L.; Moscol, A.; Barriga T, O.; Heredia Z, A.

    1994-01-01

    The authors performed a retrospective study of 105 patients treated in the Radiotherapy Department of the National Institute of Neoplastic Diseases from 1973 to 1992. The objective of this evaluation was to determine the influence of radiotherapy in patients with medullary compression syndrome with respect to pain palliation and improvement of functional impairment. The treatment records of patients with medullary compression were reviewed: 32 out of 39 patients (82%) who came to hospital by their own means continued walking after treatment; 8 out of 66 patients (12%) who came in a wheelchair or were bedridden could move on their own after treatment; and 41 patients (64%) had partial alleviation of pain after treatment. Functional improvement was also observed in those who came by their own means and whose status did not change. It is concluded that radiotherapy offers palliative benefit in patients with medullary compression syndrome. (authors). 20 refs., 5 figs., 6 tabs

  19. Comparison of changes in tidal volume associated with expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation.

    Science.gov (United States)

    Morino, Akira; Shida, Masahiro; Tanaka, Masashi; Sato, Kimihiro; Seko, Toshiaki; Ito, Shunsuke; Ogawa, Shunichi; Takahashi, Naoaki

    2015-07-01

    [Purpose] This study was designed to compare and clarify the relationship between expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation, with a focus on tidal volume. [Subjects and Methods] The subjects were 18 patients on prolonged mechanical ventilation, who had undergone tracheostomy. Each patient received expiratory rib cage compression and expiratory abdominal compression; the order of implementation was randomized. Subjects were positioned in a 30° lateral recumbent position, and a 2-kgf compression was applied. For expiratory rib cage compression, the rib cage was compressed unilaterally; for expiratory abdominal compression, the area directly above the navel was compressed. Tidal volume values were the actual measured values divided by body weight. [Results] Tidal volume values were as follows: at rest, 7.2 ± 1.7 mL/kg; during expiratory rib cage compression, 8.3 ± 2.1 mL/kg; during expiratory abdominal compression, 9.1 ± 2.2 mL/kg. There was a significant difference between the tidal volume during expiratory abdominal compression and that at rest. The tidal volume in expiratory rib cage compression was strongly correlated with that in expiratory abdominal compression. [Conclusion] These results indicate that expiratory abdominal compression may be an effective alternative to the manual breathing assist procedure.

  20. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    Science.gov (United States)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems carry security risks and suffer from data expansion when nonlinear transformations are applied directly. To overcome these weaknesses and reduce the transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle-shift operation controlled by a hyper-chaotic system. The cycle-shift operation changes the pixel values efficiently. As a nonlinear encryption system, the proposed cryptosystem both decreases the volume of data to be transmitted and simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
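
    A minimal sketch of the two stages described above is given below. It is not the published algorithm: a simple logistic map stands in for the hyper-chaotic system, the image and measurement sizes are arbitrary, and Gaussian measurement matrices are only one possible choice:

```python
import numpy as np

def logistic_sequence(x0, n, r=3.99):
    """Chaotic key stream; a logistic map stands in for the hyper-chaotic system."""
    seq, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def compress_encrypt(img, m1, m2, x0=0.3141, seed=42):
    """2D compressive-sensing measurement followed by chaotic cycle-shift re-encryption."""
    n1, n2 = img.shape
    rng = np.random.default_rng(seed)                  # measurement matrices act as part of the key
    phi1 = rng.standard_normal((m1, n1)) / np.sqrt(m1)
    phi2 = rng.standard_normal((m2, n2)) / np.sqrt(m2)
    measured = phi1 @ img @ phi2.T                     # compression + first encryption stage
    shifts = (logistic_sequence(x0, m1) * m2).astype(int)
    cipher = np.stack([np.roll(row, s) for row, s in zip(measured, shifts)])
    return cipher

# Toy usage: a 64x64 "image" measured down to a 32x32 cipher image.
img = np.random.default_rng(0).random((64, 64))
print(compress_encrypt(img, 32, 32).shape)  # (32, 32)
```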

  1. High-Performance Motion Estimation for Image Sensors with Video Compression

    Directory of Open Access Journals (Sweden)

    Weizhi Xu

    2015-08-01

    Full Text Available It is important to reduce the time cost of video compression for image sensors in video sensor networks. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse within a reference frame to improve time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame, to avoid off-chip memory access. On-chip buffers with smart data-access schedules are designed to implement the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed, giving different trade-offs between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Compared with the traditional intra-frame data reuse scheme, the new inter-frame scheme reduces memory traffic by 50% for VC-ME.
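
    The record above describes a hardware-level data-reuse schedule rather than a new matching algorithm; for context, the sketch below (assumed block and search sizes, plain NumPy) shows the baseline full-search block-matching ME whose repeated reference-frame reads such reuse schemes try to keep on-chip:

```python
import numpy as np

def full_search_me(cur, ref, block=16, search=8):
    """Full-search block matching with a SAD criterion. Every candidate
    position re-reads reference pixels; on-chip data-reuse schemes aim to
    avoid re-fetching these pixels from off-chip memory."""
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur_blk = cur[by:by + block, bx:bx + block]
            best_sad, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    sad = np.abs(cur_blk - ref[y:y + block, x:x + block]).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors

# Toy usage on two random 64x64 frames.
rng = np.random.default_rng(0)
mvs = full_search_me(rng.random((64, 64)), rng.random((64, 64)))
print(len(mvs))  # 16 blocks
```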

  2. MP3 compression of Doppler ultrasound signals.

    Science.gov (United States)

    Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W

    2003-01-01

    The effect of lossy MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: (1) phase quadrature and (2) stereo audio directional output. A total of 11 ten-second acquisitions of Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in parentheses): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power and the ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression of digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology

  3. A Novel Approach to Discovery and Image Access in the Petabyte Age

    Science.gov (United States)

    Mueller, D.; Dimitoglou, G.; Alexanderian, A.; Garcia Ortiz, J.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2009-12-01

    Space missions generate an ever-growing amount of data, as impressively highlighted by the Solar Dynamics Observatory's expected data rate of 1.4 Terabyte per day. In order to fully exploit their data, scientists need to be able to browse and visualize many different data products spanning a large range of physical length and time scales. So far, the tools available to the scientific community either require downloading all potentially relevant data sets beforehand in their entirety or provide only movies with a fixed resolution and cadence. To facilitate browsing and analysis of complex time-dependent data sets from multiple sources, we are developing JHelioviewer, a JPEG 2000-based visualization and discovery infrastructure for image data. Currently focused on solar physics data, it can easily be adapted for use in other areas of space and earth sciences. Together with its web-based counterpart helioviewer.org, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and allows users to search related event data bases. The user interface for JHelioviewer is a multi-platform Java client that can both communicate with a remote server via the JPEG 2000 interactive protocol JPIP and open local data. The random code stream access of JPIP minimizes data transfer and can encapsulate meta data as well as multiple image channels in one data stream. This presentation will illustrate some of the features of JHelioviewer and the advantages of JPEG 2000 as a new data compression standard.

  4. Plasma heating by adiabatic compression

    International Nuclear Information System (INIS)

    Ellis, R.A. Jr.

    1972-01-01

    These two lectures will cover the following three topics: (i) The application of adiabatic compression to toroidal devices is reviewed. The special case of adiabatic compression in tokamaks is considered in more detail, including a discussion of the equilibrium, scaling laws, and heating effects. (ii) The ATC (Adiabatic Toroidal Compressor) device, which was completed in May 1972, is described in detail. Compression of a tokamak plasma across a static toroidal field is studied in this device. The device is designed to produce a pre-compression plasma with a minor radius of 17 cm, a toroidal field of 20 kG, and a current of 90 kA. The compression leads to a plasma with a major radius of 38 cm and a minor radius of 10 cm. Scaling laws imply a density increase by a factor of 6, a temperature increase by a factor of 3, and a current increase by a factor of 2.4. An additional feature of ATC is that it is a large tokamak which operates without a copper shell. (iii) Data showing that the expected MHD behavior is largely observed are presented and discussed. (U.S.)

  5. Concurrent data compression and protection

    International Nuclear Information System (INIS)

    Saeed, M.

    2009-01-01

    Data compression techniques involve transforming data of a given format, called the source message, into data of a smaller-sized format, called the codeword. The primary objective of data encryption is to ensure the security of data if it is intercepted by an eavesdropper; it transforms data of a given format, called plaintext, into another format, called ciphertext, using an encryption key or keys. Combining the two processes must therefore be done in this order, compression followed by encryption, because all compression techniques rely heavily on the redundancies that are inherently part of regular text or speech. The aim of this research is to combine compression (using an existing scheme) with a new encryption scheme that is compatible with the encoding scheme embedded in the encoder. The technique proposed by the authors is new, unique and highly secure. The deployment of a 'sentinel marker' enhances the security of the proposed TR-One algorithm from 2^44 ciphertexts to 2^44 + 2^20 ciphertexts, thus imposing extra challenges on intruders. (author)
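
    The ordering argument above (compress first, then encrypt) can be illustrated with a toy sketch; the TR-One scheme itself is not reproduced here, zlib stands in for the existing compression scheme, and the XOR keystream below is purely illustrative, not a secure cipher:

```python
import hashlib
import zlib

def _keystream(key: bytes, length: int) -> bytes:
    """Toy keystream built by repeated hashing; for illustration only."""
    out, counter = bytearray(), 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def compress_then_encrypt(source: bytes, key: bytes) -> bytes:
    codeword = zlib.compress(source)                   # redundancy removed first...
    ks = _keystream(key, len(codeword))
    return bytes(a ^ b for a, b in zip(codeword, ks))  # ...then the codeword is encrypted

def decrypt_then_decompress(cipher: bytes, key: bytes) -> bytes:
    ks = _keystream(key, len(cipher))
    return zlib.decompress(bytes(a ^ b for a, b in zip(cipher, ks)))

msg = b"compression relies on redundancy in regular text " * 20
ct = compress_then_encrypt(msg, b"secret key")
assert decrypt_then_decompress(ct, b"secret key") == msg
print(len(msg), len(ct))  # the ciphertext is much smaller than the source message
```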

  6. Compressible Fluid Suspension Performance Testing

    National Research Council Canada - National Science Library

    Hoogterp, Francis

    2003-01-01

    ... compressible fluid suspension system that was designed and installed on the vehicle by DTI. The purpose of the tests was to evaluate the possible performance benefits of the compressible fluid suspension system...

  7. Systolic Compression of Epicardial Coronary and Intramural Arteries

    Science.gov (United States)

    Mohiddin, Saidi A.; Fananapazir, Lameh

    2002-01-01

    It has been suggested that systolic compression of epicardial coronary arteries is an important cause of myocardial ischemia and sudden death in children with hypertrophic cardiomyopathy. We examined the associations between sudden death, systolic coronary compression of intra- and epicardial arteries, myocardial perfusion abnormalities, and severity of hypertrophy in children with hypertrophic cardiomyopathy. We reviewed the angiograms from 57 children with hypertrophic cardiomyopathy for the presence of coronary and septal artery compression; coronary compression was present in 23 (40%). The left anterior descending artery was most often affected, and multiple sites were found in 4 children. Myocardial perfusion abnormalities were more frequently present in children with coronary compression than in those without (94% vs 47%, P = 0.002). Coronary compression was also associated with more severe septal hypertrophy and greater left ventricular outflow gradient. Septal branch compression was present in 65% of the children and was significantly associated with coronary compression, severity of septal hypertrophy, and outflow obstruction. Multivariate analysis showed that septal thickness and septal branch compression, but not coronary compression, were independent predictors of perfusion abnormalities. Coronary compression was not associated with symptom severity, ventricular tachycardia, or a worse prognosis. We conclude that compression of coronary arteries and their septal branches is common in children with hypertrophic cardiomyopathy and is related to the magnitude of left ventricular hypertrophy. Our findings suggest that coronary compression does not make an important contribution to myocardial ischemia in hypertrophic cardiomyopathy; however, left ventricular hypertrophy and compression of intramural arteries may contribute significantly. (Tex Heart Inst J 2002;29:290–8) PMID:12484613

  8. Adiabatic Compressed Air Energy Storage with packed bed thermal energy storage

    International Nuclear Information System (INIS)

    Barbour, Edward; Mignard, Dimitri; Ding, Yulong; Li, Yongliang

    2015-01-01

    Highlights: • The paper presents a thermodynamic analysis of A-CAES using packed bed regenerators. • The packed beds are used to store the compression heat. • A numerical model is developed, validated and used to simulate system operation. • The simulated efficiencies are between 70.5% and 71.1% for continuous operation. • Heat build-up in the beds reduces continuous cycle efficiency slightly. - Abstract: The majority of articles on Adiabatic Compressed Air Energy Storage (A-CAES) so far have focussed on the use of indirect-contact heat exchangers and a thermal fluid in which to store the compression heat. While packed beds have been suggested, a detailed analysis of A-CAES with packed beds is lacking in the available literature. This paper presents such an analysis. We develop a numerical model of an A-CAES system with packed beds and validate it against analytical solutions. Our results suggest that an efficiency in excess of 70% should be achievable, which is higher than many of the previous estimates for A-CAES systems using indirect-contact heat exchangers. We carry out an exergy analysis for a single charge–storage–discharge cycle to see where the main losses are likely to transpire and we find that the main losses occur in the compressors and expanders (accounting for nearly 20% of the work input) rather than in the packed beds. The system is then simulated for continuous cycling and it is found that the build-up of leftover heat from previous cycles in the packed beds results in higher steady state temperature profiles of the packed beds. This leads to a small reduction (<0.5%) in efficiency for continuous operation

  9. Insertion profiles of 4 headless compression screws.

    Science.gov (United States)

    Hart, Adam; Harvey, Edward J; Lefebvre, Louis-Philippe; Barthelat, Francois; Rabiei, Reza; Martineau, Paul A

    2013-09-01

    In practice, the surgeon must rely on screw position (insertion depth) and tactile feedback from the screwdriver (insertion torque) to gauge compression. In this study, we identified the relationship between interfragmentary compression and these 2 factors. The Acutrak Standard, Acutrak Mini, Synthes 3.0, and Herbert-Whipple implants were tested using a polyurethane foam scaphoid model. A specialized testing jig simultaneously measured compression force, insertion torque, and insertion depth at half-screw-turn intervals until failure occurred. The peak compression occurs at an insertion depth of -3.1 mm, -2.8 mm, 0.9 mm, and 1.5 mm for the Acutrak Mini, Acutrak Standard, Herbert-Whipple, and Synthes screws, respectively (insertion depth is positive when the screw is proud above the bone and negative when buried). The compression and insertion torque at a depth of -2 mm were found to be 113 ± 18 N and 0.348 ± 0.052 Nm for the Acutrak Standard, 104 ± 15 N and 0.175 ± 0.008 Nm for the Acutrak Mini, 78 ± 9 N and 0.245 ± 0.006 Nm for the Herbert-Whipple, and 67 ± 2 N and 0.233 ± 0.010 Nm for the Synthes headless compression screws. All 4 screws generated a sizable amount of compression (> 60 N) over a wide range of insertion depths. The compression at the commonly recommended insertion depth of -2 mm was not significantly different between screws; thus, implant selection should not be based on compression profile alone. Conically shaped screws (Acutrak) generated their peak compression when they were fully buried in the foam, whereas the shanked screws (Synthes and Herbert-Whipple) reached peak compression before they were fully inserted. Because insertion torque correlated poorly with compression, surgeons should avoid using tactile judgment of torque as a proxy for compression. Knowledge of the insertion profile may improve our understanding of the implants, provide a better basis for comparing screws, and enable the surgeon to optimize compression. Copyright

  10. Energy Conservation In Compressed Air Systems

    International Nuclear Information System (INIS)

    Yusuf, I.Y.; Dewu, B.B.M.

    2004-01-01

    Compressed air is an essential utility that accounts for a substantial part of the electricity consumption (bill) in most industrial plants. Although the general saying that 'air is free of charge' is not true for compressed air, most industries do not accord the utility's cost its rightful importance. The paper shows that the cost of one unit of energy in the form of compressed air is at least 5 times the cost of the electricity (energy input) required to produce it. The paper also provides energy conservation tips for compressed air systems.

  11. Compressed Data Structures for Range Searching

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Vind, Søren Juhl

    2015-01-01

    matrices and web graphs. Our contribution is twofold. First, we show how to compress geometric repetitions that may appear in standard range searching data structures (such as K-D trees, Quad trees, Range trees, R-trees, Priority R-trees, and K-D-B trees), and how to implement subsequent range queries...... on the compressed representation with only a constant factor overhead. Secondly, we present a compression scheme that efficiently identifies geometric repetitions in point sets, and produces a hierarchical clustering of the point sets, which combined with the first result leads to a compressed representation...

  12. Compression therapy after ankle fracture surgery

    DEFF Research Database (Denmark)

    Winge, R; Bayer, L; Gottlieb, H

    2017-01-01

    PURPOSE: The main purpose of this systematic review was to investigate the effect of compression treatment on the perioperative course of ankle fractures and describe its effect on edema, pain, ankle joint mobility, wound healing complications, length of stay (LOS) and time to surgery (TTS). The aim...... undergoing surgery, testing either intermittent pneumatic compression, compression bandage and/or compression stocking and reporting its effect on edema, pain, ankle joint mobility, wound healing complications, LOS and TTS. To draw conclusions from the data, a narrative synthesis was performed. RESULTS: The review included...

  13. Effect of Kollidon VA®64 particle size and morphology as directly compressible excipient on tablet compression properties.

    Science.gov (United States)

    Chaudhary, R S; Patel, C; Sevak, V; Chan, M

    2018-01-01

    The study evaluates the use of Kollidon VA®64, and of a combination of Kollidon VA®64 with Kollidon VA®64 Fine, as excipient in a direct compression tableting process. The combination of the two grades of material is evaluated for capping, lamination and excessive friability. Inter-particulate void space is higher for such an excipient due to the hollow structure of the Kollidon VA®64 particles; during tablet compression, air remains trapped in the blend, resulting in poor compression and compromised physical properties of the tablets. The composition of Kollidon VA®64 and Kollidon VA®64 Fine is evaluated by design of experiment (DoE). Scanning electron microscopy (SEM) of the two grades of Kollidon VA®64 shows morphological differences between the coarse and fine grades. The tablet compression process is evaluated with a mix consisting entirely of Kollidon VA®64 and with two mixes containing Kollidon VA®64 and Kollidon VA®64 Fine in ratios of 77:23 and 65:35. Statistical modeling of the results from the DoE trials identified the optimum composition for direct tablet compression as a 77:23 combination of Kollidon VA®64 and Kollidon VA®64 Fine. Compressed with the parameters predicted by the statistical model (a main compression force between 5 and 15 kN, a pre-compression force between 2 and 3 kN, a feeder speed fixed at 25 rpm and a compression range of 45-49 rpm), this combination produced tablets with hardness between 19 and 21 kp and no friability, capping or lamination issues.

  14. Isentropic Compression of Argon

    International Nuclear Information System (INIS)

    Oona, H.; Solem, J.C.; Veeser, L.R.; Ekdahl, C.A.; Rodriquez, P.J.; Younger, S.M.; Lewis, W.; Turley, W.D.

    1997-01-01

    We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal.

  15. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: I. general description

    Energy Technology Data Exchange (ETDEWEB)

    Kaganovich, Igor D.; Massidda, Scott; Startsev, Edward A.; Davidson, Ronald C.; Vay, Jean-Luc; Friedman, Alex

    2012-06-21

    Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of slowly varying and rapidly varying errors compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both thermal spread and the velocity errors. The effects of the

  16. Confounding compression: the effects of posture, sizing and garment type on measured interface pressure in sports compression clothing.

    Science.gov (United States)

    Brophy-Williams, Ned; Driller, Matthew William; Shing, Cecilia Mary; Fell, James William; Halson, Shona Leigh; Halson, Shona Louise

    2015-01-01

    The purpose of this investigation was to measure the interface pressure exerted by lower-body sports compression garments, in order to assess the effect of garment type, size and posture in athletes. Twelve national-level boxers were fitted with sports compression garments (tights and leggings), each in three different sizes (undersized, recommended size and oversized). Interface pressure was assessed across six landmarks on the lower limb (ranging from the medial malleolus to the upper thigh) as athletes assumed sitting, standing and supine postures. Sports compression leggings exerted a significantly higher mean pressure than sports compression tights (P < …). The pressure exerted by sports compression garments is significantly affected by garment type, size and the posture assumed by the wearer.

  17. Selecting a general-purpose data compression algorithm

    Science.gov (United States)

    Mathews, Gary Jason

    1995-01-01

    The National Space Science Data Center's Common Data Format (CDF) is capable of storing many types of data, such as scalar data items, vectors, and multidimensional arrays of bytes, integers, or floating point values. However, regardless of the dimensionality and data type, the data break down into a sequence of bytes that can be fed into a data compression function to reduce the amount of data without losing data integrity, thus remaining fully reconstructible. Because of the diversity of data types and high performance speed requirements, a general-purpose, fast, simple data compression algorithm is required to incorporate data compression into CDF. The questions to ask are how to evaluate and compare compression algorithms, and which compression algorithm meets all the requirements. The objective of this paper is to address these questions and determine the most appropriate compression algorithm to use within the CDF data management package, an approach that is applicable to other software packages with similar data compression needs.
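
    A minimal version of the evaluation the paper calls for might look like the sketch below; the candidate codecs (Python's zlib, bz2 and lzma) and the toy byte stream are stand-ins, not the algorithms actually assessed for CDF:

```python
import bz2
import lzma
import time
import zlib

CANDIDATES = {
    "zlib": (zlib.compress, zlib.decompress),
    "bz2":  (bz2.compress, bz2.decompress),
    "lzma": (lzma.compress, lzma.decompress),
}

def benchmark(data: bytes) -> None:
    """Compare lossless codecs on compression ratio and round-trip speed."""
    for name, (comp, decomp) in CANDIDATES.items():
        t0 = time.perf_counter()
        packed = comp(data)
        t1 = time.perf_counter()
        assert decomp(packed) == data          # data must remain fully reconstructible
        t2 = time.perf_counter()
        print(f"{name:5s} ratio {len(data) / len(packed):5.2f}  "
              f"compress {t1 - t0:.4f}s  decompress {t2 - t1:.4f}s")

# Toy byte stream standing in for a sequence of CDF record bytes.
benchmark(b"scalar,vector,multidimensional array\n" * 5000)
```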

  18. User Guide for Compressible Flow Toolbox Version 2.1 for Use With MATLAB(Registered Trademark); Version 7

    Science.gov (United States)

    Melcher, Kevin J.

    2006-01-01

    This report provides a user guide for the Compressible Flow Toolbox, a collection of algorithms that solve almost 300 linear and nonlinear classical compressible flow relations. The algorithms, implemented in the popular MATLAB programming language, are useful for analysis of one-dimensional steady flow with constant entropy, friction, heat transfer, or shock discontinuities. The solutions do not include any gas dissociative effects. The toolbox also contains functions for comparing and validating the equation-solving algorithms against solutions previously published in the open literature. The classical equations solved by the Compressible Flow Toolbox are: isentropic-flow equations, Fanno flow equations (pertaining to flow of an ideal gas in a pipe with friction), Rayleigh flow equations (pertaining to frictionless flow of an ideal gas, with heat transfer, in a pipe of constant cross section), normal-shock equations, oblique-shock equations, and Prandtl-Meyer expansion equations. At the time this report was published, the Compressible Flow Toolbox was available without cost from the NASA Software Repository.
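
    As a flavour of the relations such a toolbox solves, the sketch below evaluates the standard isentropic-flow ratios for a calorically perfect gas; it is written in Python rather than MATLAB and is not taken from the toolbox source:

```python
def isentropic_ratios(mach: float, gamma: float = 1.4):
    """Standard isentropic flow relations for a calorically perfect gas:
    stagnation-to-static temperature and pressure ratios and the area ratio A/A*."""
    t_ratio = 1.0 + 0.5 * (gamma - 1.0) * mach ** 2            # T0 / T
    p_ratio = t_ratio ** (gamma / (gamma - 1.0))               # p0 / p
    area_ratio = (1.0 / mach) * ((2.0 / (gamma + 1.0)) * t_ratio) ** (
        (gamma + 1.0) / (2.0 * (gamma - 1.0)))                 # A / A*
    return t_ratio, p_ratio, area_ratio

print(isentropic_ratios(2.0))  # approximately (1.8, 7.82, 1.69) for gamma = 1.4
```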

  19. Application of current guidelines for chest compression depth on different surfaces and using feedback devices: a randomized cross-over study.

    Science.gov (United States)

    Schober, P; Krage, R; Lagerburg, V; Van Groeningen, D; Loer, S A; Schwarte, L A

    2014-04-01

    Current cardiopulmonary resuscitation (CPR) guidelines recommend an increased chest compression depth and rate compared to previous guidelines, and the use of automatic feedback devices is encouraged. However, it is unclear whether this compression depth can be maintained at an increased frequency. Moreover, the underlying surface may influence the accuracy of feedback devices. We investigated compression depths over time and evaluated the accuracy of a feedback device on different surfaces. Twenty-four volunteers performed four two-minute blocks of CPR targeting current guideline recommendations on different surfaces (floor, mattress, 2 backboards) on a patient simulator. Participants rested for 2 minutes between blocks. The influences of time and of the different surfaces on chest compression depth (ANOVA, mean [95% CI]) and the accuracy of a feedback device in determining compression depth (Bland-Altman) were assessed. Mean compression depth did not reach the recommended depth and decreased over time during all blocks (first block: from 42 mm [39-46 mm] to 39 mm [37-42 mm]). A two-minute resting period was insufficient to restore compression depth to baseline. No differences in compression depth were observed on the different surfaces. The feedback device slightly underestimated compression depth on the floor (bias -3.9 mm), but markedly overestimated it on the mattress (bias +12.6 mm). This overestimation was eliminated after correcting compression depth with a second sensor between the manikin and the mattress. Strategies are needed to improve chest compression depth, and more than two providers should alternate with chest compressions. The underlying surface does not necessarily adversely affect CPR performance but influences the accuracy of feedback devices. Accuracy is improved by a second, posterior, sensor.

  20. Compression force behaviours: An exploration of the beliefs and values influencing the application of breast compression during screening mammography

    International Nuclear Information System (INIS)

    Murphy, Fred; Nightingale, Julie; Hogg, Peter; Robinson, Leslie; Seddon, Doreen; Mackay, Stuart

    2015-01-01

    This research project investigated the compression behaviours of practitioners during screening mammography. The study sought to provide a qualitative understanding of ‘how’ and ‘why’ practitioners apply compression force. With a clear conflict in the existing literature and little scientific evidence base to support the reasoning behind the application of compression force, this research project investigated the application of compression using a phenomenological approach. Following ethical approval, six focus group interviews were conducted at six different breast screening centres in England. A sample of 41 practitioners were interviewed within the focus groups together with six one-to-one interviews of mammography educators or clinical placement co-ordinators. The findings revealed two broad humanistic and technological categories consisting of 10 themes. The themes included client empowerment, white-lies, time for interactions, uncertainty of own practice, culture, power, compression controls, digital technology, dose audit-safety nets, numerical scales. All of these themes were derived from 28 units of significant meaning (USM). The results demonstrate a wide variation in the application of compression force, thus offering a possible explanation for the difference between practitioner compression forces found in quantitative studies. Compression force was applied in many different ways due to individual practitioner experiences and behaviour. Furthermore, the culture and the practice of the units themselves influenced beliefs and attitudes of practitioners in compression force application. The strongest recommendation to emerge from this study was the need for peer observation to enable practitioners to observe and compare their own compression force practice to that of their colleagues. The findings are significant for clinical practice in order to understand how and why compression force is applied

  1. Memory hierarchy using row-based compression

    Science.gov (United States)

    Loh, Gabriel H.; O'Connor, James M.

    2016-10-25

    A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
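
    A toy software model of the row organization described in this record is sketched below; zlib stands in for the hardware compression logic, and the tag layout is invented for illustration:

```python
import zlib
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CacheRow:
    """One cache row: variable-size compressed data blocks plus one tag per block."""
    tags: List[Tuple[int, int, int]] = field(default_factory=list)  # (block_addr, offset, length)
    data: bytearray = field(default_factory=bytearray)

    def store(self, block_addr: int, block: bytes) -> None:
        packed = zlib.compress(block)                  # compression logic on the fill path
        self.tags.append((block_addr, len(self.data), len(packed)))
        self.data.extend(packed)

    def load(self, block_addr: int) -> bytes:
        for addr, off, length in self.tags:            # tag lookup identifies the block
            if addr == block_addr:
                return zlib.decompress(bytes(self.data[off:off + length]))
        raise KeyError(hex(block_addr))

row = CacheRow()
row.store(0x1000, b"\x00" * 64)        # highly compressible block
row.store(0x1040, bytes(range(64)))    # less compressible block
assert row.load(0x1000) == b"\x00" * 64
print([t[2] for t in row.tags])        # compressed blocks of non-uniform sizes
```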

  2. Compressed Sensing with Rank Deficient Dictionaries

    DEFF Research Database (Denmark)

    Hansen, Thomas Lundgaard; Johansen, Daniel Højrup; Jørgensen, Peter Bjørn

    2012-01-01

    In compressed sensing it is generally assumed that the dictionary matrix constitutes a (possibly overcomplete) basis of the signal space. In this paper we consider dictionaries that do not span the signal space, i.e. rank deficient dictionaries. We show that in this case the signal-to-noise ratio...... (SNR) in the compressed samples can be increased by selecting the rows of the measurement matrix from the column space of the dictionary. As an example application of compressed sensing with a rank deficient dictionary, we present a case study of compressed sensing applied to the Coarse Acquisition (C...
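
    The row-selection idea stated above can be checked numerically with a small sketch (dimensions, rank, sparsity and noise level are all made up): rows drawn from the column space of a rank-deficient dictionary concentrate measurement energy on the subspace in which the signals live, raising the SNR of the compressed samples relative to generic Gaussian rows:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 128, 16, 32                      # signal length, dictionary rank, measurements

# Rank-deficient dictionary: 128 x 64 atoms, but only rank 16.
D = rng.standard_normal((n, k)) @ rng.standard_normal((k, 64))

# Sparse signal in the dictionary plus fixed measurement noise.
coeffs = np.zeros(64)
coeffs[rng.choice(64, 4, replace=False)] = rng.standard_normal(4)
x = D @ coeffs
noise = 0.1 * rng.standard_normal(m)

# (a) Generic Gaussian measurement matrix.
phi_rand = rng.standard_normal((m, n)) / np.sqrt(n)
# (b) Rows drawn from the column space of D (orthonormal basis of range(D)).
U = np.linalg.svd(D, full_matrices=False)[0][:, :k]
phi_sub = (rng.standard_normal((m, k)) @ U.T) / np.sqrt(k)

for name, phi in (("generic rows", phi_rand), ("rows in col-space of D", phi_sub)):
    snr = 10 * np.log10(np.sum((phi @ x) ** 2) / np.sum(noise ** 2))
    print(f"{name:24s} sample SNR = {snr:5.1f} dB")
```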

  3. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: II. Analysis of experimental data of the Neutralized Drift Compression eXperiment-I (NDCX-I)

    International Nuclear Information System (INIS)

    Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex

    2012-01-01

    Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔE_b. In the presence of large voltage errors, δU ≫ ΔE_b, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
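
    In the notation below (chosen here, not taken from the paper), the scaling law quoted in the abstract for the large-error regime can be written as

```latex
% \delta_v   : relative error in the applied velocity modulation
% \delta_{th}: relative intrinsic (thermal) energy spread of the beam ions
\[
  C_{\max} \;\propto\; \frac{1}{\sqrt{\delta_v \, \delta_{th}}}
\]
```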

  4. On the characterisation of the dynamic compressive behaviour of silicon carbides subjected to isentropic compression experiments

    Directory of Open Access Journals (Sweden)

    Zinszner Jean-Luc

    2015-01-01

Ceramic materials are commonly used as protective materials, particularly due to their very high hardness and compressive strength. However, the microstructure of a ceramic has a great influence on its compressive strength and on its ballistic efficiency. To study the influence of microstructural parameters on the dynamic compressive behaviour of silicon carbides, isentropic compression experiments have been performed on two silicon carbide grades using a high pulsed power generator called GEPI. Contrary to plate impact experiments, the use of the GEPI device and of the Lagrangian analysis allows the whole loading path to be determined. The two SiC grades studied present different Hugoniot elastic limits (HEL) due to their different microstructures. For these materials, the experimental technique allowed evaluation of the evolution of the equivalent stress during the dynamic compression. It has been observed that these two grades exhibit more or less pronounced work hardening beyond the HEL. The densification of the material seems to have more influence on the HEL than the grain size.

  5. Radiologic image compression -- A review

    International Nuclear Information System (INIS)

    Wong, S.; Huang, H.K.; Zaremba, L.; Gooden, D.

    1995-01-01

The objective of radiologic image compression is to reduce the data volume of and to achieve a low bit rate in the digital representation of radiologic images without perceived loss of image quality. However, the demand for transmission bandwidth and storage space in the digital radiology environment, especially picture archiving and communication systems (PACS) and teleradiology, and the proliferating use of various imaging modalities, such as magnetic resonance imaging, computed tomography, ultrasonography, nuclear medicine, computed radiography, and digital subtraction angiography, continue to outstrip the capabilities of existing technologies. The availability of lossy coding techniques for clinical diagnoses further implicates many complex legal and regulatory issues. This paper reviews the recent progress of lossless and lossy radiologic image compression and presents the legal challenges of using lossy compression of medical records. To do so, the authors first describe the fundamental concepts of radiologic imaging and digitization. Then, the authors examine current compression technology in the field of medical imaging and discuss important regulatory policies and legal questions facing the use of compression in this field. The authors conclude with a summary of future challenges and research directions. 170 refs

  6. Compressive sensing for urban radar

    CERN Document Server

    Amin, Moeness

    2014-01-01

    With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space, and to effectively address logistic difficulties in data acquisition. Traditionally, these challenges have hindered high resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates.Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracki

  7. On Normalized Compression Distance and Large Malware

    OpenAIRE

    Borbely, Rebecca Schuller

    2015-01-01

    Normalized Compression Distance (NCD) is a popular tool that uses compression algorithms to cluster and classify data in a wide range of applications. Existing discussions of NCD's theoretical merit rely on certain theoretical properties of compression algorithms. However, we demonstrate that many popular compression algorithms don't seem to satisfy these theoretical properties. We explore the relationship between some of these properties and file size, demonstrating that this theoretical pro...

  8. SVD compression for magnetic resonance fingerprinting in the time domain.

    Science.gov (United States)

    McGivney, Debra F; Pierre, Eric; Ma, Dan; Jiang, Yun; Saybasili, Haris; Gulani, Vikas; Griswold, Mark A

    2014-12-01

Magnetic resonance (MR) fingerprinting is a technique for acquiring and processing MR data that simultaneously provides quantitative maps of different tissue parameters through a pattern recognition algorithm. A predefined dictionary models the possible signal evolutions simulated using the Bloch equations with different combinations of various MR parameters, and pattern recognition is completed by computing the inner product between the observed signal and each of the predicted signals within the dictionary. Though this matching algorithm has been shown to accurately predict the MR parameters of interest, a more efficient method to obtain the quantitative images is desired. We propose to compress the dictionary using the singular value decomposition, which will provide a low-rank approximation. By compressing the size of the dictionary in the time domain, we are able to speed up the pattern recognition algorithm, by a factor of between 3.4 and 4.8, without sacrificing the high signal-to-noise ratio of the original scheme presented previously.
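    A hedged numpy sketch of the time-domain SVD compression described above; the toy dictionary below is generated from arbitrary exponential-times-cosine curves rather than Bloch simulations, and the sizes and rank are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_entries, n_time, rank = 5000, 1000, 25      # illustrative sizes

# Stand-in dictionary: smooth signal evolutions (real ones come from Bloch simulation)
t = np.linspace(0, 1, n_time)
params = rng.uniform(0.5, 5.0, (n_entries, 2))
D = np.exp(-np.outer(params[:, 0], t)) * np.cos(np.outer(params[:, 1], 10 * t))

# Low-rank approximation: keep the first `rank` right singular vectors
_, _, Vt = np.linalg.svd(D, full_matrices=False)
Vr = Vt[:rank]                                 # rank x n_time projection basis
D_c = D @ Vr.T                                 # compressed dictionary: n_entries x rank

def match(signal):
    """Dot-product matching performed in the compressed time domain."""
    s_c = Vr @ signal
    scores = D_c @ s_c / (np.linalg.norm(D_c, axis=1) * np.linalg.norm(s_c))
    return int(np.argmax(scores))

truth = 1234
observed = D[truth] + 0.01 * rng.standard_normal(n_time)
print("matched entry:", match(observed), "(true entry:", truth, ")")
```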

  9. A hybrid data compression approach for online backup service

    Science.gov (United States)

    Wang, Hua; Zhou, Ke; Qin, MingKang

    2009-08-01

With the popularity of SaaS (Software as a Service), backup services have become a hot topic in storage applications. Because of the large number of backup users, reducing the massive data load is a key problem for system designers. Data compression provides a good solution. Traditional data compression applications adopt a single method, which has limitations in some respects: data stream compression can only realize intra-file compression, de-duplication only eliminates inter-file redundant data, and compression efficiency cannot meet the needs of backup service software. This paper proposes a novel hybrid compression approach with two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to realize intra-file de-duplication. Several compression algorithms were adopted to measure the compression ratio and CPU time, and the suitability of different algorithms for particular situations is also analyzed. The performance analysis shows that a great improvement is made through the hybrid compression policy.
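    A toy sketch of the two-level policy outlined above, not the authors' system: content hashes deduplicate chunks globally across users, and any chunk not seen before is stream-compressed at the block level. SHA-256, zlib and the chunk size are stand-in assumptions.

```python
import hashlib
import zlib

class HybridBackupStore:
    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}         # global level: hash -> compressed chunk, shared across users
        self.manifests = {}      # per-file list of chunk hashes

    def put(self, name, data):
        hashes = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            h = hashlib.sha256(chunk).hexdigest()
            if h not in self.chunks:                    # de-duplication across files/users
                self.chunks[h] = zlib.compress(chunk)   # block-level stream compression
            hashes.append(h)
        self.manifests[name] = hashes

    def get(self, name):
        return b"".join(zlib.decompress(self.chunks[h]) for h in self.manifests[name])

store = HybridBackupStore()
payload = b"the same backup payload " * 2000
store.put("userA/doc.txt", payload)
store.put("userB/copy.txt", payload)                    # second user stores no new chunks
assert store.get("userB/copy.txt") == payload
print("unique compressed chunks stored:", len(store.chunks))
```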

  10. Induction of a shorter compression phase is correlated with a deeper chest compression during metronome-guided cardiopulmonary resuscitation: a manikin study.

    Science.gov (United States)

    Chung, Tae Nyoung; Bae, Jinkun; Kim, Eui Chung; Cho, Yun Kyung; You, Je Sung; Choi, Sung Wook; Kim, Ok Jun

    2013-07-01

Recent studies have shown that there may be an interaction between duty cycle and other factors related to the quality of chest compression. Duty cycle represents the fraction of the compression phase. We aimed to investigate the effect of a shorter compression phase on average chest compression depth during metronome-guided cardiopulmonary resuscitation. Senior medical students performed 12 sets of chest compressions following the guiding sounds, with three down-stroke patterns (normal, fast and very fast) and four rates (80, 100, 120 and 140 compressions/min) in random sequence. Repeated-measures analysis of variance was used to compare the average chest compression depth and duty cycle among the trials. The average chest compression depth increased and the duty cycle decreased in a linear fashion as the down-stroke pattern shifted from normal to very fast (p < 0.001). Inducing a shorter compression phase is thus correlated with a deeper chest compression during metronome-guided cardiopulmonary resuscitation.

  11. Oscillating patterns in image processing and nonlinear evolution equations the fifteenth Dean Jacqueline B. Lewis memorial lectures

    CERN Document Server

    Meyer, Yves

    2001-01-01

    Image compression, the Navier-Stokes equations, and detection of gravitational waves are three seemingly unrelated scientific problems that, remarkably, can be studied from one perspective. The notion that unifies the three problems is that of "oscillating patterns", which are present in many natural images, help to explain nonlinear equations, and are pivotal in studying chirps and frequency-modulated signals. The first chapter of this book considers image processing, more precisely algorithms of image compression and denoising. This research is motivated in particular by the new standard for compression of still images known as JPEG-2000. The second chapter has new results on the Navier-Stokes and other nonlinear evolution equations. Frequency-modulated signals and their use in the detection of gravitational waves are covered in the final chapter. In the book, the author describes both what the oscillating patterns are and the mathematics necessary for their analysis. It turns out that this mathematics invo...

  12. Compression Characteristics of Solid Wastes as Backfill Materials

    OpenAIRE

    Meng Li; Jixiong Zhang; Rui Gao

    2016-01-01

    A self-made large-diameter compression steel chamber and a SANS material testing machine were chosen to perform a series of compression tests in order to fully understand the compression characteristics of differently graded filling gangue samples. The relationship between the stress-deformation modulus and stress-compression degree was analyzed comparatively. The results showed that, during compression, the deformation modulus of gangue grew linearly with stress, the overall relationship bet...

  13. Dynamical instability of hot and compressed nuclei

    International Nuclear Information System (INIS)

    Ngo, C.; Leray, S.; Spina, M.E.; Ngo, H.

    1989-01-01

The dynamical evolution of a hot and compressed nucleus is described by means of an extended liquid-drop model. Using only the continuity equation and the energy conservation we show that the system expands after a while. The possible global instabilities of the drop are studied by applying the general conditions of stability of dynamical systems. We find that the nucleus is unstable if it can reach a low density configuration (≅0.07 nucleon/fm³). Such a configuration is obtained if the initial compression of the nucleus is large enough. It is shown that the thermal excitation energy has much less influence than the compressional energy. These instability conditions are in good agreement with those obtained previously within the framework of lattice percolation and the same model for the dynamical expansion. Since local instabilities may also very likely be present, we propose to study them using a restructured aggregation model. They lead to a multifragmentation of the system, a mechanism which is known experimentally to exist. We find that local instabilities occur at smaller (but very close) density values than global ones. A moment analysis of the calculated multifragmentation events allows to extract a critical exponent in excellent agreement with the one deduced experimentally from Au-induced reactions. (orig.)

  14. Exploring compression techniques for ROOT IO

    Science.gov (United States)

    Zhang, Z.; Bockelman, B.

    2017-10-01

ROOT provides a flexible format used throughout the HEP community. The number of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high “compression level” in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read). At the scale of the LHC experiment, poor design choices can result in terabytes of wasted space or wasted CPU time. We explore and attempt to quantify some of these tradeoffs. Specifically, we explore: the use of alternate compression algorithms to optimize for read performance; an alternate method of compressing individual events to allow efficient random access; and a new approach to whole-file compression. Quantitative results are given, as well as guidance on how to make compression decisions for different use cases.
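    The compression-level trade-off discussed above can be illustrated outside ROOT with a rough stand-alone measurement; this sketch times zlib (DEFLATE) at a few levels on synthetic data, and the payload and levels are arbitrary assumptions, not the paper's benchmark.

```python
import time
import zlib

import numpy as np

# Synthetic "event" payload: correlated floats compress partially, like real physics data
rng = np.random.default_rng(0)
payload = np.cumsum(rng.standard_normal(500_000)).astype(np.float32).tobytes()

for level in (1, 4, 6, 9):
    t0 = time.perf_counter()
    blob = zlib.compress(payload, level)
    t_compress = time.perf_counter() - t0

    t0 = time.perf_counter()
    zlib.decompress(blob)
    t_decompress = time.perf_counter() - t0

    ratio = len(payload) / len(blob)
    print(f"level {level}: ratio {ratio:4.2f}, "
          f"compress {t_compress * 1e3:6.1f} ms, decompress {t_decompress * 1e3:5.1f} ms")
```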

  15. Stress analysis of shear/compression test

    International Nuclear Information System (INIS)

    Nishijima, S.; Okada, T.; Ueno, S.

    1997-01-01

Stress analysis has been performed on glass fiber reinforced plastics (GFRP) subjected to combined shear and compression stresses by means of the finite element method. Two types of experimental set-up were analyzed, namely the parallel and series methods, in which the specimen is compressed by tilted jigs that apply the combined stresses. A modified Tsai-Hill criterion was employed to judge failure under the combined stresses, that is, the shear strength under compressive stress. Different failure envelopes were obtained for the two set-ups. In the parallel system the shear strength first increased with compressive stress and then decreased. In contrast, in the series system the shear strength decreased monotonically with compressive stress. The difference is caused by the different stress distributions due to the different constraint conditions. The basic parameters which control failure under the combined stresses will be discussed.

  16. Wavelet-based audio embedding and audio/video compression

    Science.gov (United States)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  17. Prevention of deep vein thrombosis in potential neurosurgical patients. A randomized trial comparing graduated compression stockings alone or graduated compression stockings plus intermittent pneumatic compression with control

    International Nuclear Information System (INIS)

    Turpie, A.G.; Hirsh, J.; Gent, M.; Julian, D.; Johnson, J.

    1989-01-01

    In a randomized trial of neurosurgical patients, groups wearing graduated compression stockings alone (group 1) or graduated compression stockings plus intermittent pneumatic compression (IPC) (group 2) were compared with an untreated control group in the prevention of deep vein thrombosis (DVT). In both active treatment groups, the graduated compression stockings were continued for 14 days or until hospital discharge, if earlier. In group 2, IPC was continued for seven days. All patients underwent DVT surveillance with iodine 125-labeled fibrinogen leg scanning and impedance plethysmography. Venography was carried out if either test became abnormal. Deep vein thrombosis occurred in seven (8.8%) of 80 patients in group 1, in seven (9.0%) of 78 patients in group 2, and in 16 (19.8%) of 81 patients in the control group. The observed differences among these rates are statistically significant. The results of this study indicate that graduated compression stockings alone or in combination with IPC are effective methods of preventing DVT in neurosurgical patients

  18. Compression-absorption (resorption) refrigerating machinery. Modeling of reactors; Machine frigorifique a compression-absorption (resorption). Modelisation des reacteurs

    Energy Technology Data Exchange (ETDEWEB)

    Lottin, O; Feidt, M; Benelmir, R [LEMTA-UHP Nancy-1, 54 - Vandoeuvre-les-Nancy (France)

    1998-12-31

    This paper is a series of transparencies presenting a comparative study of the thermal performances of different types of refrigerating machineries: di-thermal with vapor compression, tri-thermal with moto-compressor, with ejector, with free piston, adsorption-type, resorption-type, absorption-type, compression-absorption-type. A prototype of ammonia-water compression-absorption heat pump is presented and modeled. (J.S.)

  19. Compression-absorption (resorption) refrigerating machinery. Modeling of reactors; Machine frigorifique a compression-absorption (resorption). Modelisation des reacteurs

    Energy Technology Data Exchange (ETDEWEB)

    Lottin, O.; Feidt, M.; Benelmir, R. [LEMTA-UHP Nancy-1, 54 - Vandoeuvre-les-Nancy (France)

    1997-12-31

    This paper is a series of transparencies presenting a comparative study of the thermal performances of different types of refrigerating machineries: di-thermal with vapor compression, tri-thermal with moto-compressor, with ejector, with free piston, adsorption-type, resorption-type, absorption-type, compression-absorption-type. A prototype of ammonia-water compression-absorption heat pump is presented and modeled. (J.S.)

  20. Data Compression with Linear Algebra

    OpenAIRE

    Etler, David

    2015-01-01

    A presentation on the applications of linear algebra to image compression. Covers entropy, the discrete cosine transform, thresholding, quantization, and examples of images compressed with DCT. Given in Spring 2015 at Ocean County College as part of the honors program.
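    A brief sketch of the pipeline the presentation covers (discrete cosine transform, thresholding, quantization) applied block-wise to a synthetic image; the step size and kept-coefficient fraction are arbitrary assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
# Blocky 128x128 synthetic test image with a little noise
img = np.kron(rng.uniform(0, 255, (16, 16)), np.ones((8, 8))) + rng.normal(0, 5, (128, 128))

step, keep = 16.0, 0.05          # quantization step and fraction of coefficients kept (assumptions)
recon = np.empty_like(img)
kept = 0

for i in range(0, img.shape[0], 8):
    for j in range(0, img.shape[1], 8):
        block = img[i:i + 8, j:j + 8]
        coef = dctn(block, norm="ortho")                # 2-D discrete cosine transform
        thresh = np.quantile(np.abs(coef), 1 - keep)    # thresholding: drop small coefficients
        coef[np.abs(coef) < thresh] = 0.0
        q = np.round(coef / step)                       # uniform quantization
        kept += np.count_nonzero(q)
        recon[i:i + 8, j:j + 8] = idctn(q * step, norm="ortho")

mse = np.mean((img - recon) ** 2)
print(f"nonzero coefficients kept: {kept}/{img.size}, MSE: {mse:.2f}")
```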

  1. Tokamak plasma variations under rapid compression

    International Nuclear Information System (INIS)

    Holmes, J.A.; Peng, Y.K.M.; Lynch, S.J.

    1980-04-01

Changes in plasmas undergoing large, rapid compressions are examined numerically over the following range of aspect ratios A: 3 ≥ A ≥ 1.5 for major radius compressions of circular, elliptical, and D-shaped cross sections; and 3 ≤ A ≤ 6 for minor radius compressions of circular and D-shaped cross sections. The numerical approach combines the computation of fixed boundary MHD equilibria with single-fluid, flux-surface-averaged energy balance, particle balance, and magnetic flux diffusion equations. It is found that the dependences of plasma current I_p and poloidal beta β̄_p on the compression ratio C differ significantly in major radius compressions from those proposed by Furth and Yoshikawa. The present interpretation is that compression to small A dramatically increases the plasma current, which lowers β̄_p and makes the plasma more paramagnetic. Despite large values of toroidal beta β̄_T (≥ 30% with q_axis ≈ 1, q_edge ≈ 3), this tends to concentrate more toroidal flux near the magnetic axis, which means that a reduced minor radius is required to preserve the continuity of the toroidal flux function F at the plasma edge. Minor radius compressions to large aspect ratio agree well with the Furth-Yoshikawa scaling laws.

  2. Benign compression fractures of the spine: signal patterns

    International Nuclear Information System (INIS)

    Ryu, Kyung Nam; Choi, Woo Suk; Lee, Sun Wha; Lim, Jae Hoon

    1992-01-01

Fifteen patients with 38 compression fractures of the spine underwent magnetic resonance (MR) imaging. We retrospectively evaluated the MR images of these benign compression fractures. The MR images showed four patterns on T1-weighted images: normal signal (21), band-like low signal (8), low signal with preservation of the peripheral portion of the body (8), and diffuse low signal throughout the vertebral body (1). The low-signal portions changed to high signal intensities on T2-weighted images. In 7 of 15 patients (11 compression fractures) there was a history of trauma, and the remaining 8 patients (27 compression fractures) had no history of trauma. Benign compression fractures of the spine reveal variable signal intensities on MR images. These patterns of benign compression fractures may be useful in the interpretation of MR images of the spine.

  3. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

Introduction: With the promotion and application of digital imaging technology in the medical domain, the volume of medical images has grown rapidly. However, the commonly used compression methods cannot achieve satisfying results. Methods: In this paper, based on existing experiments and conclusions, the lifting step approach is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account, the contrast sensitivity function (CSF) is introduced as the main research issue of the human vision system (HVS), and the main design points of the HVS model are presented. On the basis of multi-resolution analyses of the wavelet transform, the paper applies the HVS, including the CSF characteristics, to the decorrelating transform and quantization of the image and proposes a new HVS-based medical image compression model. Results: The experiments are done on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT, with respect to the PSNR metric, is significantly higher than that of our algorithm. But the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality, and performs better than SPIHT in terms of compression ratio and coding/decoding time.

  4. HVS-based medical image compression

    International Nuclear Information System (INIS)

    Kai Xie; Jie Yang; Min Zhuyue; Liang Lixiao

    2005-01-01

Introduction: With the promotion and application of digital imaging technology in the medical domain, the volume of medical images has grown rapidly. However, the commonly used compression methods cannot achieve satisfying results. Methods: In this paper, based on existing experiments and conclusions, the lifting step approach is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account, the contrast sensitivity function (CSF) is introduced as the main research issue of the human vision system (HVS), and the main design points of the HVS model are presented. On the basis of multi-resolution analyses of the wavelet transform, the paper applies the HVS, including the CSF characteristics, to the decorrelating transform and quantization of the image and proposes a new HVS-based medical image compression model. Results: The experiments are done on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT, with respect to the PSNR metric, is significantly higher than that of our algorithm. But the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality, and performs better than SPIHT in terms of compression ratio and coding/decoding time.

  5. An efficient compression scheme for bitmap indices

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are not only appropriate for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices and B-tree indices. In addition, we also verified that the average query response time
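    A simplified word-aligned run-length encoder in the spirit of WAH (not the authors' implementation): each 32-bit word is either a literal holding 31 bitmap bits or a fill recording a run of identical all-zero or all-one 31-bit groups, so bitwise queries can proceed word by word without full decompression.

```python
import numpy as np

def wah_encode(bits):
    """Encode a 0/1 array into 32-bit words: literals (MSB=0) and fills (MSB=1)."""
    pad = (-len(bits)) % 31
    groups = np.pad(bits, (0, pad)).reshape(-1, 31)
    words, run_val, run_len = [], None, 0

    def flush_run():
        nonlocal run_val, run_len
        if run_len:
            words.append((1 << 31) | (run_val << 30) | run_len)   # fill word
            run_val, run_len = None, 0

    for g in groups:
        s = int(g.sum())
        if s in (0, 31):                       # all-zero or all-one group extends a fill
            val = 1 if s == 31 else 0
            if val != run_val:
                flush_run()
                run_val = val
            run_len += 1
        else:
            flush_run()
            words.append(int(g @ (1 << np.arange(31))))           # literal word, MSB stays 0
    flush_run()
    return words

rng = np.random.default_rng(0)
bitmap = (rng.random(10_000) < 0.01).astype(np.uint8)             # sparse bitmap index column
code = wah_encode(bitmap)
print(f"{len(bitmap)} bits -> {len(code)} words ({len(code) * 4} bytes)")
```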

  6. A biological compression model and its applications.

    Science.gov (United States)

    Cao, Minh Duc; Dix, Trevor I; Allison, Lloyd

    2011-01-01

    A biological compression model, expert model, is presented which is superior to existing compression algorithms in both compression performance and speed. The model is able to compress whole eukaryotic genomes. Most importantly, the model provides a framework for knowledge discovery from biological data. It can be used for repeat element discovery, sequence alignment and phylogenetic analysis. We demonstrate that the model can handle statistically biased sequences and distantly related sequences where conventional knowledge discovery tools often fail.

  7. Optimisation algorithms for ECG data compression.

    Science.gov (United States)

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
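    A hedged sketch of the underlying optimisation idea rather than the paper's network formulation: choose a fixed number of samples (keeping both endpoints) so that piecewise-linear interpolation through them minimises the squared reconstruction error, solved exactly by dynamic programming.

```python
import numpy as np

def segment_cost(x, i, j):
    """Squared error of linearly interpolating x between kept samples i and j."""
    if j - i < 2:
        return 0.0
    t = np.arange(i, j + 1)
    interp = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
    return float(np.sum((x[i:j + 1] - interp) ** 2))

def best_samples(x, k):
    """Exact DP: minimal-error subset of k samples, endpoints always included."""
    n = len(x)
    INF = float("inf")
    cost = np.full((n, k), INF)        # cost[i, m]: best error for x[0..i] with m+1 kept samples ending at i
    prev = np.full((n, k), -1, dtype=int)
    cost[0, 0] = 0.0
    for m in range(1, k):
        for i in range(m, n):
            for p in range(m - 1, i):
                c = cost[p, m - 1] + segment_cost(x, p, i)
                if c < cost[i, m]:
                    cost[i, m], prev[i, m] = c, p
    # Backtrack from the last sample
    idx, i, m = [n - 1], n - 1, k - 1
    while m > 0:
        i = prev[i, m]
        m -= 1
        idx.append(i)
    return sorted(idx), cost[n - 1, k - 1]

t = np.linspace(0, 1, 120)
ecg_like = np.sin(2 * np.pi * 3 * t) + 0.8 * np.exp(-((t - 0.5) / 0.02) ** 2)  # crude ECG-like test signal
kept, err = best_samples(ecg_like, k=12)
print(f"kept {len(kept)} of {len(ecg_like)} samples, squared error {err:.4f}")
```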

  8. The compressed word problem for groups

    CERN Document Server

    Lohrey, Markus

    2014-01-01

    The Compressed Word Problem for Groups provides a detailed exposition of known results on the compressed word problem, emphasizing efficient algorithms for the compressed word problem in various groups. The author presents the necessary background along with the most recent results on the compressed word problem to create a cohesive self-contained book accessible to computer scientists as well as mathematicians. Readers will quickly reach the frontier of current research which makes the book especially appealing for students looking for a currently active research topic at the intersection of group theory and computer science. The word problem introduced in 1910 by Max Dehn is one of the most important decision problems in group theory. For many groups, highly efficient algorithms for the word problem exist. In recent years, a new technique based on data compression for providing more efficient algorithms for word problems, has been developed, by representing long words over group generators in a compres...

  9. ERGC: an efficient referential genome compression algorithm.

    Science.gov (United States)

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-11-01

    Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. rajasek@engr.uconn.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
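    A toy illustration of reference-based compression in general, not the ERGC algorithm itself: a target sequence of the same length as the reference is stored as (position, base) substitutions plus a generic entropy stage; real tools also handle insertions, deletions and rearrangements.

```python
import random
import zlib

def encode_vs_reference(target, reference):
    """Store only positions/bases where target differs from reference (equal lengths assumed)."""
    diffs = [f"{i}:{b}" for i, (a, b) in enumerate(zip(reference, target)) if a != b]
    return zlib.compress(";".join(diffs).encode())

def decode_vs_reference(blob, reference):
    seq = list(reference)
    text = zlib.decompress(blob).decode()
    if text:
        for item in text.split(";"):
            pos, base = item.split(":")
            seq[int(pos)] = base
    return "".join(seq)

random.seed(0)
reference = "".join(random.choice("ACGT") for _ in range(100_000))
target = list(reference)
for pos in random.sample(range(len(target)), 500):       # roughly 0.5% substitutions
    target[pos] = random.choice("ACGT")
target = "".join(target)

blob = encode_vs_reference(target, reference)
assert decode_vs_reference(blob, reference) == target
print(f"raw {len(target)} bytes -> {len(blob)} bytes referentially encoded")
```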

  10. Nonpainful wide-area compression inhibits experimental pain.

    Science.gov (United States)

    Honigman, Liat; Bar-Bachar, Ofrit; Yarnitsky, David; Sprecher, Elliot; Granovsky, Yelena

    2016-09-01

    Compression therapy, a well-recognized treatment for lymphoedema and venous disorders, pressurizes limbs and generates massive non-noxious afferent sensory barrages. The aim of this study was to study whether such afferent activity has an analgesic effect when applied on the lower limbs, hypothesizing that larger compression areas will induce stronger analgesic effects, and whether this effect correlates with conditioned pain modulation (CPM). Thirty young healthy subjects received painful heat and pressure stimuli (47°C for 30 seconds, forearm; 300 kPa for 15 seconds, wrist) before and during 3 compression protocols of either SMALL (up to ankles), MEDIUM (up to knees), or LARGE (up to hips) compression areas. Conditioned pain modulation (heat pain conditioned by noxious cold water) was tested before and after each compression protocol. The LARGE protocol induced more analgesia for heat than the SMALL protocol (P < 0.001). The analgesic effect interacted with gender (P = 0.015). The LARGE protocol was more efficient for females, whereas the MEDIUM protocol was more efficient for males. Pressure pain was reduced by all protocols (P < 0.001) with no differences between protocols and no gender effect. Conditioned pain modulation was more efficient than the compression-induced analgesia. For the LARGE protocol, precompression CPM efficiency positively correlated with compression-induced analgesia. Large body area compression exerts an area-dependent analgesic effect on experimental pain stimuli. The observed correlation with pain inhibition in response to robust non-noxious sensory stimulation may suggest that compression therapy shares similar mechanisms with inhibitory pain modulation assessed through CPM.

  11. Density ratios in compressions driven by radiation pressure

    International Nuclear Information System (INIS)

    Lee, S.

    1988-01-01

It has been suggested that in the cannonball scheme of laser compression the pellet may be considered to be compressed by the 'brute force' of the radiation pressure. For such a radiation-driven compression, an energy balance method is applied to give an equation fixing the radius compression ratio K which is a key parameter for such intense compressions. A shock model is used to yield specific results. For a square-pulse driving power compressing a spherical pellet with a specific heat ratio of 5/3, a density compression ratio Γ of 27 is computed. Double (stepped) pulsing with linearly rising power enhances Γ to 1750. The value of Γ is not dependent on the absolute magnitude of the piston power, as long as this is large enough. Further enhancement of compression by multiple (stepped) pulsing becomes obvious. The enhanced compression increases the energy gain factor G for a 100 μm DT pellet driven by radiation power of 10¹⁶ W from 6 for a square pulse power with 0.5 MJ absorbed energy to 90 for a double (stepped) linearly rising pulse with absorbed energy of 0.4 MJ assuming perfect coupling efficiency. (author)

  12. A Monte Carlo model for mean glandular dose evaluation in spot compression mammography.

    Science.gov (United States)

    Sarno, Antonio; Dance, David R; van Engen, Ruben E; Young, Kenneth C; Russo, Paolo; Di Lillo, Francesca; Mettivier, Giovanni; Bliznakova, Kristina; Fei, Baowei; Sechopoulos, Ioannis

    2017-07-01

To characterize the dependence of normalized glandular dose (DgN) on various breast model and image acquisition parameters during spot compression mammography and other partial breast irradiation conditions, and evaluate alternative previously proposed dose-related metrics for this breast imaging modality. Using Monte Carlo simulations with both simple homogeneous breast models and patient-specific breasts, three different dose-related metrics for spot compression mammography were compared: the standard DgN, the normalized glandular dose to only the directly irradiated portion of the breast (DgNv), and the DgN obtained by the product of the DgN for full field irradiation and the ratio of the mid-height area of the irradiated breast to the entire breast area (DgNM). How these metrics vary with field-of-view size, spot area thickness, x-ray energy, spot area and position, breast shape and size, and system geometry was characterized for the simple breast model and a comparison of the simple model results to those with patient-specific breasts was also performed. The DgN in spot compression mammography can vary considerably with breast area. However, the difference in breast thickness between the spot compressed area and the uncompressed area does not introduce a variation in DgN. As long as the spot compressed area is completely within the breast area and only the compressed breast portion is directly irradiated, its position and size does not introduce a variation in DgN for the homogeneous breast model. As expected, DgN is lower than DgNv for all partial breast irradiation areas, especially when considering spot compression areas within the clinically used range. DgNM underestimates DgN by 6.7% for a W/Rh spectrum at 28 kVp and for a 9 × 9 cm² compression paddle. As part of the development of a new breast dosimetry model, a task undertaken by the American Association of Physicists in Medicine and the European Federation of Organizations of Medical Physics

  13. Compression and fast retrieval of SNP data.

    Science.gov (United States)

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-11-01

    The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
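    A schematic sketch of idea (i) above: within a linkage-disequilibrium block, each SNP column is stored as its difference from the block's reference SNP, which is near-zero for correlated SNPs and therefore compresses well. The 0/1/2 genotype coding, block sizes and use of zlib are stand-in assumptions, not the paper's format.

```python
import zlib

import numpy as np

rng = np.random.default_rng(0)

# Simulated LD block: 20 SNP columns over 50,000 subjects, highly correlated with the first SNP
n_subjects, n_snps = 50_000, 20
ref_snp = rng.integers(0, 3, n_subjects, dtype=np.int8)           # genotypes coded 0/1/2
block = np.tile(ref_snp, (n_snps, 1))
flips = rng.random(block.shape) < 0.02                            # 2% of genotypes differ
block[flips] = rng.integers(0, 3, int(flips.sum()), dtype=np.int8)

direct = zlib.compress(block.tobytes())

# Difference coding: reference SNP stored as-is, remaining SNPs as (value - reference) mod 3
diff = np.mod(block - ref_snp, 3).astype(np.int8)
diff[0] = ref_snp                                                 # keep the reference column itself
referential = zlib.compress(diff.tobytes())

print(f"direct: {len(direct)} bytes, difference-coded: {len(referential)} bytes")
```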

  14. Relationship between medical compression and intramuscular pressure as an explanation of a compression paradox.

    Science.gov (United States)

    Uhl, J-F; Benigni, J-P; Cornu-Thenard, A; Fournier, J; Blin, E

    2015-06-01

Using standing magnetic resonance imaging (MRI), we recently showed that medical compression, providing an interface pressure (IP) of 22 mmHg, significantly compressed the deep veins of the leg but not, paradoxically, superficial varicose veins. To provide an explanation for this compression paradox by studying the correlation between the IP exerted by medical compression and intramuscular pressure (IMP). In 10 legs of five healthy subjects, we studied the effects of different IPs on the IMP of the medial gastrocnemius muscle. The IP produced by a cuff manometer was verified by a Picopress® device. The IMP was measured with a 21G needle connected to a manometer. Pressure data were recorded in the prone and standing positions with cuff manometer pressures from 0 to 50 mmHg. In the prone position, an IP of less than 20 mmHg did not significantly change the IMP. On the contrary, a perfect linear correlation with the IMP (r = 0.99) was observed with an IP from 20 to 50 mmHg. We found the same correlation in the standing position. We found that an IP of 22 mmHg produced a significant IMP increase from 32 to 54 mmHg in the standing position. At the same time, in healthy subjects the subcutaneous pressure is provided only by the compression device. In other words, the subcutaneous pressure plus the IP is only a little higher than 22 mmHg, a pressure too low to reduce the caliber of the superficial veins. This is in accordance with our standing MRI 3D anatomical study which showed that, paradoxically, when applying low pressures (IP), the deep veins are compressed while the superficial veins are not. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  15. Highly Efficient Compression Algorithms for Multichannel EEG.

    Science.gov (United States)

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
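    A much-simplified lossless sketch in the spirit of the predictor-plus-entropy-coder schemes surveyed above (not the proposed MVAR or context-based method): each channel is decorrelated with a fixed second-order predictor and the integer residuals are packed with zlib standing in for the arithmetic/Huffman stage.

```python
import zlib

import numpy as np

def compress_channels(eeg):
    """Lossless: predictor pred[n] = 2*x[n-1] - x[n-2]; store the residuals."""
    x = eeg.astype(np.int32)
    pred = np.zeros_like(x)
    pred[:, 1] = x[:, 0]
    pred[:, 2:] = 2 * x[:, 1:-1] - x[:, :-2]
    residual = x - pred                      # small, peaked around zero for smooth signals
    return zlib.compress(residual.tobytes())

def decompress_channels(blob, n_ch):
    res = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(n_ch, -1).copy()
    x = res
    x[:, 1] += x[:, 0]
    for n in range(2, x.shape[1]):           # invert the predictor sample by sample
        x[:, n] += 2 * x[:, n - 1] - x[:, n - 2]
    return x

rng = np.random.default_rng(0)
t = np.arange(4096)
eeg = (50 * np.sin(2 * np.pi * 10 * t / 256) + rng.normal(0, 2, (8, t.size))).astype(np.int32)

blob = compress_channels(eeg)
assert np.array_equal(decompress_channels(blob, 8), eeg)
print(f"raw {eeg.nbytes} bytes -> {len(blob)} bytes ({eeg.nbytes / len(blob):.1f}:1)")
```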

  16. Effects of flashlight guidance on chest compression performance in cardiopulmonary resuscitation in a noisy environment.

    Science.gov (United States)

    You, Je Sung; Chung, Sung Phil; Chang, Chul Ho; Park, Incheol; Lee, Hye Sun; Kim, SeungHo; Lee, Hahn Shick

    2013-08-01

    In real cardiopulmonary resuscitation (CPR), noise can arise from instructional voices and environmental sounds in places such as a battlefield and industrial and high-traffic areas. A feedback device using a flashing light was designed to overcome noise-induced stimulus saturation during CPR. This study was conducted to determine whether 'flashlight' guidance influences CPR performance in a simulated noisy setting. We recruited 30 senior medical students with no previous experience of using flashlight-guided CPR to participate in this prospective, simulation-based, crossover study. The experiment was conducted in a simulated noisy situation using a cardiac arrest model without ventilation. Noise such as patrol car and fire engine sirens was artificially generated. The flashlight guidance device emitted light pulses at the rate of 100 flashes/min. Participants also received instructions to achieve the desired rate of 100 compressions/min. CPR performances were recorded with a Resusci Anne mannequin with a computer skill-reporting system. There were significant differences between the control and flashlight groups in mean compression rate (MCR), MCR/min and visual analogue scale. However, there were no significant differences in correct compression depth, mean compression depth, correct hand position, and correctly released compression. The flashlight group constantly maintained the pace at the desired 100 compressions/min. Furthermore, the flashlight group had a tendency to keep the MCR constant, whereas the control group had a tendency to decrease it after 60 s. Flashlight-guided CPR is particularly advantageous for maintaining a desired MCR during hands-only CPR in noisy environments, where metronome pacing might not be clearly heard.

  17. Crystal and Particle Engineering Strategies for Improving Powder Compression and Flow Properties to Enable Continuous Tablet Manufacturing by Direct Compression.

    Science.gov (United States)

    Chattoraj, Sayantan; Sun, Changquan Calvin

    2018-04-01

Continuous manufacturing of tablets has many advantages, including batch size flexibility, demand-adaptive scale up or scale down, consistent product quality, small operational footprint, and increased manufacturing efficiency. Simplicity makes direct compression the most suitable process for continuous tablet manufacturing. However, deficiencies in powder flow and compression of active pharmaceutical ingredients (APIs) limit the range of drug loading that can routinely be considered for direct compression. For the widespread adoption of continuous direct compression, effective API engineering strategies to address powder flow and compression problems are needed. Appropriate implementation of these strategies would facilitate the design of high-quality robust drug products, as stipulated by the Quality-by-Design framework. Here, several crystal and particle engineering strategies for improving powder flow and compression properties are summarized. The focus is on the underlying materials science, which is the foundation for effective API engineering to enable successful continuous manufacturing by the direct compression process. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  18. Compressed gas fuel storage system

    Science.gov (United States)

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  19. Compressed sensing for distributed systems

    CERN Document Server

    Coluccia, Giulio; Magli, Enrico

    2015-01-01

This book presents a survey of the state-of-the-art in the exciting and timely topic of compressed sensing for distributed systems. It has to be noted that, while compressed sensing has been studied for some time now, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting the latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...

  20. A review on compressed pattern matching

    Directory of Open Access Journals (Sweden)

    Surya Prakash Mishra

    2016-09-01

Compressed pattern matching (CPM) refers to the task of locating all the occurrences of a pattern (or set of patterns) inside the body of compressed text. In this type of matching, the pattern may or may not be compressed. CPM is very useful in handling large volumes of data, especially over the network. It has many applications in computational biology, where it is useful for finding similar trends in DNA sequences, as well as in network intrusion detection and big data analytics. Various solutions have been provided by researchers in which the pattern is matched directly over the uncompressed text. Such solutions require a lot of space and time when handling big data. Various researchers have proposed efficient solutions for compression, but very few exist for pattern matching over compressed text. Considering the future trend, where data size is increasing exponentially day by day, CPM has become a desirable task. This paper presents a critical review of recent techniques for compressed pattern matching. The covered techniques include word-based Huffman codes, word-based tagged codes, and wavelet-tree-based indexing. We present a comparative analysis of all the techniques mentioned above and highlight their advantages and disadvantages.

  1. 30 CFR 57.13020 - Use of compressed air.

    Science.gov (United States)

    2010-07-01

30 CFR § 57.13020 (Mineral Resources, Mine Safety and Health, Safety and Health Standards - Underground Metal and Nonmetal Mines, Compressed Air and Boilers), Use of compressed air: At no time shall compressed air be directed toward a...

  2. Relationship between the edgewise compression strength of ...

    African Journals Online (AJOL)

    The results of this study were used to determine the linear regression constants in the Maltenfort model by correlating the measured board edgewise compression strength (ECT) with the predicted strength, using the paper components' compression strengths, measured with the short-span compression test (SCT) and the ...

  3. ROI-based DICOM image compression for telemedicine

    Indian Academy of Sciences (India)

ground and reconstruct the image portions losslessly. The compressed image can ... If the image is compressed by 8:1 compression without any perceptual distortion, the ... The Integer Wavelet Transform (IWT) is used for lossless processing.

  4. Quasi-isentropic compression using compressed water flow generated by underwater electrical explosion of a wire array

    Science.gov (United States)

    Gurovich, V.; Virozub, A.; Rososhek, A.; Bland, S.; Spielman, R. B.; Krasik, Ya. E.

    2018-05-01

A major experimental research area in material equation-of-state today involves the use of off-Hugoniot measurements rather than shock experiments that give only Hugoniot data. There is a wide range of applications using quasi-isentropic compression of matter including the direct measurement of the complete isentrope of materials in a single experiment and minimizing the heating of flyer plates for high-velocity shock measurements. We propose a novel approach to generating quasi-isentropic compression of matter. Using analytical modeling and hydrodynamic simulations, we show that a working fluid composed of compressed water, generated by an underwater electrical explosion of a planar wire array, might be used to efficiently drive the quasi-isentropic compression of a copper target to pressures ~2 × 10¹¹ Pa without any complex target designs.

  5. Mistakes and Pitfalls Associated with Two-Point Compression Ultrasound for Deep Vein Thrombosis

    Directory of Open Access Journals (Sweden)

    Tony Zitek, MD

    2016-03-01

    most common mistake made by the residents was inadequate visualization of the popliteal vein. Conclusion: Two-point compression ultrasound does not identify isolated SFV thrombi, which reduces its sensitivity. Moreover, this technique may be more difficult than previously reported, in part because novice ultrasonographers have difficulty properly assessing the popliteal vein.

  6. Eccentric crank variable compression ratio mechanism

    Science.gov (United States)

    Lawrence, Keith Edward [Kobe, JP; Moser, William Elliott [Peoria, IL; Roozenboom, Stephan Donald [Washington, IL; Knox, Kevin Jay [Peoria, IL

    2008-05-13

    A variable compression ratio mechanism for an internal combustion engine that has an engine block and a crankshaft is disclosed. The variable compression ratio mechanism has a plurality of eccentric disks configured to support the crankshaft. Each of the plurality of eccentric disks has at least one cylindrical portion annularly surrounded by the engine block. The variable compression ratio mechanism also has at least one actuator configured to rotate the plurality of eccentric disks.

  7. How Wage Compression Affects Job Turnover

    OpenAIRE

    Heyman, Fredrik

    2008-01-01

    I use Swedish establishment-level panel data to test Bertola and Rogerson’s (1997) hypothesis of a positive relation between the degree of wage compression and job reallocation. Results indicate that the effect of wage compression on job turnover is positive and significant in the manufacturing sector. The wage compression effect is stronger on job destruction than on job creation, consistent with downward wage rigidity. Further results include a strong positive relationship between the fract...

  8. CoGI: Towards Compressing Genomes as an Image.

    Science.gov (United States)

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transferring. It is desirable to compress data to reduce storage and transferring cost, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms / tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors GReEn and RLZ-opt in both compression ratio and compression efficiency. It also achieves comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM--one state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip--a general-purpose and widely-used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
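    A minimal sketch of the transform step described (nucleotide sequence to a two-dimensional binary image); the rectangular-partition coder itself is not reproduced here, so a generic bit-packer plus zlib stands in, and the 2-bits-per-base mapping is an assumption.

```python
import random
import zlib

import numpy as np

BASE_BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}   # assumed 2-bit code

def genome_to_bitmap(seq, width=512):
    """Map a nucleotide string to a 2-D binary image, 2*width bits per row."""
    bits = np.fromiter((b for base in seq for b in BASE_BITS[base]), dtype=np.uint8)
    pad = (-len(bits)) % (2 * width)
    bits = np.pad(bits, (0, pad))
    return bits.reshape(-1, 2 * width)

random.seed(0)
seq = "".join(random.choice("ACGT") for _ in range(200_000))

img = genome_to_bitmap(seq)
packed = np.packbits(img, axis=1)             # 8 image pixels per byte
blob = zlib.compress(packed.tobytes())        # stand-in for the rectangular partition coder
print(f"{len(seq)} bases -> bitmap {img.shape} -> {len(blob)} bytes")
```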

  9. 30 CFR 56.13020 - Use of compressed air.

    Science.gov (United States)

    2010-07-01

30 CFR § 56.13020 (Mineral Resources, Mine Safety and Health, Safety and Health Standards - Surface Metal and Nonmetal Mines, Compressed Air and Boilers), Use of compressed air: At no time shall compressed air be directed toward a person...

  10. Performance of a Discrete Wavelet Transform for Compressing Plasma Count Data and its Application to the Fast Plasma Investigation on NASA's Magnetospheric Multiscale Mission

    Science.gov (United States)

Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.

    2015-01-01

    Plasma measurements in space are becoming increasingly faster, higher resolution, and distributed over multiple instruments. As raw data generation rates can exceed available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1, however future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used effectively to compress count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources is examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error, the latter indicates that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
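
    The sketch below illustrates the general wavelet idea in the abstract with a plain one-level Haar transform on a synthetic count spectrum, zeroing small detail coefficients as the lossy step; the flight implementation uses the CCSDS DWT/BPE standard in an ASIC and is far more elaborate, so everything here is only an assumed, simplified stand-in.

```python
# Minimal sketch of wavelet-based compression of count data (assumed Haar
# transform; the FPI flight ASIC implements the CCSDS DWT/BPE standard,
# which is considerably more involved).
import numpy as np

def haar_forward(x: np.ndarray) -> np.ndarray:
    """One level of the orthonormal Haar transform; len(x) must be even."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # smooth (approximation) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return np.concatenate([s, d])

def haar_inverse(c: np.ndarray) -> np.ndarray:
    half = len(c) // 2
    s, d = c[:half], c[half:]
    x = np.empty(len(c))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    counts = rng.poisson(lam=20.0, size=512).astype(float)   # synthetic count spectrum
    coeffs = haar_forward(counts)
    details = coeffs[len(coeffs) // 2:]       # view into the detail half
    details[np.abs(details) < 2.0] = 0.0      # drop small detail coefficients (lossy step)
    recon = haar_inverse(coeffs)
    err = np.max(np.abs(recon - counts))
    print(f"nonzero coefficients: {np.count_nonzero(coeffs)}/{len(coeffs)}, max error: {err:.3f}")
```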

  11. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding algorithm. By combining an adaptive probability model with predictive coding, the algorithm increases the compression rate while preserving the quality of the decoded image. An adaptive model for each encoded image block dynamically estimates that block's symbol probabilities, and the decoded image block can accurately recover the encoded image from the code book information. Adopting adaptive arithmetic coding for image compression greatly improves the compression rate. The results show that it is an effective compression technology. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)
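
    A minimal sketch of the two ingredients named above, predictive coding plus an adaptive probability model, is given below; rather than implementing a full arithmetic coder, it reports the ideal code length that such a coder would approach, and the smooth test signal is an assumption for illustration.

```python
# Sketch of the two ingredients described above: predictive coding plus an
# adaptive probability model. Instead of a full arithmetic coder, the ideal
# code length -sum(log2 p) is reported, which an arithmetic coder approaches.
import math
from collections import defaultdict

import numpy as np

def left_predict_residuals(row: np.ndarray) -> np.ndarray:
    """Predict each pixel from its left neighbour and return the residuals."""
    pred = np.concatenate(([0], row[:-1]))
    return (row.astype(int) - pred.astype(int)) % 256   # wrap residuals into 0..255

def adaptive_code_length(symbols: np.ndarray, alphabet: int = 256) -> float:
    """Ideal code length (bits) under a Laplace-smoothed adaptive model."""
    counts = defaultdict(lambda: 1)          # start every symbol with count 1
    total = alphabet
    bits = 0.0
    for s in symbols:
        bits += -math.log2(counts[int(s)] / total)
        counts[int(s)] += 1                  # update the model after coding the symbol
        total += 1
    return bits

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    row = np.cumsum(rng.integers(-2, 3, size=4096)).astype(np.uint8)  # smooth toy signal
    raw_bits = 8 * len(row)
    coded_bits = adaptive_code_length(left_predict_residuals(row))
    print(f"raw: {raw_bits} bits, adaptive+predictive (ideal): {coded_bits:.0f} bits")
```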

  12. Offshore compression system design for low cost and high reliability

    Energy Technology Data Exchange (ETDEWEB)

    Castro, Carlos J. Rocha de O.; Carrijo Neto, Antonio Dias; Cordeiro, Alexandre Franca [Chemtech Engineering Services and Software Ltd., Rio de Janeiro, RJ (Brazil). Special Projects Div.], Emails: antonio.carrijo@chemtech.com.br, carlos.rocha@chemtech.com.br, alexandre.cordeiro@chemtech.com.br

    2010-07-01

    In offshore oil fields, the oil streams coming from the wells usually contain significant amounts of gas. This gas is separated at low pressure and must be compressed to the export pipeline pressure, usually a high pressure, to reduce the required pipeline diameter. In the past, these gases were flared, but there is now increasing pressure to improve the energy efficiency of oil rigs and to make use of this gaseous fraction. The most expensive equipment in this kind of plant is the compression and power generation systems, the second being a strong function of the first because the compressors are the most power-consuming equipment. For this reason, optimizing the compression system in terms of efficiency and cost is decisive for plant profit. Plant availability also has a strong influence on profit, especially in gas fields where the products have a relatively low added value compared to oil. The third design variable of the compression system therefore becomes reliability: the higher the reliability, the larger the plant production. The main way to improve the reliability of a compression system is to use multiple compression trains in parallel, in a 2x50% or 3x50% configuration with one train on stand-by. Such configurations have advantages and disadvantages, but their main side effect is increased cost. This is common offshore practice, yet it does not always significantly improve plant availability, depending on the upstream process system. A series arrangement and a critical evaluation of the overall system can in some cases provide a cheaper system with equal or better performance. This paper presents a case study of a procedure to evaluate a compression system design that improves reliability without an extreme cost increase, balancing the number of equipment items, the series or parallel arrangement, and the driver selection. Two case studies will be

  13. Modeling the mechanical and compression properties of polyamide/elastane knitted fabrics used in compression sportswear

    NARCIS (Netherlands)

    Maqsood, Muhammad

    2016-01-01

    A compression sportswear fabric should have excellent stretch and recovery properties in order to improve the performance of the sportsman. The objective of this study was to investigate the effect of elastane linear density and loop length on the stretch, recovery, and compression properties of the

  14. Equation of state of laser-shocked compressed iron; Equation d'etat du fer comprime par choc laser

    Energy Technology Data Exchange (ETDEWEB)

    Huser, G

    2004-01-01

    This thesis belongs to the field of equation-of-state studies of highly compressed materials. In particular, it focuses on the case of laser shock-compressed iron. The work aims at reaching the conditions of the Earth's core, which comprises a solid inner core and a liquid outer core. Understanding the phenomena governing the core's thermodynamics and the geodynamic process requires knowledge of the locus of the iron melting line around the solid-liquid interface at 3.3 Mbar. Several experiments were performed to that end. First, an absolute measurement of the iron Hugoniot was obtained. This was followed by a study of partially released states of iron into a window material, lithium fluoride (LiF). This configuration gives direct access to the optical properties of compressed iron, such as reflectivity and self-emission. The interface velocity measurement is dominated by the optical properties of compressed LiF and is used as a pressure gauge. Using a dual-wavelength reflectivity diagnostic, the electrical conductivity of compressed iron was estimated and found to be in good agreement with previous results from the geophysics literature. The self-emission diagnostic was used to measure the temperature of partially released iron and revealed a solid-liquid phase transition at Mbar pressures. (author)

  15. Compressive Online Robust Principal Component Analysis with Multiple Prior Information

    DEFF Research Database (Denmark)

    Van Luong, Huynh; Deligiannis, Nikos; Seiler, Jürgen

    -rank components. Unlike conventional batch RPCA, which processes all the data directly, our method considers a small set of measurements taken per data vector (frame). Moreover, our method incorporates multiple prior information signals, namely previously reconstructed frames, to improve the separation ... and thereafter updates the prior information for the next frame. Using experiments on synthetic data, we evaluate the separation performance of the proposed algorithm. In addition, we apply the proposed algorithm to online video foreground and background separation from compressive measurements. The results show...

  16. Task-oriented lossy compression of magnetic resonance images

    Science.gov (United States)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  17. Light-weight reference-based compression of FASTQ data.

    Science.gov (United States)

    Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan

    2015-06-09

    The exponential growth of next generation sequencing (NGS) data has posed big challenges to data storage, management and archive. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to the ones not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm namely LW-FQZip to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which the redundancy information are identified and eliminated independently. Particularly, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to fast map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with some general purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201. This is comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression. It contributes to the state of art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
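
    The sketch below illustrates two of the stream-specific steps described above, incremental encoding of metadata lines and run-length encoding of quality strings, followed by a general-purpose LZMA pass; the read-mapping component of LW-FQZip is omitted and the example headers and qualities are made up for illustration.

```python
# Illustrative sketch of two of the stream-specific steps described above:
# incremental encoding of FASTQ metadata lines and run-length encoding of
# quality strings, followed by a general-purpose LZMA pass. This is not
# LW-FQZip itself; the read-mapping component is omitted.
import lzma

def incremental_encode(headers):
    """Store each header as (shared-prefix length, suffix) relative to the previous one."""
    prev = ""
    out = []
    for h in headers:
        common = 0
        for a, b in zip(prev, h):
            if a != b:
                break
            common += 1
        out.append(f"{common}|{h[common:]}")
        prev = h
    return "\n".join(out)

def run_length_encode(qual: str) -> str:
    """Encode runs of identical quality characters as <char><run length>."""
    if not qual:
        return ""
    out, run_char, run_len = [], qual[0], 1
    for c in qual[1:]:
        if c == run_char:
            run_len += 1
        else:
            out.append(f"{run_char}{run_len}")
            run_char, run_len = c, 1
    out.append(f"{run_char}{run_len}")
    return "".join(out)

if __name__ == "__main__":
    headers = [f"@TOYRUN.{i} lane7:tile1:x817:y{345 + i}" for i in range(4)]   # made-up headers
    quals = ["IIIIIIIIIIIIIHHHHHFFFFF", "IIIIIIIIIIIIHHHHHHFFFFF"]             # made-up qualities
    meta_stream = incremental_encode(headers)
    qual_stream = ";".join(run_length_encode(q) for q in quals)
    packed = lzma.compress((meta_stream + "\n" + qual_stream).encode())
    print(meta_stream)
    print(qual_stream)
    print(f"packed size: {len(packed)} bytes")
```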

  18. Physics-Based Modeling of Compressible Turbulence

    Science.gov (United States)

    2016-11-07

    AFRL-AFOSR-VA-TR-2016-0345. Final report by Parviz Moin, Leland Stanford Junior University, CA, on the AFOSR project (FA9550-11-1-0111) entitled "Physics based modeling of compressible turbulence." The period of performance began June 15, 2011; the report is dated 09/13/2016.

  19. Comparison of compression properties of stretchable knitted fabrics and bi-stretch woven fabrics for compression garments

    NARCIS (Netherlands)

    Maqsood, Muhammad

    2017-01-01

    Stretchable fabrics have diverse applications ranging from casual apparel to performance sportswear and compression therapy. Compression therapy is the universally accepted treatment for the management of hypertrophic scarring after severe burns. Mostly stretchable knitted fabrics are used in

  20. Intelligent condition monitoring method for bearing faults from highly compressed measurements using sparse over-complete features

    Science.gov (United States)

    Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.

    2018-01-01

    Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a rolling element bearing. Among them, methods based on compressed sensing (CS) have received some attention recently due to their ability to sample below the Nyquist rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS-based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse-autoencoder-based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on a sparse autoencoder is used to learn over-complete sparse representations of these compressed datasets. Finally, the fault classification is achieved in two stages: pre-training classification based on a stacked autoencoder and a softmax regression layer forms the deep net stage (the first stage), and re-training classification based on the backpropagation (BP) algorithm forms the fine-tuning stage (the second stage). The experimental results show that the proposed method is able to achieve high levels of accuracy even with extremely compressed measurements compared with the existing techniques.
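
    The sketch below shows only the compressed-sensing measurement step assumed by this kind of pipeline: a vibration frame is reduced to a short measurement vector by a random Gaussian matrix, which is what the sparse-autoencoder/DNN classifier described above would be trained on; the toy fault signal and the 12 kHz sampling rate are assumptions, and the classifier itself is not reproduced.

```python
# Sketch of the compressed-sensing measurement step only: a vibration frame
# x is reduced to y = Phi @ x with a random Gaussian measurement matrix.
# The sparse-autoencoder / deep-network classifier described above would be
# trained on vectors like y; it is not reproduced here.
import numpy as np

def compress_frame(x: np.ndarray, ratio: float, rng: np.random.Generator) -> np.ndarray:
    n = x.size
    m = max(1, int(round(ratio * n)))                       # number of compressed measurements
    phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))    # Gaussian measurement matrix
    return phi @ x

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    t = np.arange(2048) / 12_000.0               # assumed 12 kHz sampling, toy frame
    # Toy fault-like signal: a high-frequency tone gated into periodic bursts, plus noise.
    x = np.sin(2 * np.pi * 1_800 * t) * (np.sin(2 * np.pi * 105 * t) > 0.99) \
        + 0.05 * rng.normal(size=t.size)
    y = compress_frame(x, ratio=0.10, rng=rng)   # keep 10% of the Nyquist-rate samples
    print(f"frame length {x.size} -> {y.size} compressed measurements")
```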

  1. Compressed Air/Vacuum Transportation Techniques

    Science.gov (United States)

    Guha, Shyamal

    2011-03-01

    The general theory of compressed air/vacuum transportation will be presented. In this transportation, a vehicle (such as an automobile or a rail car) is powered either by compressed air or by air at near-vacuum pressure. Four versions of such transportation are feasible. In all versions, a "c-shaped" plastic or ceramic pipe lies buried a few inches under the ground surface. This pipe carries compressed air or air at near-vacuum pressure. In type I transportation, a vehicle draws compressed air (or vacuum) from this buried pipe. Using a turbine or a reciprocating air cylinder, mechanical power is generated from the compressed air (or from the vacuum). This mechanical power, transferred to the wheels of an automobile (or a rail car), drives the vehicle. In type II-IV transportation techniques, a horizontal force is generated inside the plastic (or ceramic) pipe. A set of vertical and horizontal steel bars is used to transmit this force to the automobile on the road (or to a rail car on the rail track). The proposed transportation system has the following merits: it is virtually accident free, highly energy efficient, and pollution free, and it will not contribute to carbon dioxide emissions. Some developmental work on this transportation will be needed before it can be used by the traveling public. The entire transportation system could be computer controlled.

  2. Wave energy devices with compressible volumes.

    Science.gov (United States)

    Kurniawan, Adi; Greaves, Deborah; Chaplin, John

    2014-12-08

    We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed, axisymmetric, configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m³ and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s.

  3. Entire Sound Representations Are Time-Compressed in Sensory Memory: Evidence from MMN.

    Science.gov (United States)

    Tamakoshi, Seiji; Minoura, Nanako; Katayama, Jun'ichi; Yagi, Akihiro

    2016-01-01

    In order to examine the encoding of partial silence included in a sound stimulus in neural representation, time flow of the sound representations was investigated using mismatch negativity (MMN), an ERP component that reflects neural representation in auditory sensory memory. Previous work suggested that time flow of auditory stimuli is compressed in neural representations. The stimuli used were a full-stimulus of 170 ms duration, an early-gap stimulus with silence for a 20-50 ms segment (i.e., an omitted segment), and a late-gap stimulus with an omitted segment of 110-140 ms. Peak MMNm latencies from oddball sequences of these stimuli, with a 500 ms SOA, did not reflect time point of the physical gap, suggesting that temporal information can be compressed in sensory memory. However, it was not clear whether the whole stimulus duration or only the omitted segment duration is compressed. Thus, stimuli were used in which the gap was replaced by a tone segment with a 1/4 sound pressure level (filled), as well as the gap stimuli. Combinations of full-stimuli and one of four gapped or filled stimuli (i.e., early gap, late gap, early filled, and late filled) were presented in an oddball sequence (85 vs. 15%). If compression occurs only for the gap duration, MMN latency for filled stimuli should show a different pattern from those for gap stimuli. MMN latencies for the filled conditions showed the same pattern as those for the gap conditions, indicating that the whole stimulus duration rather than only gap duration is compressed in sensory memory neural representation. These results suggest that temporal aspects of silence are encoded in the same manner as physical sound.

  4. Isostatic compression of buffer blocks. Middle scale

    International Nuclear Information System (INIS)

    Ritola, J.; Pyy, E.

    2012-01-01

    Manufacturing of buffer components using isostatic compression method has been studied in small scale in 2008 (Laaksonen 2010). These tests included manufacturing of buffer blocks using different bentonite materials and different compression pressures. Isostatic mould technology was also tested, along with different methods to fill the mould, such as vibration and partial vacuum, as well as a stepwise compression of the blocks. The development of manufacturing techniques has continued with small-scale (30 %) blocks (diameter 600 mm) in 2009. This was done in a separate project: Isostatic compression, manufacturing and testing of small scale (D = 600 mm) buffer blocks. The research on the isostatic compression method continued in 2010 in a project aimed to test and examine the isostatic manufacturing process of buffer blocks at 70 % scale (block diameter 1200 to 1300 mm), and the aim was to continue in 2011 with full-scale blocks (diameter 1700 mm). A total of nine bentonite blocks were manufactured at 70 % scale, of which four were ring-shaped and the rest were cylindrical. It is currently not possible to manufacture full-scale blocks, because there is no sufficiently large isostatic press available. However, such a compression unit is expected to be possible to use in the near future. The test results of bentonite blocks, produced with an isostatic pressing method at different presses and at different sizes, suggest that the technical characteristics, for example bulk density and strength values, are somewhat independent of the size of the block, and that the blocks have fairly homogenous characteristics. Water content and compression pressure are the two most important properties determining the characteristics of the compressed blocks. By adjusting these two properties it is fairly easy to produce blocks at a desired density. The commonly used compression pressure in the manufacturing of bentonite blocks is 100 MPa, which compresses bentonite to approximately

  5. Isostatic compression of buffer blocks. Middle scale

    Energy Technology Data Exchange (ETDEWEB)

    Ritola, J.; Pyy, E. [VTT Technical Research Centre of Finland, Espoo (Finland)

    2012-01-15

    Manufacturing of buffer components using isostatic compression method has been studied in small scale in 2008 (Laaksonen 2010). These tests included manufacturing of buffer blocks using different bentonite materials and different compression pressures. Isostatic mould technology was also tested, along with different methods to fill the mould, such as vibration and partial vacuum, as well as a stepwise compression of the blocks. The development of manufacturing techniques has continued with small-scale (30 %) blocks (diameter 600 mm) in 2009. This was done in a separate project: Isostatic compression, manufacturing and testing of small scale (D = 600 mm) buffer blocks. The research on the isostatic compression method continued in 2010 in a project aimed to test and examine the isostatic manufacturing process of buffer blocks at 70 % scale (block diameter 1200 to 1300 mm), and the aim was to continue in 2011 with full-scale blocks (diameter 1700 mm). A total of nine bentonite blocks were manufactured at 70 % scale, of which four were ring-shaped and the rest were cylindrical. It is currently not possible to manufacture full-scale blocks, because there is no sufficiently large isostatic press available. However, such a compression unit is expected to be possible to use in the near future. The test results of bentonite blocks, produced with an isostatic pressing method at different presses and at different sizes, suggest that the technical characteristics, for example bulk density and strength values, are somewhat independent of the size of the block, and that the blocks have fairly homogenous characteristics. Water content and compression pressure are the two most important properties determining the characteristics of the compressed blocks. By adjusting these two properties it is fairly easy to produce blocks at a desired density. The commonly used compression pressure in the manufacturing of bentonite blocks is 100 MPa, which compresses bentonite to approximately

  6. Fast lossless compression via cascading Bloom filters.

    Science.gov (United States)

    Rozov, Roye; Shamir, Ron; Halperin, Eran

    2014-01-01

    Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space
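
    A minimal Bloom filter sketch is given below to illustrate the encode/decode idea described above: reads are hashed into the filter, and decoding queries the same filter with read-length windows of the reference; the cascading of filters and the handling of sequencing errors in BARCODE are not reproduced.

```python
# Minimal Bloom filter sketch (insert reads, query reference subsequences).
# BARCODE cascades several such filters and adds further steps for errors
# and false positives; none of that is reproduced here.
import hashlib

class BloomFilter:
    def __init__(self, n_bits: int, n_hashes: int):
        self.n_bits = n_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(n_bits // 8 + 1)

    def _positions(self, item: str):
        for i in range(self.n_hashes):
            digest = hashlib.blake2b(item.encode(), salt=i.to_bytes(8, "little")).digest()
            yield int.from_bytes(digest[:8], "little") % self.n_bits

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

if __name__ == "__main__":
    reference = "ACGTACGTTAGCCGATCGATCGGATCCA" * 10
    read_len = 12
    reads = [reference[i:i + read_len] for i in range(0, 100, 7)]   # toy "sequenced" reads
    bf = BloomFilter(n_bits=1 << 16, n_hashes=4)
    for r in reads:
        bf.add(r)
    # Decoding idea: slide over the reference and ask the filter which windows were stored.
    recovered = [i for i in range(len(reference) - read_len + 1)
                 if reference[i:i + read_len] in bf]
    print(f"stored {len(reads)} reads; {len(recovered)} reference windows hit the filter")
```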

  7. An unusual case: right proximal ureteral compression by the ovarian vein and distal ureteral compression by the external iliac vein

    Directory of Open Access Journals (Sweden)

    Halil Ibrahim Serin

    2015-12-01

    Full Text Available A 32-year-old woman presented to the emergency room of Bozok University Research Hospital with right renal colic. Multidetector computed tomography (MDCT) showed compression of the proximal ureter by the right ovarian vein and compression of the right distal ureter by the right external iliac vein. To the best of our knowledge, right proximal ureteral compression by the ovarian vein together with distal ureteral compression by the external iliac vein has not been reported in the literature. Ovarian vein and external iliac vein compression should be considered in patients presenting to the emergency room with renal colic or low back pain and a dilated collecting system.

  8. Entropy Stable Staggered Grid Spectral Collocation for the Burgers' and Compressible Navier-Stokes Equations

    Science.gov (United States)

    Carpenter, Mark H.; Parsani, Matteo; Fisher, Travis C.; Nielsen, Eric J.

    2015-01-01

    Staggered grid, entropy stable discontinuous spectral collocation operators of any order are developed for Burgers' and the compressible Navier-Stokes equations on unstructured hexahedral elements. This generalization of previous entropy stable spectral collocation work [1, 2], extends the applicable set of points from tensor product, Legendre-Gauss-Lobatto (LGL) to a combination of tensor product Legendre-Gauss (LG) and LGL points. The new semi-discrete operators discretely conserve mass, momentum, energy and satisfy a mathematical entropy inequality for both Burgers' and the compressible Navier-Stokes equations in three spatial dimensions. They are valid for smooth as well as discontinuous flows. The staggered LG and conventional LGL point formulations are compared on several challenging test problems. The staggered LG operators are significantly more accurate, although more costly to implement. The LG and LGL operators exhibit similar robustness, as is demonstrated using test problems known to be problematic for operators that lack a nonlinearly stability proof for the compressible Navier-Stokes equations (e.g., discontinuous Galerkin, spectral difference, or flux reconstruction operators).

  9. Compressibility-aware media retargeting with structure preserving.

    Science.gov (United States)

    Wang, Shu-Fan; Lai, Shang-Hong

    2011-03-01

    A number of algorithms have been proposed for intelligent image/video retargeting that retain the image content as much as possible. However, they usually suffer from artifacts in the results, such as ridges or structure twist. In this paper, we present a structure-preserving media retargeting technique that preserves the content and image structure as well as possible. Different from previous pixel- or grid-based methods, we estimate the image content saliency from the structure of the content. A block structure energy is introduced with a top-down strategy to constrain the image structure inside each block to deform uniformly in either the x or y direction. However, the flexibility for retargeting differs considerably from image to image. To cope with this problem, we propose a compressibility assessment scheme for media retargeting that combines the entropies of the image gradient magnitude and orientation distributions. Thus, the resized media is produced so as to preserve the image content and structure as well as possible. Our experiments demonstrate that the proposed method provides resized images/videos with better preservation of content and structure than previous methods.
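
    The sketch below illustrates a gradient-entropy compressibility score in the spirit of the assessment scheme described above: it histograms the image gradient magnitudes and orientations and sums their entropies; the exact combination used by the authors may differ, so the unweighted sum is an assumption.

```python
# Sketch of a gradient-entropy compressibility score: histogram the gradient
# magnitudes and orientations of an image and sum their entropies. The exact
# combination used in the paper may differ; an unweighted sum is assumed here.
import numpy as np

def entropy(hist: np.ndarray) -> float:
    p = hist.astype(float)
    p = p[p > 0]
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def compressibility_score(img: np.ndarray, bins: int = 64) -> float:
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)
    h_mag, _ = np.histogram(mag, bins=bins)
    h_ori, _ = np.histogram(ori, bins=bins, range=(-np.pi, np.pi))
    return entropy(h_mag) + entropy(h_ori)    # higher score -> less room to retarget

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    flat = np.tile(np.linspace(0, 1, 256), (256, 1))        # smooth, easy to retarget
    busy = rng.random((256, 256))                           # textured, hard to retarget
    print(f"smooth image score: {compressibility_score(flat):.2f}")
    print(f"textured image score: {compressibility_score(busy):.2f}")
```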

  10. JHelioviewer: Exploring Petabytes of Solar Images

    Science.gov (United States)

    Mueller, Daniel; Fleck, Bernhard; Dimitoglou, George; Garcia Ortiz, Juan Pablo; Schmidt, Ludwig; Hughitt, Keith; Ireland, Jack

    Space missions generate an ever-growing amount of data, as impressively highlighted by the Solar Dynamics Observatory's (SDO) expected return of 1.4 TByte/day. In order to fully exploit their data, scientists need to be able to browse and visualize many different data products spanning a large range of physical length and time scales. So far, the tools available to the scientific community either require downloading all potentially relevant data sets beforehand in their entirety or provide only movies with a fixed resolution and cadence. For SDO, the former approach is prohibitive due to the sheer data volume, while the latter does not do justice to the high resolution and cadence of the images. To address this challenge, we have developed JHelioviewer, a JPEG 2000-based visualization and discovery software for solar image data. JHelioviewer makes the vast amount of SDO images available to the worldwide community, lets users browse more than 14 years' worth of images from the Solar and Heliospheric Observatory (SOHO) and facilitates browsing and analysis of complex time-dependent data sets from multiple sources in general. The user interface for JHelioviewer is a multi-platform Java client that communicates with a remote server via the JPEG 2000 interactive protocol JPIP. The random code stream access of JPIP minimizes data transfer and can encapsulate metadata as well as multiple image channels in one data stream. This presentation will illustrate the features of JHelioviewer and highlight the advantages of JPEG 2000 as a new data compression standard.

  11. Compressed Air Production Using Vehicle Suspension

    OpenAIRE

    Ninad Arun Malpure; Sanket Nandlal Bhansali

    2015-01-01

    Abstract Generally, compressed air is produced using different types of air compressors, which consume a lot of electric energy and are noisy. In this paper an innovative idea is put forth for the production of compressed air using the movement of the vehicle suspension, which is normally wasted. The conversion of this force energy into compressed air is carried out by a mechanism consisting of the vehicle suspension system, a hydraulic cylinder, a non-return valve, an air compressor and an air receiver. We are co...

  12. A privacy-preserving solution for compressed storage and selective retrieval of genomic data.

    Science.gov (United States)

    Huang, Zhicong; Ayday, Erman; Lin, Huang; Aiyar, Raeka S; Molyneaux, Adam; Xu, Zhenyu; Fellay, Jacques; Steinmetz, Lars M; Hubaux, Jean-Pierre

    2016-12-01

    In clinical genomics, the continuous evolution of bioinformatic algorithms and sequencing platforms makes it beneficial to store patients' complete aligned genomic data in addition to variant calls relative to a reference sequence. Due to the large size of human genome sequence data files (varying from 30 GB to 200 GB depending on coverage), two major challenges facing genomics laboratories are the costs of storage and the efficiency of the initial data processing. In addition, privacy of genomic data is becoming an increasingly serious concern, yet no standard data storage solutions exist that enable compression, encryption, and selective retrieval. Here we present a privacy-preserving solution named SECRAM (Selective retrieval on Encrypted and Compressed Reference-oriented Alignment Map) for the secure storage of compressed aligned genomic data. Our solution enables selective retrieval of encrypted data and improves the efficiency of downstream analysis (e.g., variant calling). Compared with BAM, the de facto standard for storing aligned genomic data, SECRAM uses 18% less storage. Compared with CRAM, one of the most compressed nonencrypted formats (using 34% less storage than BAM), SECRAM maintains efficient compression and downstream data processing, while allowing for unprecedented levels of security in genomic data storage. Compared with previous work, the distinguishing features of SECRAM are that (1) it is position-based instead of read-based, and (2) it allows random querying of a subregion from a BAM-like file in an encrypted form. Our method thus offers a space-saving, privacy-preserving, and effective solution for the storage of clinical genomic data. © 2016 Huang et al.; Published by Cold Spring Harbor Laboratory Press.
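
    The toy sketch below illustrates the position-based, selectively retrievable storage idea in general terms: records are grouped into fixed position bins, and each bin is compressed and encrypted independently so that a query touches only the bins overlapping the requested range; zlib and Fernet are stand-ins rather than SECRAM's actual compression and encryption scheme, and the bin size and record format are assumptions.

```python
# Toy sketch of position-based, selectively retrievable storage: records are
# split into fixed genomic position bins, and each bin is compressed and
# encrypted on its own, so one bin can be fetched and decrypted without
# touching the others. zlib and Fernet are stand-ins, not SECRAM's actual
# compression or encryption scheme.
import zlib

from cryptography.fernet import Fernet   # pip install cryptography

BIN_SIZE = 1_000            # assumed positions per encrypted bin

def build_store(per_position_records, key: bytes):
    f = Fernet(key)
    bins = {}
    for pos, record in per_position_records:
        bins.setdefault(pos // BIN_SIZE, []).append(f"{pos}\t{record}")
    return {b: f.encrypt(zlib.compress("\n".join(lines).encode()))
            for b, lines in bins.items()}

def query(store, key: bytes, start: int, end: int):
    f = Fernet(key)
    hits = []
    for b in range(start // BIN_SIZE, end // BIN_SIZE + 1):
        if b not in store:
            continue
        for line in zlib.decompress(f.decrypt(store[b])).decode().splitlines():
            pos, record = line.split("\t", 1)
            if start <= int(pos) < end:
                hits.append((int(pos), record))
    return hits

if __name__ == "__main__":
    key = Fernet.generate_key()
    records = [(p, f"cov={20 + p % 3}") for p in range(0, 5_000, 10)]   # toy coverage data
    store = build_store(records, key)
    print(query(store, key, 2_490, 2_530))   # decrypts only the bins overlapping the range
```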

  13. Spectral Interpolation on 3 x 3 Stencils for Prediction and Compression

    Energy Technology Data Exchange (ETDEWEB)

    Ibarria, L; Lindstrom, P; Rossignac, J

    2007-06-25

    Many scientific, imaging, and geospatial applications produce large high-precision scalar fields sampled on a regular grid. Lossless compression of such data is commonly done using predictive coding, in which weighted combinations of previously coded samples known to both encoder and decoder are used to predict subsequent nearby samples. In hierarchical, incremental, or selective transmission, the spatial pattern of the known neighbors is often irregular and varies from one sample to the next, which precludes prediction based on a single stencil and fixed set of weights. To handle such situations and make the best use of available neighboring samples, we propose a local spectral predictor that offers optimal prediction by tailoring the weights to each configuration of known nearby samples. These weights may be precomputed and stored in a small lookup table. We show through several applications that predictive coding using our spectral predictor improves compression for various sources of high-precision data.
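
    The sketch below illustrates the lookup-table idea of configuration-dependent prediction weights on a causal 3x3 stencil; instead of the paper's spectral construction, the weights are simply fitted by least squares on a synthetic training image, so the fitting procedure and the chosen neighbour configurations are assumptions.

```python
# Sketch of configuration-dependent prediction weights on a 3x3 stencil.
# The paper derives the weights spectrally; here they are simply fitted by
# least squares on training data, which illustrates the same lookup-table
# idea without reproducing the spectral construction.
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # causal neighbours in a 3x3 stencil

def fit_weights(img: np.ndarray, known_mask: tuple) -> np.ndarray:
    """Least-squares weights for one configuration of known neighbours."""
    rows, targets = [], []
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            rows.append([img[y + dy, x + dx]
                         for (dy, dx), keep in zip(OFFSETS, known_mask) if keep])
            targets.append(img[y, x])
    A, b = np.asarray(rows, float), np.asarray(targets, float)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    yy, xx = np.mgrid[0:128, 0:128]
    img = np.sin(xx / 9.0) + np.cos(yy / 13.0) + 0.01 * rng.normal(size=(128, 128))
    # Precompute a small lookup table: one weight vector per neighbour configuration.
    table = {mask: fit_weights(img, mask)
             for mask in [(1, 1, 1, 1), (0, 1, 0, 1), (1, 0, 1, 0)]}
    for mask, w in table.items():
        print(mask, np.round(w, 3))
```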

  14. Image compression in nuclear medicine

    International Nuclear Information System (INIS)

    Rebelo, M.S.; Furuie, S.S.; Moura, L.

    1992-01-01

    The performance of two methods for image compression in nuclear medicine was evaluated: the precise LZW method and the approximate cosine-transform method. The results show that the approximate method produced images of agreeable quality for visual analysis, with compression rates considerably higher than those of the precise method. (C.G.C.)

  15. Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery

    Directory of Open Access Journals (Sweden)

    Lingjun Liu

    2017-01-01

    Full Text Available This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. With increasing iterations, IST usually over-smooths the solution and converges prematurely. To add back more details, the BAIST method backtracks to the previous noisy image using L2 norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous ones. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. In addition, BAIST uses a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.
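
    For reference, the sketch below implements only the baseline IST iteration (soft thresholding on a gradient step) for recovering a sparse vector from compressive measurements; the backtracking step and the adaptive nonlocal regularizer that distinguish BAIST are not reproduced, and the problem sizes are arbitrary.

```python
# Baseline iterative shrinkage-thresholding (IST) for sparse recovery from
# compressive measurements y = A @ x. BAIST adds a backtracking step toward
# the previous iterate and an adaptive nonlocal regularizer; neither is
# reproduced in this minimal sketch.
import numpy as np

def soft_threshold(v: np.ndarray, tau: float) -> np.ndarray:
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ist(A: np.ndarray, y: np.ndarray, lam: float = 0.05, iters: int = 300) -> np.ndarray:
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, with L the gradient's Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, lam * step)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n, m, k = 256, 96, 8                        # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    y = A @ x_true
    x_hat = ist(A, y)
    print(f"relative error: {np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3e}")
```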

  16. LSP Simulations of the Neutralized Drift Compression Experiment

    CERN Document Server

    Thoma, Carsten H; Gilson, Erik P; Henestroza, Enrique; Roy, Prabir K; Welch, Dale; Yu, Simon

    2005-01-01

    The Neutralized Drift Compression Experiment (NDCX) at Lawrence Berkeley National Laboratory involves the longitudinal compression of a singly-stripped K ion beam with a mean energy of 250 keV in a meter long plasma. We present simulation results of compression of the NDCX beam using the PIC code LSP. The NDCX beam encounters an acceleration gap with a time-dependent voltage that decelerates the front and accelerates the tail of a 500 ns pulse which is to be compressed 110 cm downstream. The simulations model both ideal and experimental voltage waveforms. Results show good longitudinal compression without significant emittance growth.

  17. Rectal perforation by compressed air.

    Science.gov (United States)

    Park, Young Jin

    2017-07-01

    As the use of compressed air in industrial work has increased, so has the risk of associated pneumatic injury from its improper use. However, damage of large intestine caused by compressed air is uncommon. Herein a case of pneumatic rupture of the rectum is described. The patient was admitted to the Emergency Room complaining of abdominal pain and distension. His colleague triggered a compressed air nozzle over his buttock. On arrival, vital signs were stable but physical examination revealed peritoneal irritation and marked distension of the abdomen. Computed tomography showed a large volume of air in the peritoneal cavity and subcutaneous emphysema at the perineum. A rectal perforation was found at laparotomy and the Hartmann procedure was performed.

  18. Comparison of Open-Hole Compression Strength and Compression After Impact Strength on Carbon Fiber/Epoxy Laminates for the Ares I Composite Interstage

    Science.gov (United States)

    Hodge, Andrew J.; Nettles, Alan T.; Jackson, Justin R.

    2011-01-01

    Notched (open hole) composite laminates were tested in compression. The effect on strength of various sizes of through holes was examined. Results were compared to the average stress criterion model. Additionally, laminated sandwich structures were damaged from low-velocity impact with various impact energy levels and different impactor geometries. The compression strength relative to damage size was compared to the notched compression result strength. Open-hole compression strength was found to provide a reasonable bound on compression after impact.
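
    For reference, a common closed form of the average stress criterion for an infinite isotropic plate with a circular hole is evaluated below; the characteristic averaging distance a0 is a fitted material constant, and the value used here is a placeholder rather than one taken from the report.

```python
# Worked example of the Whitney-Nuismer average stress criterion for an
# infinite isotropic plate with a circular hole (stress concentration 3):
#   sigma_N / sigma_0 = 2 (1 - xi) / (2 - xi**2 - xi**4),  xi = R / (R + a0).
# The characteristic distance a0 is a fitted material constant; the 3.8 mm
# used below is only a placeholder, not a value from the report.
def average_stress_ratio(hole_radius_mm: float, a0_mm: float = 3.8) -> float:
    xi = hole_radius_mm / (hole_radius_mm + a0_mm)
    return 2.0 * (1.0 - xi) / (2.0 - xi**2 - xi**4)

if __name__ == "__main__":
    for r in (1.0, 2.5, 5.0, 10.0):   # hole radii in mm
        print(f"R = {r:4.1f} mm  ->  notched/unnotched strength = {average_stress_ratio(r):.2f}")
```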

  19. Shock compression and quasielastic release in tantalum

    International Nuclear Information System (INIS)

    Johnson, J.N.; Hixson, R.S.; Tonks, D.L.; Gray, G.T. III

    1994-01-01

    Previous studies of quasielastic release in shock-loaded FCC metals have shown a strong influence of the defect state on the leading edge, or first observable arrival, of the release wave. This is due to the large density of pinned dislocation segments behind the shock front, their relatively large pinning separation, and a very short response time as determined by the drag coefficient in the shock-compressed state. This effect is entirely equivalent to problems associated with elastic moduli determination using ultrasonic methods. This is particularly true for FCC metals, which have an especially low Peierls stress, or inherent lattice resistance, that has little influence in pinning dislocation segments and inhibiting anelastic deformation. BCC metals, on the other hand, have a large Peierls stress that essentially holds dislocation segments in place at low net applied shear stresses and thus allows fully elastic deformation to occur in the complete absence of anelastic behavior. Shock-compression and release experiments have been performed on tantalum (BCC), with the observation that the leading release disturbance is indeed elastic. This conclusion is established by examination of experimental VISAR records taken at the tantalum/sapphire (window) interface in a symmetric-impact experiment which subjects the sample to a peak longitudinal stress of approximately 7.3 GPa, in comparison with characteristic code calculations. copyright 1994 American Institute of Physics

  20. Mathematical transforms and image compression: A review

    Directory of Open Access Journals (Sweden)

    Satish K. Singh

    2010-07-01

    Full Text Available It is well known that images, often used in a variety of computer and other scientific and engineering applications, are difficult to store and transmit due to their sizes. One possible solution to this problem is to use an efficient digital image compression technique, in which an image is viewed as a matrix and the operations are performed on that matrix. All contemporary digital image compression systems use various mathematical transforms for compression. The compression performance is closely related to the performance of these mathematical transforms in terms of energy compaction and spatial frequency isolation, achieved by exploiting inter-pixel redundancies present in the image data. In this paper, a comprehensive literature survey has been carried out and the pros and cons of various transform-based image compression models are discussed.
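
    The sketch below walks through the basic transform-coding pipeline that such surveys cover: blockwise 2-D DCT, uniform quantization that grows coarser with spatial frequency, and inverse transform for reconstruction; the flat, frequency-scaled quantization rule is an assumption standing in for a standard quantization table.

```python
# Sketch of the transform-coding pipeline surveyed above: blockwise 2-D DCT,
# uniform quantization (coarser for high frequencies), and inverse transform.
# A simple frequency-scaled step size is assumed instead of a standard
# quantization table.
import numpy as np
from scipy.fft import dctn, idctn

BLOCK = 8

def quantize_block(block: np.ndarray, q: float) -> np.ndarray:
    coeffs = dctn(block, norm="ortho")
    u, v = np.meshgrid(np.arange(BLOCK), np.arange(BLOCK), indexing="ij")
    step = q * (1.0 + u + v)                  # coarser steps for higher frequencies
    return np.round(coeffs / step) * step     # quantize, then de-quantize for reconstruction

def transform_code(img: np.ndarray, q: float = 8.0) -> np.ndarray:
    h, w = (d - d % BLOCK for d in img.shape)   # crop to whole blocks
    out = np.empty((h, w))
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            out[y:y + BLOCK, x:x + BLOCK] = idctn(
                quantize_block(img[y:y + BLOCK, x:x + BLOCK].astype(float), q), norm="ortho")
    return out

if __name__ == "__main__":
    yy, xx = np.mgrid[0:64, 0:64]
    img = 128 + 100 * np.sin(xx / 6.0) * np.cos(yy / 9.0)   # smooth toy image
    recon = transform_code(img, q=8.0)
    mse = np.mean((recon - img[:recon.shape[0], :recon.shape[1]]) ** 2)
    print(f"PSNR after blockwise DCT quantization: {10 * np.log10(255**2 / mse):.1f} dB")
```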