WorldWideScience

Sample records for methods lossy compression

  1. StirMark Benchmark: audio watermarking attacks based on lossy compression

    Science.gov (United States)

    Steinebach, Martin; Lang, Andreas; Dittmann, Jana

    2002-04-01

    StirMark Benchmark is a well-known evaluation tool for watermarking robustness, and additional attacks are added to it continuously. To enable application-based evaluation, our paper addresses attacks against audio watermarks based on lossy audio compression algorithms, to be included in the test environment. We discuss the effect of different lossy compression algorithms such as MPEG-2 audio Layer 3, Ogg or VQF on a selection of audio test data. Our focus is on changes to the basic characteristics of the audio data, such as spectrum or average power, and on removal of embedded watermarks. Furthermore, we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms, or (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. The latter method is less complex, as no real psychoacoustic model has to be applied. Our model can be used for audio watermarking evaluation in numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.

  2. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition, without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4~2 dB compared with the current state of the art, while maintaining a low computational complexity.
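
    As a rough illustration of the measurement-quantization ingredient, the sketch below takes random Gaussian CS measurements of a sparse scene and applies a plain uniform quantizer. The matrix, signal, and step size are hypothetical stand-ins; the paper's adaptive acquisition and universal quantization are more elaborate.

    ```python
    # Minimal sketch: CS measurement followed by uniform quantization.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 256, 64                          # signal length, number of CS measurements

    x = np.zeros(n)                         # sparse "scene"
    x[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # sensing matrix
    y = Phi @ x                                      # CS measurements

    step = 0.05                                      # quantizer step (illustrative)
    y_q = step * np.round(y / step)                  # quantized measurements

    print("max quantization error:", np.abs(y - y_q).max())   # bounded by step / 2
    ```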

  3. Task-oriented lossy compression of magnetic resonance images

    Science.gov (United States)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  4. Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Liantao Wu

    2015-08-01

    Reliable data transmission over a lossy communication link is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) can be applied to facilitate efficient sparse signal transmission over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal is reconstructed from the lossy transmission results using a CS-based reconstruction method at the receiving end. The impact of packet length on transmission efficiency under different channel conditions is discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both accuracy of the reconstructed signals and transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressive sensing for efficient sparse signal transmission via lossy links.
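
    A minimal sketch of the transmission model, under the assumption that each CS measurement travels in its own packet: lost packets simply delete rows of the sensing matrix, and a basic orthogonal matching pursuit (OMP) solver, standing in for the paper's CS reconstruction method, recovers the signal from the surviving measurements.

    ```python
    # Minimal sketch: packet loss as random row deletion, recovery via OMP.
    import numpy as np

    def omp(A, y, k):
        """Basic orthogonal matching pursuit for y ~ A @ x with k-sparse x."""
        r, idx = y.copy(), []
        for _ in range(k):
            idx.append(int(np.argmax(np.abs(A.T @ r))))      # most correlated atom
            x_s, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
            r = y - A[:, idx] @ x_s                          # update residual
        x = np.zeros(A.shape[1])
        x[idx] = x_s
        return x

    rng = np.random.default_rng(1)
    n, m, k = 256, 100, 5
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ x                                  # transmitted measurements

    kept = rng.random(m) > 0.3                 # ~30% of packets lost in the link
    x_hat = omp(A[kept], y[kept], k)           # reconstruct from what survived
    print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
    ```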

  5. Lossy compression of quality scores in genomic data.

    Science.gov (United States)

    Cánovas, Rodrigo; Moffat, Alistair; Turpin, Andrew

    2014-08-01

    Next-generation sequencing technologies are revolutionizing medicine. Data from sequencing technologies are typically represented as a string of bases, an associated sequence of per-base quality scores and other metadata, and in aggregate can require a large amount of space. The quality scores show how accurate the bases are with respect to the sequencing process, that is, how confident the sequencer is of having called them correctly, and are the largest component in datasets in which they are retained. Previous research has examined how to store sequences of bases effectively; here we add to that knowledge by examining methods for compressing quality scores. The quality values originate in a continuous domain, and so if a fidelity criterion is introduced, it is possible to introduce flexibility in the way these values are represented, allowing lossy compression over the quality score data. We present existing compression options for quality score data, and then introduce two new lossy techniques. Experiments measuring the trade-off between compression ratio and information loss are reported, including quantifying the effect of lossy representations on a downstream application that carries out single nucleotide polymorphism and insertion/deletion detection. The new methods are demonstrably superior to other techniques when assessed against the spectrum of possible trade-offs between storage required and fidelity of representation. An implementation of the methods described here is available at https://github.com/rcanovas/libCSAM. Supplementary data are available at Bioinformatics online.
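
    To give the flavor of lossy quality-score compression (this is not one of the paper's two new techniques), here is a sketch of Illumina-style 8-level binning: mapping scores to a small set of representatives shrinks the symbol alphabet and hence the entropy of the quality stream. The bin edges approximate the published Illumina scheme.

    ```python
    # Minimal sketch: 8-level binning of Phred quality scores.
    import numpy as np

    EDGES = [2, 10, 20, 25, 30, 35, 40]       # bin boundaries (Phred scores)
    REPS  = [0, 6, 15, 22, 27, 33, 37, 40]    # representative value per bin

    def bin_quality(q: int) -> int:
        """Map a Phred score to its bin representative."""
        return REPS[int(np.digitize(q, EDGES))]

    quals = [38, 12, 2, 25, 31, 40]
    print([bin_quality(q) for q in quals])    # fewer distinct symbols -> lower entropy
    ```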

  6. Spectral Distortion in Lossy Compression of Hyperspectral Data

    Directory of Open Access Journals (Sweden)

    Bruno Aiazzi

    2012-01-01

    Distortion allocation varying with wavelength in lossy compression of hyperspectral imagery is investigated, with the aim of minimizing the spectral distortion between original and decompressed data. The absolute angular error, or spectral angle mapper (SAM), is used to quantify spectral distortion, while radiometric distortions are measured by maximum absolute deviation (MAD) for near-lossless methods, for example, differential pulse code modulation (DPCM), or mean-squared error (MSE) for lossy methods, for example, spectral decorrelation followed by JPEG 2000. Two strategies of interband distortion allocation are compared: given a target average bit rate, distortion may either be set constant with wavelength, or be allocated proportionally to the noise level of each band, according to the virtually lossless protocol. Comparisons with the uncompressed originals show that the average SAM of radiance spectra is minimized by constant distortion allocation to radiance data. However, variable distortion allocation according to the virtually lossless protocol yields significantly lower SAM in the case of reflectance spectra obtained from compressed radiance data, when compared with constant distortion allocation at the same compression ratio.
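
    For reference, the SAM between an original and a decompressed pixel spectrum is just the angle between the two band vectors; a minimal sketch with illustrative names and values:

    ```python
    # Minimal sketch: spectral angle mapper (SAM) between two pixel spectra.
    import numpy as np

    def sam(x, y):
        """Spectral angle (radians) between pixel spectra x and y."""
        cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    orig  = np.array([0.31, 0.42, 0.58, 0.66])   # radiance across 4 bands
    recon = np.array([0.30, 0.43, 0.57, 0.67])   # after lossy compression
    print(sam(orig, recon))                      # small angle = low spectral distortion
    ```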

  7. Using off-the-shelf lossy compression for wireless home sleep staging.

    Science.gov (United States)

    Lan, Kun-Chan; Chang, Da-Wei; Kuo, Chih-En; Wei, Ming-Zhi; Li, Yu-Hung; Shaw, Fu-Zen; Liang, Sheng-Fu

    2015-05-15

    Recently, there has been increasing interest in the development of wireless home sleep staging systems that allow the patient to be monitored remotely while remaining in the comfort of their home. However, transmitting the large amount of polysomnography (PSG) data over the Internet is an important issue that needs to be considered. In this work, we aim to reduce the amount of PSG data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to classifying sleep stages. We examine the effects of off-the-shelf lossy compression on an all-night PSG dataset from 20 healthy subjects, in the context of automated sleep staging. The popular compression method Set Partitioning in Hierarchical Trees (SPIHT) was used, and a range of compression levels was selected in order to compress the signals with various degrees of loss. In addition, a rule-based automatic sleep staging method was used to automatically classify the sleep stages. Considering the criteria of clinical usefulness, the experimental results show that the system can achieve more than 60% energy saving with high accuracy (>84%) in classifying sleep stages by using a lossy compression algorithm like SPIHT. As far as we know, our study is the first to focus on how much loss can be tolerated in compressing complex multi-channel PSG data for sleep analysis. We demonstrate the feasibility of using lossy SPIHT compression for wireless home sleep staging.

  8. Progress with lossy compression of data from the Community Earth System Model

    Science.gov (United States)

    Xu, H.; Baker, A.; Hammerling, D.; Li, S.; Clyne, J.

    2017-12-01

    Climate models, such as the Community Earth System Model (CESM), generate massive quantities of data, particularly when run at high spatial and temporal resolutions. The burden of storage is further exacerbated by creating large ensembles, generating large numbers of variables, outputting at high frequencies, and duplicating data archives (to protect against disk failures). Applying lossy compression methods to CESM datasets is an attractive means of reducing data storage requirements, but ensuring that the loss of information does not negatively impact science objectives is critical. In particular, test methods are needed to evaluate whether critical features (e.g., extreme values and spatial and temporal gradients) have been preserved and to boost scientists' confidence in the lossy compression process. We will provide an overview of our progress in applying lossy compression to CESM output and describe our unique suite of metric tests that evaluate the impact of information loss. Further, we will describe our process for choosing an appropriate compression algorithm (and its associated parameters) given the diversity of CESM data (e.g., variables may be constant, smooth, change abruptly, contain missing values, or have large ranges). Traditional compression algorithms, such as those used for images, are not necessarily well suited for floating-point climate simulation data, and different methods may have different strengths and be more effective for certain types of variables than others. We will discuss our progress towards our ultimate goal of developing an automated multi-method parallel approach for compression of climate data that both maximizes data reduction and minimizes the impact of data loss on science results.
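
    A minimal sketch in the spirit of the metric suite described above (the actual tests are more extensive): two screening checks, maximum pointwise error and preservation of spatial gradients, applied to an original field and a stand-in lossy reconstruction.

    ```python
    # Minimal sketch: feature-preservation checks for a lossily compressed field.
    import numpy as np

    def max_abs_error(orig, recon):
        return np.abs(orig - recon).max()

    def gradient_rmse(orig, recon):
        gx_o, gy_o = np.gradient(orig)
        gx_r, gy_r = np.gradient(recon)
        return np.sqrt(np.mean((gx_o - gx_r) ** 2 + (gy_o - gy_r) ** 2))

    rng = np.random.default_rng(2)
    field = np.cumsum(rng.standard_normal((64, 64)), axis=1)   # smooth-ish test field
    recon = field + rng.normal(scale=1e-3, size=field.shape)   # stand-in for lossy output

    print("max abs error:", max_abs_error(field, recon))
    print("gradient RMSE:", gradient_rmse(field, recon))
    ```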

  9. Lossy compression for Animated Web Visualisation

    Science.gov (United States)

    Prudden, R.; Tomlinson, J.; Robinson, N.; Arribas, A.

    2017-12-01

    This talk will discuss a technique for lossy data compression specialised for web animation. We set ourselves the challenge of visualising a full forecast weather field as an animated 3D web page visualisation. This data is richly spatiotemporal; however, it is routinely communicated to the public as a 2D map, and scientists are largely limited to visualising data via static 2D maps or 1D scatter plots. We wanted to present Met Office weather forecasts in a way that represents all the generated data. Our approach was to repurpose the technology used to stream high definition videos. This enabled us to achieve high rates of compression, while being compatible with both web browsers and GPU processing. Since lossy compression necessarily involves discarding information, evaluating the results is an important and difficult problem. This is essentially a problem of forecast verification. The difficulty lies in deciding what it means for two weather fields to be "similar", as simple definitions such as mean squared error often lead to undesirable results. In the second part of the talk, I will briefly discuss some ideas for alternative measures of similarity.
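
    A minimal sketch of the core trick, under the assumption that each 2D time slice of a float field is rescaled to 8-bit frames that a video codec can ingest; the actual Met Office pipeline, codec settings, and color handling are not specified here.

    ```python
    # Minimal sketch: mapping a float field to codec-friendly 8-bit frames and back.
    import numpy as np

    def field_to_frames(field):
        """Scale a (time, y, x) float field to uint8 frames plus scale metadata."""
        lo, hi = float(field.min()), float(field.max())
        frames = np.round(255 * (field - lo) / (hi - lo)).astype(np.uint8)
        return frames, (lo, hi)                 # metadata needed to undo the scaling

    def frames_to_field(frames, meta):
        lo, hi = meta
        return lo + (hi - lo) * frames.astype(np.float32) / 255.0

    data = np.random.default_rng(2).random((10, 64, 64), dtype=np.float32)  # fake field
    frames, meta = field_to_frames(data)
    recon = frames_to_field(frames, meta)
    print("max error from 8-bit quantization:", np.abs(data - recon).max())
    ```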

  10. ChIPWig: a random access-enabling lossless and lossy compression method for ChIP-seq data.

    Science.gov (United States)

    Ravanmehr, Vida; Kim, Minji; Wang, Zhiying; Milenkovic, Olgica

    2018-03-15

    Chromatin immunoprecipitation sequencing (ChIP-seq) experiments are inexpensive and time-efficient, and result in massive datasets that introduce significant storage and maintenance challenges. To address the resulting Big Data problems, we propose a lossless and lossy compression framework specifically designed for ChIP-seq Wig data, termed ChIPWig. ChIPWig enables random access and summary-statistic lookups, and is based on the asymptotic theory of optimal point density design for nonuniform quantizers. We tested the ChIPWig compressor on 10 ChIP-seq datasets generated by the ENCODE consortium. On average, lossless ChIPWig reduced the file sizes to merely 6% of the original, and offered a 6-fold compression rate improvement compared to bigWig. The lossy feature further reduced file sizes 2-fold compared to the lossless mode, with little or no effect on peak calling and motif discovery using specialized NarrowPeaks methods. The compression and decompression speeds are of the order of 0.2 sec/MB using general purpose computers. The source code and binaries, implemented in C++, are freely available for download at https://github.com/vidarmehr/ChIPWig-v2. Supplementary data are available at Bioinformatics online.
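
    The nonuniform quantizer alluded to above can be illustrated with the classical 1D Lloyd iteration, the finite, iterative counterpart of asymptotic point-density design; the data distribution and level count below are illustrative, not taken from the paper.

    ```python
    # Minimal sketch: 1D Lloyd iteration for a nonuniform scalar quantizer.
    import numpy as np

    def lloyd(samples, levels, iters=50):
        """Fit `levels` quantizer representatives to the empirical distribution."""
        reps = np.quantile(samples, np.linspace(0, 1, levels + 2)[1:-1])  # init
        for _ in range(iters):
            edges = (reps[:-1] + reps[1:]) / 2          # nearest-neighbor cell borders
            cell = np.digitize(samples, edges)
            reps = np.array([samples[cell == i].mean() if np.any(cell == i) else reps[i]
                             for i in range(levels)])   # centroid update
        return reps

    rng = np.random.default_rng(3)
    values = rng.gamma(shape=2.0, scale=10.0, size=10_000)   # skewed, Wig-like values
    print(np.round(lloyd(values, levels=8), 2))    # more levels where the data mass is
    ```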

  11. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    Energy Technology Data Exchange (ETDEWEB)

    Di, Sheng; Cappello, Franck

    2018-01-01

    Since today's scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset, such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets, with similar compression/decompression time cost.
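
    A minimal sketch of the XOR-leading-zero quantity being maximized: two nearby doubles share their high-order bits, so XORing their bit patterns exposes a run of leading zeros that need not be stored. Function names are illustrative.

    ```python
    # Minimal sketch: leading-zero length of the XOR of two double bit patterns.
    import struct

    def bits(x: float) -> int:
        """Reinterpret a double as a 64-bit unsigned integer."""
        return struct.unpack("<Q", struct.pack("<d", x))[0]

    def leading_zero_bits(a: float, b: float) -> int:
        diff = bits(a) ^ bits(b)
        return 64 if diff == 0 else 64 - diff.bit_length()

    print(leading_zero_bits(1.2345678, 1.2345679))   # large: values are close
    print(leading_zero_bits(1.2345678, 8.7654321))   # small: values are far apart
    ```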

  12. Lossy compression of TPC data and trajectory tracking efficiency for the ALICE experiment

    International Nuclear Information System (INIS)

    Nicolaucig, A.; Ivanov, M.; Mattavelli, M.

    2003-01-01

    In this paper a quasi-lossless algorithm for the on-line compression of the data generated by the Time Projection Chamber (TPC) detector of the ALICE experiment at CERN is described. The algorithm is based on a lossy source code modeling technique, i.e. it is based on a source model which is lossy if samples of the TPC signal are considered one by one; conversely, the source model is lossless or quasi-lossless if some physical quantities that are of main interest for the experiment are considered. These quantities are the area and the location of the center of mass of each TPC signal pulse, representing the pulse charge and the time localization of the pulse. To evaluate the consequences of the error introduced by the lossy compression process, the results of the trajectory tracking algorithms that process the data off-line after the experiment are analyzed, in particular their sensitivity to the noise introduced by the compression. Two different versions of these off-line algorithms, performing cluster finding and particle tracking, are described, and the results on how they are affected by the lossy compression are reported. Entropy coding can be applied to the set of events defined by the source model to reduce the bit rate to the corresponding source entropy. Using TPC data simulated according to the expected ALICE TPC performance, the compression algorithm achieves a data reduction to between 34.2% and 23.7% of the original data rate, depending on the desired precision on the pulse center of mass. The number of operations per input symbol required to implement the algorithm is relatively low, so that a real-time implementation embedded in the TPC data acquisition chain using low-cost integrated electronics is a realistic option to effectively reduce the data storage cost of the ALICE experiment.
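
    A minimal sketch of the two preserved quantities for a sampled TPC pulse, computed the obvious way; the sample values are toys.

    ```python
    # Minimal sketch: pulse charge (area) and time center of mass of a TPC pulse.
    import numpy as np

    samples = np.array([0.0, 1.0, 4.0, 9.0, 7.0, 3.0, 1.0, 0.0])  # ADC counts per time bin
    t = np.arange(len(samples))                                   # time-bin indices

    area = samples.sum()                          # pulse charge
    centroid = (t * samples).sum() / area         # time localization of the pulse

    print(f"charge = {area}, center of mass = {centroid:.3f} time bins")
    ```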

  13. Lossy compression of TPC data and trajectory tracking efficiency for the ALICE experiment

    CERN Document Server

    Nicolaucig, A; Mattavelli, M

    2003-01-01

    In this paper a quasi-lossless algorithm for the on-line compression of the data generated by the Time Projection Chamber (TPC) detector of the ALICE experiment at CERN is described. The algorithm is based on a lossy source code modeling technique, i.e. it is based on a source model which is lossy if samples of the TPC signal are considered one by one; conversely, the source model is lossless or quasi-lossless if some physical quantities that are of main interest for the experiment are considered. These quantities are the area and the location of the center of mass of each TPC signal pulse, representing the pulse charge and the time localization of the pulse. To evaluate the consequences of the error introduced by the lossy compression process, the results of the trajectory tracking algorithms that process the data off-line after the experiment are analyzed, in particular their sensitivity to the noise introduced by the compression. Two different versions of these off-line algorithms are described,...

  14. An Evaluation Framework for Lossy Compression of Genome Sequencing Quality Values

    OpenAIRE

    Alberti, Claudio; Daniels, Noah; Hernaez, Mikel; Voges, Jan; Goldfeder, Rachel L.; Hernandez-Lopez, Ana A.; Mattavelli, Marco; Berger, Bonnie

    2016-01-01

    This paper provides the specification and an initial validation of an evaluation framework for the comparison of lossy compressors of genome sequencing quality values. The goal is to define reference data, test sets, tools and metrics that shall be used to evaluate the impact of lossy compression of quality values on human genome variant calling. The functionality of the framework is validated referring to two state-of-the-art genomic compressors. This work has been spurred by the current act...

  15. A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images

    Science.gov (United States)

    Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo

    2007-03-01

    Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages over other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding and an embedded bit-stream. However, it is still difficult to find an objective method to evaluate the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluating image quality by using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We developed a prototype CAD system to classify these images into benign and malignant ones, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios from lossless to lossy, then used the CAD system to classify the cases at each compression ratio and compared the resulting ROC curves. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of the input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases, with small fluctuations, as the compression ratio increases.

  16. Design of a receiver operating characteristic (ROC) study of 10:1 lossy image compression

    Science.gov (United States)

    Collins, Cary A.; Lane, David; Frank, Mark S.; Hardy, Michael E.; Haynor, David R.; Smith, Donald V.; Parker, James E.; Bender, Gregory N.; Kim, Yongmin

    1994-04-01

    The digital archiving system at Madigan Army Medical Center (MAMC) uses a 10:1 lossy data compression algorithm for most forms of computed radiography. A systematic study on the potential effect of lossy image compression on patient care has been initiated with a series of studies focused on specific diagnostic tasks. The studies are based upon the receiver operating characteristic (ROC) method of analysis for diagnostic systems. The null hypothesis is that observer performance with approximately 10:1 compressed and decompressed images is not different from using original, uncompressed images for detecting subtle pathologic findings seen on computed radiographs of bone, chest, or abdomen, when viewed on a high-resolution monitor. Our design involves collecting cases from eight pathologic categories. Truth is determined by committee, using confirmatory studies performed during routine clinical practice whenever possible. Software has been developed to aid in case collection and to allow reading of the cases for the study using stand-alone Siemens Litebox workstations. Data analysis uses two methods: ROC analysis and free-response ROC (FROC) analysis. This study will be one of the largest ROC/FROC studies of its kind and could benefit clinical radiology practice using PACS technology. The study design and results from a pilot FROC study are presented.

  17. The study of diagnostic accuracy of chest nodules by using different compression methods

    International Nuclear Information System (INIS)

    Liang Zhigang; Li Kuncheng; Zhang Jinghong; Liu Shuliang

    2005-01-01

    Background: The purpose of this study was to compare the diagnostic accuracy for small nodules in the chest under different compression methods. Method: Two radiologists, each with 5 years' experience, interpreted 39 chest images twice, once using the lossless and once using the lossy compression method, with a 3-week interval between readings. The image browser used the Unisight software provided by the Atlastiger Company in Shanghai. The interpretation results were analyzed with the ROCKIT software and the ROC curves were plotted in Excel 2002. Results: In receiver operating characteristic studies scoring the presence or absence of nodules, the lossy-compressed images showed no statistical difference compared with the lossless-compressed images. Conclusion: The diagnostic accuracy for chest nodules using the lossless and lossy compression methods showed no significant difference; the lossy compression method can therefore be used to transmit and archive chest images with nodules.

  18. An Evaluation Framework for Lossy Compression of Genome Sequencing Quality Values.

    Science.gov (United States)

    Alberti, Claudio; Daniels, Noah; Hernaez, Mikel; Voges, Jan; Goldfeder, Rachel L; Hernandez-Lopez, Ana A; Mattavelli, Marco; Berger, Bonnie

    2016-01-01

    This paper provides the specification and an initial validation of an evaluation framework for the comparison of lossy compressors of genome sequencing quality values. The goal is to define the reference data, test sets, tools and metrics that shall be used to evaluate the impact of lossy compression of quality values on human genome variant calling. The functionality of the framework is validated with reference to two state-of-the-art genomic compressors. This work has been spurred by the current activity within the ISO/IEC SC29/WG11 technical committee (a.k.a. MPEG), which is investigating the possibility of starting a standardization activity for genomic information representation.

  19. Impact of lossy compression on diagnostic accuracy of radiographs for periapical lesions

    Science.gov (United States)

    Eraso, Francisco E.; Analoui, Mostafa; Watson, Andrew B.; Rebeschini, Regina

    2002-01-01

    OBJECTIVES: The purpose of this study was to evaluate lossy Joint Photographic Experts Group compression for endodontic pretreatment digital radiographs. STUDY DESIGN: Fifty clinical charge-coupled device-based digital radiographs depicting periapical areas were selected. Each image was compressed at compression ratios of 2, 4, 8, 16, 32, 48, and 64. One root per image was marked for examination. Images were randomized and viewed by four clinical observers under standardized viewing conditions. Each observer read the image set three times, with at least two weeks between each reading. Three pre-selected sites per image (mesial, distal, apical) were scored on a five-point confidence scale. A panel of three examiners scored the uncompressed images, producing a consensus score for each site. The consensus score was used as the baseline for assessing the impact of lossy compression on the diagnostic value of the images. The mean absolute error between consensus and observer scores was computed for each observer, site, and reading session. RESULTS: Balanced one-way analysis of variance for all observers indicated that for compression ratios 48 and 64, there was a significant difference between the mean absolute error of uncompressed and compressed images (P <.05). After converting the five-point scores to two-level diagnostic values, diagnostic accuracy was strongly correlated (R² = 0.91) with the compression ratio. CONCLUSION: The results of this study suggest that high compression ratios can have a severe impact on the diagnostic quality of digital radiographs for the detection of periapical lesions.

  20. The effects of lossy compression on diagnostically relevant seizure information in EEG signals.

    Science.gov (United States)

    Higgins, G; McGinley, B; Faul, S; McEvoy, R P; Glavin, M; Marnane, W P; Jones, E

    2013-01-01

    This paper examines the effects of compression on EEG signals, in the context of automated detection of epileptic seizures. Specifically, it examines the use of lossy compression on EEG signals in order to reduce the amount of data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to diagnosing epileptic seizures. Two popular compression methods, JPEG2000 and SPIHT, were used. A range of compression levels was selected for both algorithms in order to compress the signals with varying degrees of loss. This compression was applied to the database of epileptiform data provided by the University of Freiburg, Germany. A real-time EEG analysis for event detection (automated seizure detection) system was used in place of a trained clinician for scoring the reconstructed data. Results demonstrate that compression by a factor of up to 120:1 can be achieved, with minimal loss in seizure detection performance as measured by the area under the receiver operating characteristic curve of the seizure detection system.

  1. Recent advances in lossy compression of scientific floating-point data

    Science.gov (United States)

    Lindstrom, P.

    2017-12-01

    With a continuing exponential trend in supercomputer performance, ever larger data sets are being generated through numerical simulation. Bandwidth and storage capacity are, however, not keeping pace with this increase in data size, causing significant data movement bottlenecks in simulation codes and substantial monetary costs associated with archiving vast volumes of data. Worse yet, ever smaller fractions of data generated can be stored for further analysis, where scientists frequently rely on decimating or averaging large data sets in time and/or space. One way to mitigate these problems is to employ data compression to reduce data volumes. However, lossless compression of floating-point data can achieve only very modest size reductions on the order of 10-50%. We present ZFP and FPZIP, two state-of-the-art lossy compressors for structured floating-point data that routinely achieve one to two orders of magnitude reduction with little to no impact on the accuracy of visualization and quantitative data analysis. We provide examples of the use of such lossy compressors in climate and seismic modeling applications to effectively accelerate I/O and reduce storage requirements. We further discuss how the design decisions behind these and other compressors impact error distributions and other statistical and differential properties, including derived quantities of interest relevant to each science application.

  2. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    Science.gov (United States)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.

  3. The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations

    Science.gov (United States)

    Orf, L.

    2017-12-01

    In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck where I/O can dominate wallclock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces the amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and includes compression options such as ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain. We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress...
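
    As an illustration of ZFP's fixed-accuracy mode, the sketch below uses the zfpy Python bindings (an assumption: `pip install zfpy`; LOFS itself calls ZFP from its C/HDF5 side) to bound the maximum absolute error per value:

    ```python
    # Minimal sketch: fixed-accuracy ZFP compression of a smooth 3D field via zfpy.
    import numpy as np
    import zfpy

    i, j, k = np.indices((64, 64, 64))
    field = np.sin(i / 8.0) + np.cos(j / 8.0) + np.sin(k / 8.0)   # smooth stand-in

    buf = zfpy.compress_numpy(field, tolerance=1e-3)   # fixed-accuracy mode
    recon = zfpy.decompress_numpy(buf)

    print("compression ratio:", field.nbytes / len(buf))
    print("max abs error:", np.abs(field - recon).max())   # within the tolerance
    ```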

  4. Evaluation of mammogram compression efficiency

    International Nuclear Information System (INIS)

    Przelaskowski, A.; Surowski, P.; Kukula, A.

    2005-01-01

    Lossy image coding significantly improves performance over lossless methods, but reliable control of diagnostic accuracy for compressed images is necessary. The acceptable range of compression ratios must be safe with respect to as many objective criteria as possible. This study evaluates the compression efficiency of digital mammograms in both numerically lossless (reversible) and lossy (irreversible) modes. Effective compression methods and concepts were examined to increase archiving and telediagnosis performance. Lossless compression, as the primary applicable tool for medical applications, was verified on a set of 131 mammograms. Moreover, nine radiologists participated in the evaluation of lossy compression of mammograms. Subjective rating of diagnostically important features produced a set of mean scores for each test image. The lesion detection test resulted in binary decision data that were analyzed statistically. The radiologists rated and interpreted malignant and benign lesions, representative pathology symptoms, and other structures susceptible to compression distortions contained in 22 original and 62 reconstructed mammograms. Test mammograms were collected in two radiology centers over three years and then selected according to diagnostic content suitable for an evaluation of compression effects. Lossless compression efficiency of the tested coders varied, but CALIC, JPEG-LS, and SPIHT performed the best. The evaluation of lossy compression effects on detection ability was based on ROC-like analysis. Assuming a two-sided significance level of p=0.05, the null hypothesis that lower bit rate reconstructions are as useful for diagnosis as the originals was rejected in sensitivity tests with 0.04 bpp mammograms. However, verification of the same hypothesis with 0.1 bpp reconstructions suggested their acceptance. Moreover, the 1 bpp reconstructions were rated very similarly to the original mammograms in the diagnostic quality evaluation test, but the...

  5. Robust steganographic method utilizing properties of MJPEG compression standard

    Directory of Open Access Journals (Sweden)

    Jakub Oravec

    2015-06-01

    This article presents the design of a steganographic method which uses a video container as cover data. The video track was recorded by a webcam and encoded with the MJPEG compression standard. The proposed method also takes into account the effects of lossy compression. The embedding process is realized by swapping the positions of transform coefficients computed by the discrete cosine transform. The article discusses the possibilities, the techniques used, and the advantages and drawbacks of the chosen solution. Results are presented at the end of the article.
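
    A minimal sketch of coefficient-swapping steganography in the DCT domain, assuming one bit per 8x8 block and an illustrative mid-frequency coefficient pair; the positions and policy are not necessarily those of the paper.

    ```python
    # Minimal sketch: embed one bit per block by ordering a DCT coefficient pair.
    import numpy as np
    from scipy.fft import dctn, idctn

    P1, P2 = (2, 3), (3, 2)            # mid-frequency coefficient pair (illustrative)

    def embed_bit(block, bit):
        c = dctn(block, norm="ortho")
        a, b = c[P1], c[P2]
        if (a >= b) != bool(bit):      # enforce a >= b for bit 1, a < b for bit 0
            c[P1], c[P2] = b, a        # swap the coefficients
        return idctn(c, norm="ortho")

    def extract_bit(block):
        c = dctn(block, norm="ortho")
        return int(c[P1] >= c[P2])

    block = np.random.default_rng(4).random((8, 8)) * 255
    stego = embed_bit(block, 1)
    print(extract_bit(stego))          # -> 1
    ```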

  6. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    Science.gov (United States)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

    The paper deals with adaptive JPEG lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second is based on predicting the noise and blur parameters from analysis of the RAW image, under quite general assumptions about the transformations the image will undergo at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode, but it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on a large number of real-life color images acquired by digital cameras and are shown to provide more than a twofold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.

  7. 2D-RBUC for efficient parallel compression of residuals

    Science.gov (United States)

    Đurđević, Đorđe M.; Tartalja, Igor I.

    2018-02-01

    In this paper, we present a method for lossless compression of residuals with efficient SIMD-parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent models of terrains. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed a data decompression algorithm for modern GPU architectures already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades a small efficiency degradation for a non-negligible compression ratio benefit (measured up to 91%).

  8. A Hybrid Data Compression Scheme for Power Reduction in Wireless Sensors for IoT.

    Science.gov (United States)

    Deepu, Chacko John; Heng, Chun-Huat; Lian, Yong

    2017-04-01

    This paper presents a novel data compression and transmission scheme for power reduction in Internet-of-Things (IoT) enabled wireless sensors. In the proposed scheme, data is compressed with both lossy and lossless techniques, so as to enable a hybrid transmission mode, support adaptive data rate selection and save power in wireless transmission. Applying the method to electrocardiogram (ECG) signals, the data is first compressed using a lossy compression technique with a high compression ratio (CR). The residual error between the original data and the decompressed lossy data is preserved using entropy coding, enabling a lossless restoration of the original data when required. Average CRs of 2.1× and 7.8× were achieved for lossless and lossy compression, respectively, on the MIT-BIH database. The power reduction is demonstrated using a Bluetooth transceiver; transmission power is reduced to 18% for lossy and 53% for lossless transmission, respectively. Options for a hybrid transmission mode, adaptive rate selection and system-level power reduction make the proposed scheme attractive for IoT wireless sensors in healthcare applications.
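
    A minimal sketch of the hybrid idea, with a crude divide-and-round stand-in for the paper's lossy ECG coder and zlib standing in for its entropy coder: the lossy stream is sent by default, and the compressed residual only when lossless restoration is required.

    ```python
    # Minimal sketch: lossy stream plus entropy-coded residual = lossless on demand.
    import numpy as np
    import zlib

    x = np.round(100 * np.sin(np.linspace(0, 20, 1000))).astype(np.int16)  # fake ECG

    step = 8
    lossy = (x // step).astype(np.int16)             # coarse lossy representation
    residual = (x - lossy * step).astype(np.int16)   # what the lossy coder discarded

    lossy_bytes = zlib.compress(lossy.tobytes())     # always transmitted
    resid_bytes = zlib.compress(residual.tobytes())  # transmitted only when needed

    restored = np.frombuffer(zlib.decompress(lossy_bytes), dtype=np.int16) * step \
             + np.frombuffer(zlib.decompress(resid_bytes), dtype=np.int16)
    assert np.array_equal(restored, x)               # exact (lossless) restoration
    print(len(lossy_bytes), len(resid_bytes), x.nbytes)
    ```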

  9. A singular-value method for reconstruction of nonradial and lossy objects.

    Science.gov (United States)

    Jiang, Wei; Astheimer, Jeffrey; Waag, Robert

    2012-03-01

    Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.
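
    The reduced-rank representation at the heart of these algorithms can be sketched with a truncated singular-value decomposition; the operator below is a random stand-in with a decaying spectrum, not an actual scattering operator.

    ```python
    # Minimal sketch: reduced-rank representation of an operator via truncated SVD.
    import numpy as np

    rng = np.random.default_rng(4)
    S = rng.standard_normal((40, 40)) @ np.diag(0.5 ** np.arange(40)) \
        @ rng.standard_normal((40, 40))          # operator with decaying spectrum

    U, s, Vh = np.linalg.svd(S)
    r = int((s > 1e-6 * s[0]).sum())             # keep singular values above a tolerance
    S_r = U[:, :r] @ np.diag(s[:r]) @ Vh[:r]     # reduced-rank representation

    print("rank kept:", r,
          "relative error:", np.linalg.norm(S - S_r) / np.linalg.norm(S))
    ```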

  10. Selectively Lossy, Lossless, and/or Error Robust Data Compression Method

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Lossless compression techniques provide efficient compression of hyperspectral satellite data. The present invention combines the advantages of a clustering with...

  11. Lossy image compression for digital medical imaging systems

    Science.gov (United States)

    Wilhelm, Paul S.; Haynor, David R.; Kim, Yongmin; Nelson, Alan C.; Riskin, Eve A.

    1990-07-01

    Image compression at ratios of 10:1 or greater could make PACS much more responsive and economically attractive. This paper describes a protocol for subjective and objective evaluation of the fidelity of compressed/decompressed images to the originals and presents the results of its application to four representative and promising compression methods. The methods examined are predictive pruned tree-structured vector quantization, fractal compression, the discrete cosine transform with equal weighting of block bit allocation, and the discrete cosine transform with human visual system weighting of block bit allocation. Vector quantization is theoretically capable of producing the best compressed images, but has proven difficult to implement effectively. It has the advantage that it can reconstruct images quickly through a simple lookup table. Disadvantages are that codebook training is required, the method is computationally intensive, and achieving optimum performance would require prohibitively long vector dimensions. Fractal compression is a relatively new technique, but has produced satisfactory results while being computationally simple. It is fast at both image compression and image reconstruction. Discrete cosine transform techniques reproduce images well, but have traditionally been hampered by the need for intensive computing to compress and decompress images. A protocol was developed for side-by-side observer comparison of reconstructed images with originals. Three 1024 X 1024 CR (Computed Radiography) images and two 512 X 512 X-ray CT images were viewed at six bit rates (0.2, 0.4, 0.6, 0.9, 1.2, and 1.5 bpp for CR, and 1.0, 1.3, 1.6, 1.9, 2.2, 2.5 bpp for X-ray CT) by nine radiologists at the University of Washington Medical Center. The CR images were viewed on a Pixar II Megascan (2560 X 2048) monitor and the CT images on a Sony (1280 X 1024) monitor. The radiologists' subjective evaluations of image fidelity were compared to...

  12. Correlation and image compression for limited-bandwidth CCD.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Douglas G.

    2005-07-01

    As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression in the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.

  13. Efficient lossy compression implementations of hyperspectral images: tools, hardware platforms, and comparisons

    Science.gov (United States)

    García, Aday; Santos, Lucana; López, Sebastián.; Callicó, Gustavo M.; Lopez, Jose F.; Sarmiento, Roberto

    2014-05-01

    Efficient onboard satellite hyperspectral image compression represents a necessity and a challenge for current and future space missions. Therefore, it is mandatory to provide hardware implementations for this type of algorithm in order to achieve the constraints required for onboard compression. In this work, we implement the Lossy Compression for Exomars (LCE) algorithm on an FPGA by means of high-level synthesis (HLS) in order to shorten the design cycle. Specifically, we use the CatapultC HLS tool to obtain a VHDL description of the LCE algorithm from C-language specifications. Two different approaches are followed for HLS: on one hand, introducing the whole C-language description into CatapultC; on the other hand, splitting the C-language description into functional modules to be implemented independently with CatapultC, connecting and controlling them by an RTL description code without HLS. In both cases the goal is to obtain an FPGA implementation. We explain the several changes applied to the original C-language source code in order to optimize the results obtained by CatapultC for both approaches. Experimental results show low area occupancy of less than 15% for a SRAM-based Virtex-5 FPGA and a maximum frequency above 80 MHz. Additionally, the LCE compressor was implemented on an RTAX2000S antifuse-based FPGA, showing an area occupancy of 75% and a frequency around 53 MHz. All this serves to demonstrate that the LCE algorithm can be efficiently executed on an FPGA onboard a satellite. A comparison between both implementation approaches is also provided. The performance of the algorithm is finally compared with implementations on other technologies, specifically a graphics processing unit (GPU) and a single-threaded CPU.

  14. Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach

    Science.gov (United States)

    Danyali, Habibiollah; Mertins, Alfred

    2011-01-01

    In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image archiving and transmission applications. PMID:22606653

  15. Reducing disk storage of full-3D seismic waveform tomography (F3DT) through lossy online compression

    Science.gov (United States)

    Lindstrom, Peter; Chen, Po; Lee, En-Jui

    2016-08-01

    Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback of the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into the finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to disk. To compute the data sensitivity kernels, we read the compressed strain fields from disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also significantly improve the overall I/O performance of the entire F3DT-SI workflow. The integration of the lossy online compressor may open up the possibility of wide adoption of F3DT-SI in routine seismic tomography practice in the near future.

  16. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics.
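
    For reference, the rate-distortion function appearing in the analogy is the standard one (notation assumed, not copied from the paper):

    ```latex
    % Rate-distortion function of a memoryless source X under distortion measure d:
    R(D) \;=\; \min_{p(\hat{x}\mid x)\,:\,\mathbb{E}[d(X,\hat{X})] \le D} I(X;\hat{X})
    % In the analogy, the free energy difference of the contracting chain is
    % proportional to R(D), and the contracting force to its derivative R'(D).
    ```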

  17. A Novel ECG Data Compression Method Using Adaptive Fourier Decomposition With Security Guarantee in e-Health Applications.

    Science.gov (United States)

    Ma, JiaLi; Zhang, TanTan; Dong, MingChui

    2015-05-01

    This paper presents a novel electrocardiogram (ECG) compression method for e-health applications, based on an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: the first-stage AFD executes efficient lossy compression with high fidelity; the second-stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an average compression ratio (CR) of 17.6-44.5 and a percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing the compression performance into an unexploited region. As such, this paper provides an attractive candidate ECG compression method for pervasive e-health applications.
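
    The two figures of merit quoted above have standard definitions (notation assumed), for an original signal x[n] and its reconstruction:

    ```latex
    % Compression ratio and percentage root-mean-square difference (PRD):
    \mathrm{CR} = \frac{L_{\mathrm{original}}}{L_{\mathrm{compressed}}},
    \qquad
    \mathrm{PRD} = 100\% \times
    \sqrt{\frac{\sum_{n}\bigl(x[n]-\hat{x}[n]\bigr)^{2}}{\sum_{n} x[n]^{2}}}
    ```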

  18. Subjective evaluation of compressed image quality

    Science.gov (United States)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we subjectively evaluated the quality of medical images compressed with two different methods: an intraframe and an interframe coding algorithm. The raw evaluation data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Analysis of variance was also used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is rated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different levels: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  19. Hyperspectral image compressing using wavelet-based method

    Science.gov (United States)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which demands transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years, and compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation of object identification performance on the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explored the spectral cross-correlation between different bands and proposed an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method consists of three main steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the hyper correlation matrix of the hyperspectral images between different bands; then a wavelet-based algorithm is applied to each subspace; finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
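
    The three-step pipeline lends itself to a compact sketch. The Python/NumPy fragment below is a loose stand-in for the first step only (grouping spectrally adjacent, highly correlated bands into subspaces); the threshold, function names and grouping rule are illustrative assumptions, and the wavelet and PCA stages are omitted.

        import numpy as np

        def partition_bands(cube, threshold=0.95):
            """Group adjacent bands whose pairwise correlation stays above a
            threshold, a crude proxy for the paper's hyper correlation matrix
            step. cube: (bands, rows, cols)."""
            flat = cube.reshape(cube.shape[0], -1).astype(np.float64)
            corr = np.corrcoef(flat)               # band-by-band correlation
            groups, current = [], [0]
            for b in range(1, flat.shape[0]):
                if corr[b, b - 1] >= threshold:    # still well correlated
                    current.append(b)
                else:                              # correlation drops: new subspace
                    groups.append(current)
                    current = [b]
            groups.append(current)
            return groups

        cube = np.random.rand(100, 64, 64)         # stand-in hyperspectral cube
        subspaces = partition_bands(cube, threshold=0.9)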

  20. Modeling of Lossy Inductance in Moving-Coil Loudspeakers

    DEFF Research Database (Denmark)

    Kong, Xiao-Peng; Agerkvist, Finn T.; Zeng, Xin-Wu

    2015-01-01

    The electrical impedance of moving-coil loudspeakers is dominated by the lossy inductance in the high-frequency range. Using the equivalent electrical circuit method, a new model for the lossy inductance, based on separate functions for the magnitude and phase of the impedance, is presented...

  1. A MODIFIED EMBEDDED ZERO-TREE WAVELET METHOD FOR MEDICAL IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    T. Celine Therese Jenny

    2010-11-01

    Full Text Available The Embedded Zero-tree Wavelet (EZW) is a lossy compression method that allows for progressive transmission of a compressed image. By exploiting the natural zero-trees found in a wavelet-decomposed image, the EZW algorithm is able to encode large portions of insignificant regions of a still image with a minimal number of bits. The upshot of this encoding is an algorithm that is able to achieve relatively high peak signal-to-noise ratios (PSNR) at high compression levels. A vector quantization (VQ) method can then be performed as a post-processing step to reduce the coded file size: VQ reduces the redundancy of the image data so that it can be stored or transmitted in an efficient form. It is demonstrated by experimental results that the proposed method outperforms several well-known lossless image compression techniques for still images that contain 256 colors or less.

  2. New approach development for solution of cloning results detection problem in lossy saved digital image

    Directory of Open Access Journals (Sweden)

    A.A. Kobozeva

    2016-09-01

    Full Text Available The paper considers the problem of detecting the results of digital image falsification performed by cloning – one of the tools most often used in all modern graphic editors. Aim: The aim of the work is the further development of the authors' earlier approach to detecting cloning when the cloned image is saved in a lossy format. Materials and Methods: The further development of a new approach to detecting the results of cloning in a digital image is presented. The approach is based on tracking, during the compression process, small changes in the volume of the cylindrical body whose generatrix is parallel to the OZ axis, which is bounded above by the plot of the function interpolating the brightness matrix of the analyzed image and below by the XOY plane. Results: The proposed approach is adapted to compression of the cloned image with an arbitrary compression quality factor (compression ratio). The soundness of the approach is shown for cloned images compressed with algorithms other than the JPEG standard: JPEG2000, and compression using low-rank approximations of the image matrix (matrix blocks). The results of a computational experiment are given. It is shown that the developed approach can be used to detect the results of cloning in digital video under lossy compression performed after the cloning process.

  3. Radiologic image compression -- A review

    International Nuclear Information System (INIS)

    Wong, S.; Huang, H.K.; Zaremba, L.; Gooden, D.

    1995-01-01

    The objective of radiologic image compression is to reduce the data volume of, and to achieve a low bit rate in, the digital representation of radiologic images without perceived loss of image quality. However, the demand for transmission bandwidth and storage space in the digital radiology environment, especially picture archiving and communication systems (PACS) and teleradiology, and the proliferating use of various imaging modalities, such as magnetic resonance imaging, computed tomography, ultrasonography, nuclear medicine, computed radiography, and digital subtraction angiography, continue to outstrip the capabilities of existing technologies. The availability of lossy coding techniques for clinical diagnoses further raises many complex legal and regulatory issues. This paper reviews the recent progress of lossless and lossy radiologic image compression and presents the legal challenges of using lossy compression of medical records. To do so, the authors first describe the fundamental concepts of radiologic imaging and digitization. Then, the authors examine current compression technology in the field of medical imaging and discuss important regulatory policies and legal questions facing the use of compression in this field. The authors conclude with a summary of future challenges and research directions. 170 refs

  4. A new method for simplification and compression of 3D meshes

    OpenAIRE

    Attene, Marco

    2001-01-01

    We focus on the lossy compression of manifold triangle meshes. Our SwingWrapper approach partitions the surface of an original mesh M into simply-connected regions, called triangloids. We compute a new mesh M'. Each triangle of M' is a close approximation of a pseudo-triangle of M. By construction, the connectivity of M' is fairly regular and can be compressed to less than a bit per triangle using EdgeBreaker or one of the other recently developed schemes. The locations of the vertices of M' ...

  5. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

    Image compression plays an important role in many applications like medical imaging, televideo conferencing, remote sensing, document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray-scale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image size or image stream size is too large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless data compression. The wavelet method used in this project is a lossless compression method, with which the exact original mammography image data can be recovered. In this project, mammography images are digitized using a Vider Sierra Plus digitizer, and the digitized images are compressed using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software is used to perform all of the calculations and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)

  6. Image compression software for the SOHO LASCO and EIT experiments

    Science.gov (United States)

    Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis

    1994-01-01

    This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle Spectrometric Coronagraph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for the use of SOHO investigators who need to understand the results of SOHO compression in order to make better use of their allocated transmission bits.

  7. A New Algorithm for the On-Board Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Raúl Guerra

    2018-03-01

    Full Text Available Hyperspectral sensors are able to provide information that is useful for many different applications. However, the huge amounts of data collected by these sensors are not exempt from drawbacks, especially in remote sensing environments where the hyperspectral images are collected on board satellites and need to be transferred to the earth's surface. In this situation, efficient compression of the hyperspectral images is mandatory in order to save bandwidth and storage space. Lossless compression algorithms have traditionally been preferred, in order to preserve all the information present in the hyperspectral cube for scientific purposes, despite their limited compression ratio. Nevertheless, the increasing data rate of new-generation sensors is making the need for higher compression ratios more critical, making it necessary to use lossy compression techniques. A new transform-based lossy compression algorithm, namely the Lossy Compression Algorithm for Hyperspectral Image Systems (HyperLCA), is proposed in this manuscript. This compressor has been developed to achieve high compression ratios with good compression performance at a reasonable computational burden. An extensive set of experiments has been performed in order to evaluate the goodness of the proposed HyperLCA compressor using different calibrated and uncalibrated hyperspectral images from the AVIRIS and Hyperion sensors. The results provided by the proposed HyperLCA compressor have been evaluated and compared against those produced by the most relevant state-of-the-art compression solutions. The theoretical and experimental evidence indicates that the proposed algorithm represents an excellent option for lossy compression of hyperspectral images, especially for applications where the available computational resources are limited, such as on-board scenarios.

  8. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  9. Effects of MR image compression on tissue classification quality

    International Nuclear Information System (INIS)

    Santalla, H; Meschino, G; Ballarin, V

    2007-01-01

    It is known that image compression is required to optimize storage in memory. Moreover, transmission speed can be significantly improved. Lossless compression is used without controversy in medicine, though its benefits are limited. If we compress images lossily, the image cannot be totally recovered; we can only recover an approximation. At this point the definition of 'quality' becomes essential. What do we understand by 'quality'? How can we evaluate a compressed image? Quality in images is an attribute with several definitions and interpretations, which actually depend on the posterior use we want to give them. This work proposes a quantitative analysis of quality for lossy compressed Magnetic Resonance (MR) images, and of their influence on automatic tissue classification accomplished with these images

  10. High thermal conductivity lossy dielectric using co-densified multilayer configuration

    Science.gov (United States)

    Tiegs, Terry N.; Kiggans, Jr., James O.

    2003-06-17

    Systems and methods are described for lossy dielectrics. A method of manufacturing a lossy dielectric includes providing at least one high-dielectric-loss layer, providing at least one high-thermal-conductivity, electrically insulating layer adjacent to it, and then densifying the two together. The systems and methods provide advantages because the lossy dielectrics are less costly and more environmentally friendly than the available alternatives.

  11. A Lossy Counting-Based State of Charge Estimation Method and Its Application to Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Hong Zhang

    2015-12-01

    Full Text Available Estimating the residual capacity or state of charge (SoC) of commercial batteries on-line, without destroying them or interrupting the power supply, is quite a challenging task for electric vehicle (EV) designers. Many Coulomb counting-based methods have been used to calculate the remaining capacity in EV batteries or other portable devices. The main disadvantages of these methods are the cumulative error and the time-varying Coulombic efficiency, which are greatly influenced by the operating state (SoC, temperature and current). To deal with this problem, we propose a lossy counting-based Coulomb counting method for estimating the available capacity or SoC. The initial capacity of the tested battery is obtained from the open circuit voltage (OCV). The charging/discharging efficiencies used for compensating the Coulombic losses are calculated by the lossy counting-based method. The measurement drift resulting from the current sensor is amended with the distorted Coulombic efficiency matrix. Simulations and experimental results show that the proposed method is both effective and convenient.
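
    For orientation, a bare-bones Coulomb counter with a fixed Coulombic efficiency is sketched below; the paper's actual contribution, the lossy counting scheme that estimates time-varying efficiencies and corrects sensor drift, is not reproduced, and all names and values here are illustrative.

        import numpy as np

        def soc_coulomb_counting(current, dt, soc0, capacity_ah, eta=0.98):
            """Plain Coulomb counting. current in A (positive = discharge),
            dt in s, capacity in Ah. eta models charge acceptance losses."""
            q = soc0 * capacity_ah * 3600.0                # remaining charge, C
            soc = np.empty(len(current))
            for i, amps in enumerate(current):
                stored = eta * amps if amps < 0 else amps  # losses while charging
                q -= stored * dt
                soc[i] = np.clip(q / (capacity_ah * 3600.0), 0.0, 1.0)
            return soc

        # One hour of 1 A discharge on a 2 Ah cell, starting from 90% SoC
        profile = np.ones(3600)
        print(soc_coulomb_counting(profile, 1.0, 0.9, 2.0)[-1])   # -> 0.4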

  12. Edge-based compression of cartoon-like images with homogeneous diffusion

    DEFF Research Database (Denmark)

    Mainberger, Markus; Bruhn, Andrés; Weickert, Joachim

    2011-01-01

    Edges provide semantically important image features. In this paper a lossy compression method for cartoon-like images is presented, which is based on edge information. Edges together with some adjacent grey/colour values are extracted and encoded using a classical edge detector, binary compressio...

  13. Integrated Circuit Interconnect Lines on Lossy Silicon Substrate with Finite Element Method

    OpenAIRE

    Sarhan M. Musa; Matthew N. O. Sadiku

    2014-01-01

    The silicon substrate has a significant effect on the inductance parameter of a lossy interconnect line on an integrated circuit. It is essential to take this into account in determining the transmission line electrical parameters. In this paper, a new quasi-TEM capacitance and inductance analysis of multiconductor multilayer interconnects is successfully demonstrated using the finite element method (FEM). We specifically illustrate the electrostatic modeling of single and coupled in...

  14. Cosmological Particle Data Compression in Practice

    Science.gov (United States)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from focusing only on compression rates to including run-times and scalability. In recent years several compression techniques for cosmological data have become available, both lossy and lossless. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. It focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
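
    As a concrete example of the lossless side of such a pipeline, the fragment below round-trips a particle array through Blosc, assuming the python-blosc bindings are installed; the codec and level are arbitrary choices, and random data is only a stand-in (real simulation fields compress far better).

        import numpy as np
        import blosc  # python-blosc bindings for the Blosc meta-compressor

        particles = np.random.rand(1_000_000, 3)   # stand-in particle positions

        packed = blosc.compress(particles.tobytes(),
                                typesize=8,        # float64 -> 8-byte shuffle units
                                cname='zstd',      # one codec inside Blosc
                                clevel=5,
                                shuffle=blosc.SHUFFLE)
        print(particles.nbytes / len(packed))      # compression rate

        restored = np.frombuffer(blosc.decompress(packed),
                                 dtype=particles.dtype).reshape(particles.shape)
        assert np.array_equal(particles, restored) # lossless round trip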

  15. Data compression considerations for detectors with local intelligence

    International Nuclear Information System (INIS)

    Garcia-Sciveres, M; Wang, X

    2014-01-01

    This note summarizes the outcome of discussions about how data compression considerations apply to tracking detectors with local intelligence. The method for analyzing data compression efficiency is taken from a previous publication and applied to module characteristics from the WIT2014 workshop. We explore local intelligence and coupled layer structures in the language of data compression. In this context the original intelligent tracker concept of correlating hits to find matches of interest and discard others is just a form of lossy data compression. We now explore how these features (intelligence and coupled layers) can be exploited for lossless compression, which could enable full readout at higher trigger rates than previously envisioned, or even triggerless operation.

  16. Speech Data Compression using Vector Quantization

    OpenAIRE

    H. B. Kekre; Tanuja K. Sarode

    2008-01-01

    Mostly, transforms are used for speech data compression; these are lossy algorithms. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE and FCG. The results table s...
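
    A minimal LBG codebook trainer (split-and-refine Lloyd iterations) fits in a few lines of Python/NumPy; the frame dimension, codebook size and training data below are illustrative, and the KPE and FCG variants are not shown.

        import numpy as np

        def lbg(training, codebook_size, iters=20, eps=1e-4):
            """Linde-Buzo-Gray training: split the codebook, then refine
            with Lloyd iterations. training: (n_vectors, dim)."""
            codebook = training.mean(axis=0, keepdims=True)
            while len(codebook) < codebook_size:
                codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
                for _ in range(iters):             # Lloyd refinement
                    d = ((training[:, None, :] - codebook[None]) ** 2).sum(-1)
                    nearest = d.argmin(axis=1)
                    for k in range(len(codebook)):
                        members = training[nearest == k]
                        if len(members):
                            codebook[k] = members.mean(axis=0)
            return codebook

        frames = np.random.randn(2000, 10)         # stand-in speech frames
        cb = lbg(frames, 16)                       # 16-entry codebook
        codes = ((frames[:, None, :] - cb[None]) ** 2).sum(-1).argmin(axis=1)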

  17. Performance evaluation of emerging JPEGXR compression standard for medical images

    International Nuclear Information System (INIS)

    Basit, M.A.

    2012-01-01

    Medical images require lossless compression, as a small error due to lossy compression may be considered a diagnostic error. JPEG XR is the latest image compression standard, designed for a variety of applications, with support for both lossy and lossless modes. This paper provides an in-depth performance evaluation of the latest JPEG XR against existing image coding standards for medical images using lossless compression. Various medical images are used for evaluation, with ten images of each organ tested. The performance of JPEG XR is compared with JPEG 2000 and JPEG-LS using mean square error, peak signal-to-noise ratio, mean absolute error and the structural similarity index. JPEG XR shows improvements of 20.73 dB and 5.98 dB over JPEG-LS and JPEG 2000, respectively, for the various test images used in experimentation. (author)

  18. A Posteriori Restoration of Block Transform-Compressed Data

    Science.gov (United States)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.

  19. Signal Compression in Automatic Ultrasonic testing of Rails

    Directory of Open Access Journals (Sweden)

    Tomasz Ciszewski

    2007-01-01

    Full Text Available Full recording of the most important information carried by the ultrasonic signals allows statistical analysis of the measurement data. Statistical analysis of the results gathered during automatic ultrasonic tests yields data which, together with the features of the measuring method, differential lossy coding, and traditional lossless data compression methods (Huffman coding, dictionary coding), lead to a comprehensive, efficient data compression algorithm. The subject of the article is to present this algorithm and the benefits gained by using it in comparison to alternative compression methods. Storage of large amounts of data also allows the creation of an electronic catalogue of ultrasonic defects, which would in turn make it possible to train future qualification systems on the new solutions of the automatic rail-testing equipment.

  20. Adiabatic passage for a lossy two-level quantum system by a complex time method

    International Nuclear Information System (INIS)

    Dridi, G; Guérin, S

    2012-01-01

    Using a complex time method with the formalism of Stokes lines, we establish a generalization of the Davis–Dykhne–Pechukas formula which gives in the adiabatic limit the transition probability of a lossy two-state system driven by an external frequency-chirped pulse-shaped field. The conditions that allow this generalization are derived. We illustrate the result with the dissipative Allen–Eberly and Rosen–Zener models. (paper)

  1. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    Science.gov (United States)

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
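
    The "lossy plus residual" layering with a guaranteed maximum absolute error can be illustrated independently of the matrix/tensor machinery: quantizing the integer residual with an odd step of 2*eps + 1 keeps every reconstructed sample within eps of the original. The sketch below (illustrative names, entropy coding omitted) demonstrates the guarantee.

        import numpy as np

        def near_lossless_encode(x, approx, max_err):
            """Quantize the residual so |x - decode(...)| <= max_err."""
            step = 2 * max_err + 1
            return np.round((x - approx) / step).astype(np.int64)

        def near_lossless_decode(approx, residual_q, max_err):
            step = 2 * max_err + 1
            return approx + residual_q * step

        x = np.random.randint(-2048, 2048, size=4096)    # integer EEG-like samples
        approx = np.round(x / 8) * 8                     # stand-in lossy layer
        rq = near_lossless_encode(x, approx, max_err=2)  # to be entropy-coded
        x_rec = near_lossless_decode(approx, rq, max_err=2)
        assert np.abs(x - x_rec).max() <= 2              # guaranteed error bound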

  2. Lossless medical image compression with a hybrid coder

    Science.gov (United States)

    Way, Jing-Dar; Cheng, Po-Yuen

    1998-10-01

    The volume of medical image data is expected to increase dramatically in the next decade due to the large use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, a medical image must be recorded and transmitted losslessly before it reaches the user, to avoid wrong diagnoses due to lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretic bound and operate in near real-time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed one is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than entropy coders such as arithmetic, Huffman and Lempel-Ziv coders.

  3. Medical Image Compression Based on Region of Interest, With Application to Colon CT Images

    National Research Council Canada - National Science Library

    Gokturk, Salih

    2001-01-01

    ... in diagnostically important regions. This paper discusses a hybrid model of lossless compression in the region of interest, with high-rate, motion-compensated, lossy compression in other regions...

  4. Negative refraction of inhomogeneous waves in lossy isotropic media

    International Nuclear Information System (INIS)

    Fedorov, V Yu; Nakajima, T

    2014-01-01

    We theoretically study negative refraction of inhomogeneous waves at the interface of lossy isotropic media. We obtain explicit (up to the sign) expressions for the parameters of a wave transmitted through the interface between two lossy media characterized by complex permittivity and permeability. We show that the criterion of negative refraction that requires negative permittivity and permeability can be used only in the case of a homogeneous incident wave at the interface between a lossless and a lossy medium. In a more general situation, when the incident wave is inhomogeneous, or both media are lossy, the criterion of negative refraction becomes dependent on the incidence angle. Most interestingly, we show that negative refraction can be realized in conventional lossy materials (such as metals) if their interfaces are properly oriented. (paper)

  5. Data compression techniques and the ACR-NEMA digital interface communications standard

    International Nuclear Information System (INIS)

    Zielonka, J.S.; Blume, H.; Hill, D.; Horil, S.C.; Lodwick, G.S.; Moore, J.; Murphy, L.L.; Wake, R.; Wallace, G.

    1987-01-01

    Data compression offers the possibility of achieving high effective information transfer rates between devices and of efficient utilization of digital storage devices in meeting department-wide archiving needs. Accordingly, the ACR-NEMA Digital Imaging and Communications Standards Committee established a Working Group to develop a means to incorporate the optimal use of a wide variety of current compression techniques while remaining compatible with the standard. The proposed method allows the use of public-domain techniques, predetermined methods between devices already aware of the selected algorithm, and the ability for the originating device to specify algorithms and parameters prior to transmitting compressed data. Because of the latter capability, the technique has the potential for supporting many compression algorithms not yet developed or in common use. Both lossless and lossy methods can be implemented. In addition to a description of the overall structure of this proposal, several examples using current compression algorithms are given.

  6. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, since it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute the IQ for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurements versus the compression parameters over a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or it may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
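
    Step one of the scenario, sweeping a coder's parameter and recording the resulting IQ, might look like the following sketch, assuming Pillow and scikit-image are available; only JPEG is swept, the test image is synthetic, and the low-order polynomial stands in for whatever regression family is actually fitted.

        import io
        import numpy as np
        from PIL import Image
        from skimage.metrics import structural_similarity

        def jpeg_quality_curve(img, qualities=range(10, 96, 5)):
            """Return (quality, compression ratio, SSIM) triples for a
            sweep of Pillow's JPEG quality settings."""
            points = []
            for q in qualities:
                buf = io.BytesIO()
                Image.fromarray(img).save(buf, format='JPEG', quality=q)
                buf.seek(0)
                decoded = np.asarray(Image.open(buf))
                points.append((q,
                               img.nbytes / buf.getbuffer().nbytes,
                               structural_similarity(img, decoded)))
            return np.array(points)

        img = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in image
        curve = jpeg_quality_curve(img)
        model = np.polyfit(curve[:, 0], curve[:, 2], deg=2)      # quality -> SSIM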

  7. Optimal design of lossy bandgap structures

    DEFF Research Database (Denmark)

    Jensen, Jakob Søndergaard

    2004-01-01

    The method of topology optimization is used to design structures for wave propagation with one lossy material component. Optimized designs for scalar elastic waves are presented for minimum wave transmission as well as for maximum wave energy dissipation. The structures that are obtained are of the 1D or 2D bandgap type, depending on the objective and the material parameters.

  8. Fixed-Rate Compressed Floating-Point Arrays.

    Science.gov (United States)

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
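
    A toy fixed-rate block coder conveys the core idea, though it replaces the paper's lifted orthogonal transform and embedded coding with plain shared-exponent quantization; everything below is an illustrative simplification, not zfp itself.

        import numpy as np

        def encode_block(block, bits):
            """Align a block of floats to a common power-of-two exponent and
            keep a fixed number of bits per value, so storage per block is
            constant regardless of content."""
            e = np.frexp(np.abs(block).max())[1]   # shared block exponent
            q = np.round(block * 2.0 ** (bits - 1 - e)).astype(np.int32)
            return e, q                            # fixed cost: e plus bits*len(block)

        def decode_block(e, q, bits):
            return q.astype(np.float64) * 2.0 ** (e - (bits - 1))

        block = np.random.randn(4)                 # a 4^d block with d = 1
        e, q = encode_block(block, bits=12)
        rec = decode_block(e, q, bits=12)          # error bounded by the shared scale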

  9. High Bit-Depth Medical Image Compression With HEVC.

    Science.gov (United States)

    Parikh, Saurin S; Ruiz, Damian; Kalva, Hari; Fernandez-Escribano, Gerardo; Adzic, Velibor

    2018-03-01

    Efficient storing and retrieval of medical images has a direct impact on reducing costs and improving access in cloud-based health care services. JPEG 2000 is currently the commonly used compression format for medical images shared using the DICOM standard. However, newer formats such as high efficiency video coding (HEVC) can provide better compression efficiency than JPEG 2000. Furthermore, JPEG 2000 is not suitable for efficiently storing image series and 3-D imagery. Using HEVC, a single format can support all forms of medical images. This paper presents the use of HEVC for diagnostically acceptable medical image compression, focusing on compression efficiency compared to JPEG 2000. Diagnostically acceptable lossy compression and the complexity of high bit-depth medical image compression are studied. Based on an established medically acceptable compression range for JPEG 2000, this paper establishes an acceptable HEVC compression range for medical imaging applications. Experimental results show that using HEVC can increase compression performance, compared to JPEG 2000, by over 54%. Along with this, a new method for reducing the computational complexity of HEVC encoding for medical images is proposed. Results show that HEVC intra encoding complexity can be reduced by over 55% with a negligible increase in file size.

  10. Resolution limits of migration and linearized waveform inversion images in a lossy medium

    KAUST Repository

    Schuster, Gerard T.; Dutta, Gaurav; Li, Jing

    2017-01-01

    The vertical- and horizontal-resolution limits Δz_lossy and Δx_lossy of post-stack migration and linearized waveform inversion images are derived for lossy data in the far-field approximation. Unlike the horizontal resolution limit Δx ∝ λz/L in a lossless medium, which worsens linearly in depth z, Δx_lossy ∝ z²/(QL) worsens quadratically with depth for a medium with small Q values. Here, Q is the quality factor, λ is the effective wavelength, L is the recording aperture, and loss in the resolution formulae is accounted for by replacing λ with z/Q. In contrast, the lossy vertical-resolution limit Δz_lossy only worsens linearly in depth, compared to Δz ∝ λ for a lossless medium. For both the causal and acausal Q models, the resolution limits are linearly proportional to 1/Q for small Q. These theoretical predictions are validated with migration images computed from lossy data.

  12. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    Science.gov (United States)

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Moving Picture Experts Group 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression of emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95 % confidence intervals using the method of general estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1 with combined sensitivities of 90 % (95 % confidence interval, 79-95), 94 % (87-97), and 100 % (93-100), respectively. Combined specificities were 100 % (85-100), 100 % (85-100), and 96 % (78-99), respectively. The introduction of CT in combat hospitals, with increasing detectors and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.

  13. Charge accumulation in lossy dielectrics: a review

    DEFF Research Database (Denmark)

    Rasmussen, Jørgen Knøster; McAllister, Iain Wilson; Crichton, George C

    1999-01-01

    At present, the phenomenon of charge accumulation in solid dielectrics is under intense experimental study. Using a field theoretical approach, we review the basis for charge accumulation in lossy dielectrics. Thereafter, this macroscopic approach is applied to planar geometries such that the mat...

  14. Analysis of the temporal electric fields in lossy dielectric media

    DEFF Research Database (Denmark)

    McAllister, Iain Wilson; Crichton, George C

    1991-01-01

    The time-dependent electric fields associated with lossy dielectric media are examined. The analysis illustrates that, with respect to the basic time constant, these lossy media can take a considerable time to attain a steady-state condition. Time-dependent field enhancement factors are considered, and inherent surface-charge densities quantified. The calculation of electrostatic forces on a free, lossy dielectric particle is illustrated. An extension to the basic analysis demonstrates that, on reversal of polarity, the resultant tangential field at the interface could play a decisive role...

  15. Analysis of tractable distortion metrics for EEG compression applications

    International Nuclear Information System (INIS)

    Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando; Cárdenas-Barrera, Julián

    2012-01-01

    Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality of the reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to the allowable noise in EEG recordings. As a result, expert clinicians may have difficulty interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as a target parameter in EEG compression allows both clinicians and scientists to infer whether the coding error is clinically acceptable or not, at no cost to the compression ratio. (paper)
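
    The contrast between the two criteria is easy to demonstrate: RMSE reports distortion in the signal's own units (e.g., microvolts), while PRD rescales the same error by the signal power. A small check on synthetic data (illustrative values):

        import numpy as np

        def rmse(x, y):
            return np.sqrt(np.mean((x - y) ** 2))

        def prd(x, y):
            return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

        rng = np.random.default_rng(0)
        err = rng.normal(0.0, 2.0, size=10_000)        # ~2 uV of coding noise
        for scale in (10.0, 100.0):                    # low- vs high-amplitude EEG
            x = scale * rng.standard_normal(10_000)
            print(rmse(x, x + err), prd(x, x + err))   # RMSE ~2 uV; PRD varies 10x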

  16. LOW COMPLEXITY HYBRID LOSSY TO LOSSLESS IMAGE CODER WITH COMBINED ORTHOGONAL POLYNOMIALS TRANSFORM AND INTEGER WAVELET TRANSFORM

    Directory of Open Access Journals (Sweden)

    R. Krishnamoorthy

    2012-05-01

    Full Text Available In this paper, a new lossy-to-lossless image coding scheme combining an Orthogonal Polynomials Transform and an Integer Wavelet Transform is proposed. The Lifting Scheme based Integer Wavelet Transform (LS-IWT) is first applied to the image in order to reduce blocking artifacts and memory demand. The Embedded Zero tree Wavelet (EZW) subband coding algorithm is used in this work for progressive image coding, which achieves efficient bit rate reduction. The computational complexity of lower subband coding in the EZW algorithm is reduced in this work with a new integer-based Orthogonal Polynomials transform coding. Normalization and mapping are performed on the subbands of the image to exploit subjective redundancy, and the zero-tree structure is obtained for EZW coding, so the computational complexity is greatly reduced. The experimental results also show that efficient bit rate reduction is achieved for both lossy and lossless compression when compared with existing techniques.
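
    For reference, one level of a reversible integer lifting step of the LeGall 5/3 kind, the sort of step LS-IWT schemes build on, is sketched below; periodic boundary handling and all names are illustrative, and this is not the authors' exact transform.

        import numpy as np

        def lift53_forward(x):
            """One level of 5/3 integer lifting; len(x) must be even."""
            s, d = x[0::2].copy(), x[1::2].copy()
            d -= np.floor((s + np.roll(s, -1)) / 2).astype(x.dtype)     # predict
            s += np.floor((d + np.roll(d, 1) + 2) / 4).astype(x.dtype)  # update
            return s, d

        def lift53_inverse(s, d):
            s = s - np.floor((d + np.roll(d, 1) + 2) / 4).astype(s.dtype)
            d = d + np.floor((s + np.roll(s, -1)) / 2).astype(s.dtype)
            x = np.empty(2 * len(s), dtype=s.dtype)
            x[0::2], x[1::2] = s, d
            return x

        x = np.random.randint(0, 256, size=64).astype(np.int64)
        assert np.array_equal(x, lift53_inverse(*lift53_forward(x)))    # reversible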

  17. Boiler: lossy compression of RNA-seq alignments using coverage vectors.

    Science.gov (United States)

    Pritt, Jacob; Langmead, Ben

    2016-09-19

    We describe Boiler, a new software tool for compressing and querying large collections of RNA-seq alignments. Boiler discards most per-read data, keeping only a genomic coverage vector plus a few empirical distributions summarizing the alignments. Since most per-read data is discarded, storage footprint is often much smaller than that achieved by other compression tools. Despite this, the most relevant per-read data can be recovered; we show that Boiler compression has only a slight negative impact on results given by downstream tools for isoform assembly and quantification. Boiler also allows the user to pose fast and useful queries without decompressing the entire file. Boiler is free open source software available from github.com/jpritt/boiler. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. Lossy/lossless coding of bi-level images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1997-01-01

    Summary form only given. We present improvements to a general type of lossless, lossy, and refinement coding of bi-level images (Martins and Forchhammer, 1996). Loss is introduced by flipping pixels. The pixels are coded using arithmetic coding of conditional probabilities obtained using a template, as is known from JBIG and proposed in JBIG-2 (Martins and Forchhammer). Our new state-of-the-art results are obtained using the more general free tree instead of a template. Also we introduce multiple refinement template coding. The lossy algorithm is analogous to the greedy `rate...

  19. Three dimensional range geometry and texture data compression with space-filling curves.

    Science.gov (United States)

    Chen, Xia; Zhang, Song

    2017-10-16

    This paper presents a novel method to effectively store three-dimensional (3D) data and 2D texture data into a regular 24-bit image. The proposed method uses the Hilbert space-filling curve to map the normalized unwrapped phase map to two 8-bit color channels, and saves the third color channel for 2D texture storage. By further leveraging existing 2D image and video compression techniques, the proposed method can achieve high compression ratios while effectively preserving data quality. Since the encoding and decoding processes can be applied to most of the current 2D media platforms, this proposed compression method can make 3D data storage and transmission available for many electrical devices without requiring special hardware changes. Experiments demonstrate that if a lossless 2D image/video format is used, both original 3D geometry and 2D color texture can be accurately recovered; if lossy image/video compression is used, only black-and-white or grayscale texture can be properly recovered, but much higher compression ratios (e.g., 1543:1 against the ASCII OBJ format) are achieved with slight loss of 3D geometry quality.

  20. Effects of Different Compression Techniques on Diagnostic Accuracies of Breast Masses on Digitized Mammograms

    International Nuclear Information System (INIS)

    Zhigang Liang; Xiangying Du; Jiabin Liu; Yanhui Yang; Dongdong Rong; Xinyu Yao; Kuncheng Li

    2008-01-01

    Background: The JPEG 2000 compression technique has recently been introduced into the medical imaging field. It is critical to understand the effects of this technique on the detection of breast masses on digitized images by human observers. Purpose: To evaluate whether lossless and lossy techniques affect the diagnostic results of malignant and benign breast masses on digitized mammograms. Material and Methods: A total of 90 screen-film mammograms including craniocaudal and lateral views obtained from 45 patients were selected by two non-observing radiologists. Of these, 22 cases were benign lesions and 23 cases were malignant. The mammographic films were digitized by a laser film digitizer, and compressed to three levels (lossless and lossy 20:1 and 40:1) using the JPEG 2000 wavelet-based image compression algorithm. Four radiologists with 10-12 years' experience in mammography interpreted the original and compressed images. The time interval was 3 weeks for each reading session. A five-point malignancy scale was used, with a score of 1 corresponding to definitely not a malignant mass, a score of 2 referring to not a malignant mass, a score of 3 meaning possibly a malignant mass, a score of 4 being probably a malignant mass, and a score of 5 interpreted as definitely a malignant mass. The radiologists' performance was evaluated using receiver operating characteristic analysis. Results: The average Az values for all radiologists decreased from 0.8933 for the original uncompressed images to 0.8299 for the images compressed at 40:1. This difference was not statistically significant. The detection accuracy of the original images was better than that of the compressed images, and the Az values decreased with increasing compression ratio. Conclusion: Digitized mammograms compressed at 40:1 could be used to substitute original images in the diagnosis of breast cancer

  1. Low complexity lossless compression of underwater sound recordings.

    Science.gov (United States)

    Johnson, Mark; Partan, Jim; Hurst, Tom

    2013-03-01

    Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., flac) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.
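
    The entry does not spell out the algorithm, but a typical low-complexity lossless recipe for such recorders pairs a simple predictor with Rice coding of the residuals; the sketch below is that generic recipe, with all parameters illustrative.

        import numpy as np

        def zigzag(x):
            """Map signed residuals to non-negative integers (int64 input)."""
            return (x << 1) ^ (x >> 63)

        def rice_encode(values, k):
            """Rice coding: unary quotient plus k raw remainder bits."""
            bits = []
            for v in values:
                q, r = int(v) >> k, int(v) & ((1 << k) - 1)
                bits.extend([1] * q + [0])         # unary part, 0-terminated
                bits.extend((r >> i) & 1 for i in range(k - 1, -1, -1))
            return bits

        samples = np.cumsum(np.random.randint(-50, 50, 256)).astype(np.int64)
        residuals = np.diff(samples, prepend=samples[0])  # first-order predictor
        code = rice_encode(zigzag(residuals), k=4)
        print(len(code) / (len(samples) * 16))     # bits per sample vs 16-bit raw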

  2. Algorithm for Compressing Time-Series Data

    Science.gov (United States)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
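
    Using NumPy's Chebyshev routines, the per-block fit-and-truncate step can be sketched as follows; note that chebfit is a least-squares fit, so the equal-error and min-max properties described above hold only approximately, and the block length and degree are illustrative.

        import numpy as np
        from numpy.polynomial import chebyshev as cheb

        def compress_block(t, x, degree=8):
            """Fit one fitting interval with a truncated Chebyshev series;
            only the degree+1 coefficients need to be transmitted."""
            u = 2 * (t - t[0]) / (t[-1] - t[0]) - 1     # map interval to [-1, 1]
            return cheb.chebfit(u, x, degree)

        def decompress_block(coeffs, n):
            return cheb.chebval(np.linspace(-1, 1, n), coeffs)

        t = np.linspace(0, 1, 256)                      # one block of telemetry
        x = np.sin(12 * t) + 0.05 * np.random.randn(256)
        c = compress_block(t, x)                        # 256 samples -> 9 numbers
        x_hat = decompress_block(c, len(t))
        print(np.abs(x - x_hat).max())                  # near-uniform residual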

  3. Fundamental study of compression for movie files of coronary angiography

    Science.gov (United States)

    Ando, Takekazu; Tsuchiya, Yuichiro; Kodera, Yoshie

    2005-04-01

    When network distribution of movie files is considered, lossy-compressed movie files with small file sizes can be useful. We chose three kinds of coronary stricture movies with different motion speeds as examination objects: slow, normal, and fast heart rates. Movies in MPEG-1, DivX 5.11, WMV9 (Windows Media Video 9), and WMV9-VCM (Windows Media Video 9-Video Compression Manager) formats were made from the three kinds of AVI-format movies with different motion speeds. Five kinds of movies, the four compressed versions plus the uncompressed AVI used in place of the DICOM format, were evaluated by Thurstone's method. The evaluation factors for the movies were "sharpness, granularity, contrast, and comprehensive evaluation." For the virtual bradycardia movie, AVI received the best evaluation on all factors except granularity. For the virtual normal movie, the best compression technique differed across evaluation factors. For the virtual tachycardia movie, MPEG-1 received the best evaluation on all factors except contrast. Which compression format is best thus depends on the speed of the movie, because the compression algorithms differ, in particular in their use of inter-frame compression. Movie compression algorithms combine inter-frame and intra-frame compression, and since each algorithm affects the image differently, the relation between the compression algorithm and our results needs further examination.

  4. On the Separation of Quantum Noise for Cardiac X-Ray Image Compression

    NARCIS (Netherlands)

    de Bruijn, F.J.; Slump, Cornelis H.

    1996-01-01

    In lossy medical image compression, the requirements for the preservation of diagnostic integrity cannot be easily formulated in terms of a perceptual model, especially since human visual perception depends on numerous factors such as the viewing conditions and psycho-visual factors.

  5. Fractal Image Compression Based on High Entropy Values Technique

    Directory of Open Access Journals (Sweden)

    Douaa Younis Abbaas

    2018-04-01

    Full Text Available Many attempts have been made to improve the encoding stage of fractal image compression (FIC), because it is time-consuming. These attempts work by reducing the size of the search pool for range-domain matching, but most of them lead to poor quality or a lower compression ratio for the reconstructed image. This paper aims to improve the performance of the full search algorithm by combining FIC (lossy compression) with a lossless technique (in this case, entropy coding). The entropy technique reduces the size of the domain pool (i.e., the number of domain blocks) based on the entropy value of each range block and domain block; the results of the full search algorithm and the proposed entropy-based algorithm are then compared to see which gives the best results, namely reduced encoding time with acceptable values of both compression quality parameters: CR (compression ratio) and PSNR (image quality). The experimental results show that the proposed entropy technique reduces the encoding time while keeping the compression ratio and the reconstructed image quality as good as possible.
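
    One plausible reading of the entropy-based pruning, keeping only the domain blocks informative enough to matter in range-domain matching, is sketched below; the block size, keep fraction and ranking rule are illustrative assumptions rather than the authors' exact criterion.

        import numpy as np

        def block_entropy(block):
            """Shannon entropy of a block's grey-level histogram."""
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            p = hist[hist > 0] / block.size
            return -np.sum(p * np.log2(p))

        def prune_domain_pool(image, size=8, keep_fraction=0.25):
            """Rank domain blocks by entropy and keep the top fraction."""
            h, w = image.shape
            blocks = [(r, c) for r in range(0, h - size + 1, size)
                             for c in range(0, w - size + 1, size)]
            blocks.sort(key=lambda rc: -block_entropy(
                image[rc[0]:rc[0] + size, rc[1]:rc[1] + size]))
            return blocks[:max(1, int(len(blocks) * keep_fraction))]

        img = np.random.randint(0, 256, (64, 64))
        domain_pool = prune_domain_pool(img)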

  6. A lossy graph model for delay reduction in generalized instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.

    2014-06-01

    The problem of minimizing the decoding delay in generalized instantly decodable network coding (G-IDNC) for both perfect and lossy feedback scenarios is formulated as a maximum weight clique problem over the G-IDNC graph. In this letter, we introduce a new lossy G-IDNC graph (LG-IDNC) model to further minimize the decoding delay in lossy feedback scenarios. Whereas the G-IDNC graph represents only doubtless combinable packets, the LG-IDNC graph also represents uncertain packet combinations, arising from lossy feedback events, when the expected decoding delay of XORing them among themselves or with other certain packets is lower than that expected when sending these packets separately. We compare the decoding delay performance of LG-IDNC and G-IDNC graphs through extensive simulations. Numerical results show that our new LG-IDNC graph formulation outperforms the G-IDNC graph formulation in all lossy feedback situations and achieves significant improvement in the decoding delay, especially when the feedback erasure probability is higher than the packet erasure probability. © 2012 IEEE.

  7. CFOA-Based Lossless and Lossy Inductance Simulators

    Directory of Open Access Journals (Sweden)

    F. Kaçar

    2011-09-01

    An inductance simulator is a useful component in circuit synthesis theory, especially for analog signal processing applications such as filters, chaotic oscillator design, analog phase shifters, and cancellation of parasitic elements. In this study, four new inductance simulator topologies employing a single current feedback operational amplifier are presented. The presented topologies require few passive components. The first topology is intended for negative inductance simulation, the second for lossy series inductance, the third for lossy parallel inductance, and the fourth for negative parallel (-R)(-L)(-C) simulation. The performance of the proposed CFOA-based inductance simulators is demonstrated on both a second-order low-pass filter and an inductance cancellation circuit. PSPICE simulations are given to verify the theoretical analysis.

  8. Quality Evaluation and Nonuniform Compression of Geometrically Distorted Images Using the Quadtree Distortion Map

    Directory of Open Access Journals (Sweden)

    Cristina Costa

    2004-09-01

    The paper presents an analysis of the effects of lossy compression algorithms applied to images affected by geometrical distortion. It is shown that the encoding-decoding process results in a nonhomogeneous degradation of the geometrically corrected image, due to the different amount of information associated with each pixel. A distortion measure named the quadtree distortion map (QDM), able to quantify this aspect, is proposed. Furthermore, the QDM is exploited to achieve adaptive compression of geometrically distorted pictures, ensuring uniform quality in the final image. Tests are performed using the JPEG and JPEG2000 coding standards in order to quantitatively and qualitatively assess the performance of the proposed method.

  9. MP3 compression of Doppler ultrasound signals.

    Science.gov (United States)

    Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W

    2003-01-01

    The effect of lossy MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: (1) phase quadrature and (2) stereo audio directional output. A total of eleven 10-s acquisitions of the Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in brackets): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power and the ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression of digital Doppler US signals is feasible at 128 kbps, a compression ratio of 11:1, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology
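
    The spectral parameters named above are standard moments of the Doppler power spectrum. As a rough sketch (not the authors' processing chain), one short-time window of the digitized signal could be characterized as below; the Hann window and real-valued input are assumptions.

```python
# Sketch: spectral moments of one Doppler ultrasound window.
import numpy as np

def spectral_parameters(x, fs=44100):
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = spec.sum()                                         # integrated power
    mean_f = (f * spec).sum() / power                          # mean frequency
    width = np.sqrt(((f - mean_f) ** 2 * spec).sum() / power)  # spectral width
    peak_f = f[spec.argmax()]                                  # peak frequency
    return mean_f, width, peak_f, power
```

    Frequencies map to velocities through the Doppler equation, so comparing these moments before and after MP3 coding quantifies the clinical impact of compression.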

  10. Time domain analysis of thin-wire antennas over lossy ground using the reflection-coefficient approximation

    Science.gov (United States)

    Fernández Pantoja, M.; Yarovoy, A. G.; Rubio Bretones, A.; González García, S.

    2009-12-01

    This paper presents a procedure to extend the method of moments in the time domain for the transient analysis of thin-wire antennas to include those cases where the antennas are located over a lossy half-space. This extended technique is based on the reflection coefficient (RC) approach, which approximates the fields incident on the ground interface as plane waves and calculates the time domain RC using the inverse Fourier transform of the Fresnel equations. The implementation presented in this paper uses general expressions for the RC which extend its range of applicability to lossy grounds, and is proven to be accurate and fast for antennas located not too near the ground. The resulting general-purpose procedure, able to treat arbitrarily oriented thin-wire antennas, is appropriate for all kinds of half-spaces, including lossy cases, and it has turned out to be as computationally fast when solving the problem of an arbitrary ground as when dealing with a perfect electric conductor ground plane. Results show a numerical validation of the method for different half-spaces, paying special attention to the influence of the antenna-to-ground distance on the accuracy of the results.

  11. A compressive sensing-based computational method for the inversion of wide-band ground penetrating radar data

    Science.gov (United States)

    Gelmini, A.; Gottardi, G.; Moriyama, T.

    2017-10-01

    This work presents an innovative computational approach for the inversion of wideband ground penetrating radar (GPR) data. The retrieval of the dielectric characteristics of sparse scatterers buried in a lossy soil is performed by combining a multi-task Bayesian compressive sensing (MT-BCS) solver and a frequency hopping (FH) strategy. The developed methodology is able to benefit from the regularization capabilities of the MT-BCS as well as to exploit the multi-chromatic informative content of GPR measurements. A set of numerical results is reported in order to assess the effectiveness of the proposed GPR inverse scattering technique, as well as to compare it to a simpler single-task implementation.

  12. Cloud Optimized Image Format and Compression

    Science.gov (United States)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud based image storage and processing require re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIFF and NITF were developed in the heyday of the desktop and assume fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These assumptions no longer hold in cloud based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be accessed efficiently using JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.
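
    The controlled-lossy behavior attributed to LERC can be illustrated with a per-block quantizer: if values are rounded to a grid whose step is twice the user's error tolerance, every reconstructed value is guaranteed to lie within that tolerance. This is a sketch of the principle only, not the MRF/LERC file format.

```python
# Error-bounded per-block quantization sketch (LERC-style principle).
import numpy as np

def quantize_block(block, max_error):
    lo = float(block.min())
    q = np.round((block - lo) / (2.0 * max_error)).astype(np.uint32)
    return lo, q                       # offset + small integers, cheap to pack

def dequantize_block(lo, q, max_error):
    return lo + q * 2.0 * max_error    # within max_error of the original
```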

  13. Edge-Based Image Compression with Homogeneous Diffusion

    Science.gov (United States)

    Mainberger, Markus; Weickert, Joachim

    It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
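
    The decoding step is a discrete Laplace solve with the stored edge pixels as boundary conditions. A minimal sketch in Python (Jacobi iteration, wrap-around borders for brevity; the iteration count is illustrative):

```python
# Homogeneous diffusion inpainting sketch: solve the Laplace equation
# for the unknown pixels while the encoded edge pixels stay fixed.
import numpy as np

def laplace_inpaint(img, known, iterations=5000):
    """img: 2-D array, valid where known == True; known: boolean mask."""
    u = img.astype(float).copy()
    for _ in range(iterations):        # Jacobi sweeps toward steady state
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[~known] = avg[~known]        # update only the unknown pixels
    return u
```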

  14. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren Otto

    1999-01-01

    We present general and unified algorithms for lossy/lossless coding of bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard, the conditioning may be specified by a template. For better compression, the more general...... to the specialized soft pattern matching techniques which work better for text. Template-based refinement coding is applied for lossy-to-lossless refinement. Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless......, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG2, an emerging international standard for lossless/lossy compression of bi-level images....

  15. Development of the town Viljandi in light of the studies at Lossi street / Eero Heinloo

    Index Scriptorium Estoniae

    Heinloo, Eero

    2015-01-01

    The investigations showed that the earliest deposit indicating permanent settlement extended from the intersection of Lossi and Kauba streets to the area between the buildings at Lossi St 3 and 4. At present, the beginning of permanent settlement cannot be dated earlier than the mid-13th century. The formation of Lossi street can be dated to the 1270s or 1280s. By the second quarter of the 14th century at the latest, the street's wooden paving was replaced with cobblestone paving.

  16. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren Otto

    1997-01-01

    We present general and unified algorithms for lossy/lossless coding of bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard, the conditioning may be specified by a template. For better compression, the more general....... Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG-2, an emerging international standard for lossless/lossy compression of bi-level images....

  17. Contributions to HEVC Prediction for Medical Image Compression

    OpenAIRE

    Guarda, André Filipe Rodrigues

    2016-01-01

    Medical imaging technology and applications are continuously evolving, dealing with images of increasing spatial and temporal resolutions, which allow easier and more accurate medical diagnosis. However, this increase in resolution demands a growing amount of data to be stored and transmitted. Despite the high coding efficiency achieved by the most recent image and video coding standards in lossy compression, they are not well suited for quality-critical medical image compressi...

  18. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for the distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which makes the proposed algorithm achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.

  19. Cloud solution for histopathological image analysis using region of interest based compression.

    Science.gov (United States)

    Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana

    2017-07-01

    Recent technological gains have led to the adoption of innovative cloud based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of Cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole slide image contains many multi-resolution images stored in a pyramidal structure, with the highest resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). This paper proposes a novel method of extracting the tissue region, applying lossless compression to this region, and applying lossy compression to the empty regions. The resulting compression ratio, with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the Cloud.
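
    A minimal sketch of the hybrid idea follows, assuming an 8-bit grayscale slide where tissue is darker than the empty background; the threshold, the zlib stand-in for the lossless coder, and the 16-level background proxy are all illustrative choices, not the paper's method.

```python
# Region-of-interest hybrid compression sketch for a whole slide image.
import zlib
import numpy as np

def roi_compress(img, background=240):
    mask = img < background                    # crude tissue segmentation
    tissue = zlib.compress(img[mask].tobytes(), level=9)   # lossless ROI
    coarse = (img >> 4).astype(np.uint8)       # lossy 16-level background
    return mask.shape, np.packbits(mask), tissue, zlib.compress(coarse.tobytes())
```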

  20. Compressing climate model simulations: reducing storage burden while preserving information

    Science.gov (United States)

    Hammerling, Dorit; Baker, Allison; Xu, Haiying; Clyne, John; Li, Samuel

    2017-04-01

    Climate models, which are run at high spatial and temporal resolutions, generate massive quantities of data. As our computing capabilities continue to increase, storing all of the generated data is becoming a bottleneck, which negatively affects scientific progress. It is thus important to develop methods for representing the full datasets by smaller compressed versions, which still preserve all the critical information and, as an added benefit, allow for faster read and write operations during analysis work. Traditional lossy compression algorithms, as for example used for image files, are not necessarily ideally suited for climate data. While visual appearance is relevant, climate data has additional critical features such as the preservation of extreme values and spatial and temporal gradients. Developing alternative metrics to quantify information loss in a manner that is meaningful to climate scientists is an ongoing process still in its early stages. We will provide an overview of current efforts to develop such metrics to assess existing algorithms and to guide the development of tailored compression algorithms to address this pressing challenge.

  1. Solving for the capacity of a noisy lossy bosonic channel via the master equation

    International Nuclear Information System (INIS)

    Qin Tao; Zhao Meisheng; Zhang Yongde

    2006-01-01

    We discuss the noisy lossy bosonic channel by exploiting master equations. The capacity of the noisy lossy bosonic channel and the criterion for the optimal capacities are derived. Consequently, we verify that master equations can be a tool to study bosonic channels.

  2. Rembrandt Kadrioru lossis / Jüri Hain

    Index Scriptorium Estoniae

    Hain, Jüri, 1941-

    2000-01-01

    The history of the ceiling painting by an unknown master in the domed hall of Kadriorg Palace. The painting is based on Rembrandt's "Diana Bathing" ("Diana suplus"). The ceiling painting was made after a copper engraving by Magdalena de Passe (1600-1638) of Rembrandt's painting. Rembrandt in turn drew on two engravings by Antonio Tempesta (1555-1630), the latter of which derives from a painting by Otto van Veen (1556-1629).

  3. Compression of TPC data in the ALICE experiment

    International Nuclear Information System (INIS)

    Nicolaucig, A.; Mattavelli, M.; Carrato, S.

    2002-01-01

    In this paper two algorithms for the compression of the data generated by the Time Projection Chamber (TPC) detector of the ALICE experiment at CERN are described. The first algorithm is based on a lossless source code modeling technique, i.e. the original TPC signal information can be reconstructed without errors at the decompression stage. The source model exploits the temporal correlation that is present in the TPC data to reduce the entropy of the source. The second algorithm is based on a source model which is lossy if samples of the TPC signal are considered one by one. Conversely, the source model is lossless or quasi-lossless if some physical quantities that are of main interest for the experiment are considered. These quantities are the area and the location of the center of mass of each TPC signal pulse. Entropy coding is applied to the set of events defined by the two source models to reduce the bit rate to the corresponding source entropy. Using TPC data simulated according to the expected ALICE TPC performance, the lossless and the lossy compression algorithms achieve a data reduction to 49.2% and to between 34.2% and 23.7% of the original data rate, respectively. The number of operations per input symbol required to implement the compression stage for both algorithms is relatively low, so that a real-time implementation embedded in the TPC data acquisition chain using low-cost integrated electronics is a realistic option to effectively reduce the data storage cost of the ALICE experiment.

  4. MULTISTAGE BITRATE REDUCTION IN ABSOLUTE MOMENT BLOCK TRUNCATION CODING FOR IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    S. Vimala

    2012-05-01

    Absolute Moment Block Truncation Coding (AMBTC) is one of the lossy image compression techniques. The computational complexity involved is low and the quality of the reconstructed images is appreciable. The normal AMBTC method requires 2 bits per pixel (bpp). In this paper, two novel ideas have been incorporated into the AMBTC method to improve the coding efficiency. Generally, quality degrades with a reduction in the bit rate. In the proposed method, however, the quality of the reconstructed image increases as the bit rate decreases. The proposed method has been tested with standard images like Lena, Barbara, Bridge, Boats and Cameraman. The results obtained are better than those of the existing AMBTC method in terms of bit rate and the quality of the reconstructed images.
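
    Baseline AMBTC itself is compact enough to state in full. The sketch below encodes one block as a bitmap plus two reconstruction levels (the conditional means of pixels below and at-or-above the block mean, equivalent to the mean/first-absolute-moment formulation), which is where the 2 bpp baseline cost comes from: one bitmap bit per pixel plus two levels per block. The paper's bit-rate-reduction stages are not reproduced here.

```python
# Plain AMBTC encode/decode sketch for one image block.
import numpy as np

def ambtc_encode(block):
    mean = block.mean()
    bitmap = block >= mean
    hi = block[bitmap].mean()                      # level for "high" pixels
    lo = block[~bitmap].mean() if (~bitmap).any() else hi
    return lo, hi, bitmap

def ambtc_decode(lo, hi, bitmap):
    return np.where(bitmap, hi, lo)                # two-level reconstruction
```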

  5. Quantum optics of lossy asymmetric beam splitters

    NARCIS (Netherlands)

    Uppu, Ravitej; Wolterink, Tom; Tentrup, Tristan Bernhard Horst; Pinkse, Pepijn Willemszoon Harry

    2016-01-01

    We theoretically investigate quantum interference of two single photons at a lossy asymmetric beam splitter, the most general passive 2×2 optical circuit. The losses in the circuit result in a non-unitary scattering matrix with a non-trivial set of constraints on the elements of the scattering matrix.

  6. A new method for robust video watermarking resistant against key estimation attacks

    Science.gov (United States)

    Mitekin, Vitaly

    2015-12-01

    This paper presents a new method for high-capacity robust digital video watermarking, together with algorithms for embedding and extraction of the watermark based on this method. The proposed method uses password-based two-dimensional pseudonoise arrays for watermark embedding, making brute-force attacks aimed at steganographic key retrieval mostly impractical. The proposed algorithm for generating 2-dimensional "noise-like" watermarking patterns also significantly decreases the watermark collision probability (i.e., the probability of correct watermark detection and extraction using an incorrect steganographic key or password). Experimental research provided in this work also shows that a simple correlation-based watermark detection procedure can be used, providing watermark robustness against lossy compression and watermark estimation attacks. At the same time, without decreasing the robustness of the embedded watermark, the average complexity of a brute-force key retrieval attack can be increased to 10^14 watermark extraction attempts (compared to 10^4-10^6 for known robust watermarking schemes). Experimental results also show that at the lowest embedding intensity the watermark preserves its robustness against lossy compression of the host video while preserving higher video quality (PSNR up to 51 dB) compared to known wavelet-based and DCT-based watermarking algorithms.
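
    The correlation detector the abstract refers to can be sketched as follows, assuming additive embedding of a key-seeded ±1 pseudonoise pattern; the strength and threshold values are illustrative, not the paper's parameters.

```python
# Correlation-based watermark embed/detect sketch.
import zlib
import numpy as np

def pn_pattern(shape, password):
    seed = zlib.crc32(password.encode())       # deterministic key -> seed
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(frame, password, strength=2.0):
    return frame + strength * pn_pattern(frame.shape, password)

def detect(frame, password, threshold=0.5):
    w = pn_pattern(frame.shape, password)
    f = frame - frame.mean()
    return (f * w).mean() > threshold          # mean ~ strength if present
```

    Because the statistic averages over every pixel, moderate lossy compression perturbs it far less than it perturbs individual pixels, which is why such detectors survive recompression.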

  7. Application specific compression: final report.

    Energy Technology Data Exchange (ETDEWEB)

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies that minimize the impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and a high-pass filter to the data, converting the data into coefficients that maintain spatial as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high-frequency, low-amplitude portion. This approach is well suited to our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques at higher compression levels because of the lower entropy and the significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression is data-set dependent, for the data sets we studied compression factors over 10 were found to be satisfactory, where conventional lossless techniques achieved levels of less than 3.
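
    A sketch of that coefficient-zeroing strategy, using the PyWavelets package on a 1-D signal; the db4 wavelet, decomposition level, and keep fraction are assumptions for illustration.

```python
# Wavelet compression sketch: zero small coefficients, keep the large ones.
import numpy as np
import pywt

def wavelet_compress(signal, wavelet='db4', level=4, keep=0.2):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate([np.abs(c) for c in coeffs])
    thresh = np.quantile(flat, 1.0 - keep)      # keep the largest 20%
    return [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]

def wavelet_reconstruct(coeffs, wavelet='db4'):
    return pywt.waverec(coeffs, wavelet)
```

    The zeroed, low-entropy coefficient arrays are exactly what makes the subsequent lossless entropy-coding stage effective.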

  8. Double-compression method for biomedical images

    Science.gov (United States)

    Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana

    2017-08-01

    This paper describes a double-compression method (DCM) for biomedical images. A comparison of compression factors among JPEG, PNG and the developed DCM was carried out. The main purpose of the DCM is the compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.

  9. Privacy of a lossy bosonic memory channel

    Energy Technology Data Exchange (ETDEWEB)

    Ruggeri, Giovanna [Dipartimento di Fisica, Universita di Lecce, I-73100 Lecce (Italy)]. E-mail: ruggeri@le.infn.it; Mancini, Stefano [Dipartimento di Fisica, Universita di Camerino, I-62032 Camerino (Italy)]. E-mail: stefano.mancini@unicam.it

    2007-03-12

    We study the security of information transmission between two honest parties realized through a lossy bosonic memory channel when losses are captured by a dishonest party. We then show that entangled inputs can enhance the private information of such a channel, which, however, never exceeds that of unentangled inputs in the absence of memory.

  10. Privacy of a lossy bosonic memory channel

    International Nuclear Information System (INIS)

    Ruggeri, Giovanna; Mancini, Stefano

    2007-01-01

    We study the security of information transmission between two honest parties realized through a lossy bosonic memory channel when losses are captured by a dishonest party. We then show that entangled inputs can enhance the private information of such a channel, which, however, never exceeds that of unentangled inputs in the absence of memory

  11. Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics

    Directory of Open Access Journals (Sweden)

    Daniel Laney

    2014-01-01

    This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.

  12. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    Science.gov (United States)

    Park, Sang-Sub

    2014-01-01

    The purpose of this study is to determine the difference in chest compression quality between a modified chest compression method using a smartphone application and the standardized traditional chest compression method. Of the 70 people who agreed to participate after completing the CPR curriculum, 64 took part (6 were absent). Participants using the modified chest compression method were designated the smartphone group (33 people), and those using the standardized method the traditional group (31 people). Both groups used the same equipment: a practice manikin and an evaluation manikin. The smartphone group used applications running on the Android and iOS operating systems (OS) on two smartphone products (G, i). Measurements were conducted from September 25th to 26th, 2012. Data were analyzed with the SPSS WIN 12.0 program. The traditional group achieved a more appropriate compression depth (53.77 mm) than the smartphone group (48.35 mm) (p < 0.01). The proportion of proper chest compressions was also higher (p < 0.05) in the traditional group (73.96%) than in the smartphone group (60.51%). As for the awareness of chest compression accuracy, the traditional group (3.83 points) scored higher (p < 0.001) than the smartphone group (2.32 points). In an additional questionnaire administered only to the smartphone group, the main reasons given against the modified chest compression method with the smartphone were hand-back pain in the rescuer (48.5%) and unstable posture (21.2%).

  13. Time domain analysis of thin-wire antennas over lossy ground using the reflection-coefficient approximation

    NARCIS (Netherlands)

    Fernández Pantoja, M.; Yarovoy, A.G.; Rubio Bretones, A.; González García, S.

    2009-01-01

    This paper presents a procedure to extend the method of moments in the time domain for the transient analysis of thin-wire antennas to include those cases where the antennas are located over a lossy half-space. This extended technique is based on the reflection coefficient (RC) approach, which

  14. Partially blind instantly decodable network codes for lossy feedback environment

    KAUST Repository

    Sorour, Sameh

    2014-09-01

    In this paper, we study the multicast completion and decoding delay minimization problems for instantly decodable network coding (IDNC) in the case of lossy feedback. When feedback loss events occur, the sender falls into uncertainties about packet reception at the different receivers, which forces it to perform partially blind selections of packet combinations in subsequent transmissions. To determine efficient selection policies that reduce the completion and decoding delays of IDNC in such an environment, we first extend the perfect feedback formulation in our previous works to the lossy feedback environment, by incorporating the uncertainties resulting from unheard feedback events in these formulations. For the completion delay problem, we use this formulation to identify the maximum likelihood state of the network in events of unheard feedback and employ it to design a partially blind graph update extension to the multicast IDNC algorithm in our earlier work. For the decoding delay problem, we derive an expression for the expected decoding delay increment for any arbitrary transmission. This expression is then used to find the optimal policy that reduces the decoding delay in such lossy feedback environment. Results show that our proposed solutions both outperform previously proposed approaches and achieve tolerable degradation even at relatively high feedback loss rates.

  15. Lossy/Lossless Floating/Grounded Inductance Simulation Using One DDCC

    Directory of Open Access Journals (Sweden)

    M. A. Ibrahim

    2012-04-01

    In this work, we present new topologies for realizing one lossless grounded inductor and two floating inductors, one lossless and one lossy, employing a single differential difference current conveyor (DDCC) and a minimum number of passive components: two resistors and one grounded capacitor. The floating inductors are based on an ordinary dual-output differential difference current conveyor (DO-DDCC), while the grounded lossless inductor is based on a modified dual-output differential difference current conveyor (MDO-DDCC). The proposed lossless floating inductor is obtained from the lossy one by employing a negative impedance converter (NIC). The non-ideality effects of the active element on the simulated inductors are investigated. To demonstrate the performance of the proposed grounded inductance simulator, as an example it is used to construct a parallel resonant circuit. SPICE simulation results are given to confirm the theoretical analysis.

  16. Dyakonov surface waves in lossy metamaterials

    OpenAIRE

    Sorní Laserna, Josep; Naserpour, Mahin; Zapata Rodríguez, Carlos Javier; Miret Marí, Juan José

    2015-01-01

    We analyze the existence of localized waves in the vicinities of the interface between two dielectrics, provided one of them is uniaxial and lossy. We found two families of surface waves, one of them approaching the well-known Dyakonov surface waves (DSWs). In addition, a new family of wave fields exists which are tightly bound to the interface. Although its appearance is clearly associated with the dissipative character of the anisotropic material, the characteristic propagation length of su...

  17. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    Science.gov (United States)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
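
    The double-difference operation itself reduces to two delta passes. A minimal numpy sketch, assuming two correlated 1-D integer data sets (the patent's framing of adjacent spectral bands or time-shifted acquisitions):

```python
# Double-difference pre-coding sketch: cross-delta then adjacent-delta.
import numpy as np

def double_difference(a, b):
    cross = b.astype(np.int64) - a.astype(np.int64)   # cross-delta of sets
    return np.diff(cross, prepend=0)                  # adjacent-delta on top

def recover_b(a, dd):
    return a + np.cumsum(dd)                          # inverse post-decoding
```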

  18. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence

    International Nuclear Information System (INIS)

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro

    2008-01-01

    This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks, using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared with the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for the other images. This method thus enables greater lossless compression than conventional methods. This novel method should improve the efficiency of handling the increasing volume of medical imaging data. (author)

  19. Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images

    Science.gov (United States)

    Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.

    2014-03-01

    Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When telemedicine is viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.

  20. Security of modified Ping-Pong protocol in noisy and lossy channel.

    Science.gov (United States)

    Han, Yun-Guang; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Wang, Shuang; Guo, Guang-Can; Han, Zheng-Fu

    2014-05-12

    The "Ping-Pong" (PP) protocol is a two-way quantum key protocol based on entanglement. In this protocol, Bob prepares one maximally entangled pair of qubits, and sends one qubit to Alice. Then, Alice performs some necessary operations on this qubit and sends it back to Bob. Although this protocol was proposed in 2002, its security in the noisy and lossy channel has not been proven. In this report, we add a simple and experimentally feasible modification to the original PP protocol, and prove the security of this modified PP protocol against collective attacks when the noisy and lossy channel is taken into account. Simulation results show that our protocol is practical.

  1. A microcontroller-based microwave free-space measurement system for permittivity determination of lossy liquid materials.

    Science.gov (United States)

    Hasar, U C

    2009-05-01

    A microcontroller-based noncontact and nondestructive microwave free-space measurement system for real-time and dynamic determination of complex permittivity of lossy liquid materials has been proposed. The system is comprised of two main sections--microwave and electronic. While the microwave section provides for measuring only the amplitudes of reflection coefficients, the electronic section processes these data and determines the complex permittivity using a general purpose microcontroller. The proposed method eliminates elaborate liquid sample holder preparation and only requires microwave components to perform reflection measurements from one side of the holder. In addition, it explicitly determines the permittivity of lossy liquid samples from reflection measurements at different frequencies without any knowledge on sample thickness. In order to reduce systematic errors in the system, we propose a simple calibration technique, which employs simple and readily available standards. The measurement system can be a good candidate for industrial-based applications.

  2. Bit Grooming: statistically accurate precision-preserving quantization with compression, evaluated in the netCDF Operators (NCO, v4.4.8+)

    Science.gov (United States)

    Zender, Charles S.

    2016-09-01

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80% and 5-65%, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that
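
    The alternating shave/set operation acts directly on the IEEE-754 mantissa. A simplified sketch for a 1-D float32 array (the paper's mapping from decimal digits to retained bits is not reproduced; keep_bits is an assumption):

```python
# Bit Grooming sketch: shave trailing mantissa bits to 0 on even elements,
# set them to 1 on odd elements, so quantization stays unbiased on average.
import numpy as np

def bit_groom(values, keep_bits=12):
    u = np.asarray(values, dtype=np.float32).copy().view(np.uint32)
    tail = np.uint32((1 << (23 - keep_bits)) - 1)   # trailing mantissa bits
    u[0::2] &= ~tail                                # "shave" to zeros
    u[1::2] |= tail                                 # "set" to ones
    return u.view(np.float32)
```

    The groomed values carry long runs of identical trailing bits, which is what lets DEFLATE achieve the storage reductions quoted above.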

  3. Bit-Grooming: Shave Your Bits with Razor-sharp Precision

    Science.gov (United States)

    Zender, C. S.; Silver, J.

    2017-12-01

    Lossless compression can reduce climate data storage by 30-40%. Further reduction requires lossy compression that also reduces precision. Fortunately, geoscientific models and measurements generate false precision (scientifically meaningless data bits) that can be eliminated without sacrificing scientifically meaningful data. We introduce Bit Grooming, a lossy compression algorithm that removes the bloat due to false precision, those bits and bytes beyond the meaningful precision of the data. Bit Grooming is statistically unbiased, applies to all floating point numbers, and is easy to use. Bit Grooming reduces geoscience data storage requirements by 40-80%. We compared Bit Grooming to competitors Linear Packing, Layer Packing, and GRIB2/JPEG2000. The other compression methods have the edge in terms of compression, but Bit Grooming is the most accurate and certainly the most usable and portable. Bit Grooming provides flexible and well-balanced solutions to the trade-offs among compression, accuracy, and usability required by lossy compression. Geoscientists could reduce their long term storage costs, and show leadership in the elimination of false precision, by adopting Bit Grooming.

  4. An analytical look at the effects of compression on medical images

    OpenAIRE

    Persons, Kenneth; Palisson, Patrice; Manduca, Armando; Erickson, Bradley J.; Savcenko, Vladimir

    1997-01-01

    This article takes an analytical look at how lossy Joint Photographic Experts Group (JPEG) and wavelet image compression techniques affect medical image content. It begins with a brief explanation of how the JPEG and wavelet algorithms work, and describes in general terms what effect they can have on image quality (removal of noise, blurring, and artifacts). It then focuses more specifically on medical image diagnostic content and explains why subtle pathologies that may be difficult for...

  5. Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)

    Science.gov (United States)

    Schmalz, Tyler; Ryan, Jack

    2011-01-01

    Automatic Ground Collision Avoidance Systems (Auto-GCAS) utilizes Digital Terrain Elevation Data (DTED) stored onboard a plane to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique resulting in better accuracy. This, in turn, would minimize unnecessary activation of A-GCAS during flight as well as maximizing its contribution to fighter safety.

  6. Partially blind instantly decodable network codes for lossy feedback environment

    KAUST Repository

    Sorour, Sameh; Douik, Ahmed S.; Valaee, Shahrokh; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2014-01-01

    an expression for the expected decoding delay increment for any arbitrary transmission. This expression is then used to find the optimal policy that reduces the decoding delay in such lossy feedback environment. Results show that our proposed solutions both

  7. Compression of digital images in radiology. Results of a consensus conference; Kompression digitaler Bilddaten in der Radiologie. Ergebnisse einer Konsensuskonferenz

    Energy Technology Data Exchange (ETDEWEB)

    Loose, R. [Klinikum Nuernberg-Nord (Germany). Inst. fuer Diagnostische und Interventionelle Radiologie; Braunschweig, R. [BG Kliniken Bergmannstrost, Halle/Saale (Germany). Klinik fuer Bildgebende Diagnostik und Interventionsradiologie; Kotter, E. [Universitaetsklinikum Freiburg (Germany). Abt. Roentgendiagnostik; Mildenberger, P. [Mainz Univ. (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie; Simmler, R.; Wucherer, M. [Klinikum Nuernberg (Germany). Inst. fuer Medizinische Physik

    2009-01-15

    Purpose: To provide recommendations for the lossy compression of digital radiological DICOM images in Germany by means of a consensus conference. The compression of digital radiological images has been evaluated in many studies. Even though the results demonstrate full diagnostic image quality at modality-dependent compression ratios between 1:5 and 1:200, there are only a few clinical applications. Materials and Methods: A consensus conference with approx. 80 interested participants (radiology, industry, physics, and agencies), without individual invitation, was organized by the working groups AGIT and APT of the German Roentgen Society DRG to determine compression factors without loss of diagnostic image quality for different anatomical regions for CT, CR/DR, MR, and RF/XA examinations. The consensus level was specified as at least 66%. Results: For the individual modalities the following compression factors were recommended: CT (brain) 1:5, CT (all other applications) 1:8, CR/DR (all applications except mammography) 1:10, CR/DR (mammography) 1:15, MR (all applications) 1:7, RF/XA (fluoroscopy, DSA, cardiac angio) 1:6. The recommended compression ratios are valid for JPEG and JPEG 2000/wavelet compression. Conclusion: The results may be understood as recommendations and indicate limits of compression factors with no expected reduction of diagnostic image quality. They are similar to the current national recommendations for Canada and England. (orig.)

  8. Logarithmic compression methods for spectral data

    Science.gov (United States)

    Dunham, Mark E.

    2003-01-01

    A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.

  9. Possibility of perfect concealment by lossy conventional and lossy metamaterial cylindrical invisibility cloaks

    Science.gov (United States)

    Dehbashi, Reza; Shahabadi, Mahmoud

    2013-12-01

    The commonly used coordinate transformation for cylindrical cloaks is generalized. This transformation is utilized to determine the anisotropic inhomogeneous diagonal material tensors of a shell-type cloak for various material types, i.e., double-positive (DPS: ɛ, μ > 0) and double-negative (DNG: ɛ, μ < 0) media. To assess cloaking for the various material types, a rigorous analysis is performed. It is shown that perfect cloaking is achieved when the cloak and its surrounding medium are of the same material type. Moreover, material losses are included in the analysis to demonstrate that perfect cloaking with lossy materials can be achieved for identical loss tangents of the cloak and its surrounding material. The sensitivity of the cloaking performance to losses for different material types is also investigated. The obtained analytical results are verified using a finite-element computational analysis.

  10. KungFQ: a simple and powerful approach to compress fastq files.

    Science.gov (United States)

    Grassi, Elena; Di Gregorio, Federico; Molineris, Ivan

    2012-01-01

    Nowadays, storing data derived from deep sequencing experiments has become pivotal, and standard compression algorithms do not exploit their structure in a satisfying manner. A number of reference-based compression algorithms have been developed, but they are less adequate when approaching new species without fully sequenced genomes or nongenomic data. We developed a tool that takes advantage of fastq characteristics and encodes them in a binary format optimized for further compression with standard tools (such as gzip or lzma). The algorithm is straightforward and does not need any external reference file; it scans the fastq only once and has a constant memory requirement. Moreover, we added the possibility to perform lossy compression, losing some of the original information (IDs and/or qualities) but resulting in smaller files; it is also possible to define a quality cutoff below which the corresponding base calls are converted to N. We achieve compression ratios of 2.82 to 7.77 on various fastq files without losing information, and 5.37 to 8.77 when losing IDs, which are often not used in common analysis pipelines.
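
    The lossy options described above are easy to picture. A minimal sketch, assuming Phred+33 qualities and a well-formed 4-line-per-record fastq (not the tool's actual binary encoder, which does its own packing before gzip/lzma):

```python
# Fastq lossy pre-processing sketch: optionally drop IDs, mask low-quality
# base calls as 'N', then hand the stream to a standard compressor.
import gzip

def compress_fastq(path, out_path, qual_cutoff=20, keep_ids=False):
    with open(path) as fin, gzip.open(out_path, 'wt') as fout:
        while True:
            rec = [fin.readline() for _ in range(4)]
            if not rec[3]:                       # end of file
                break
            seq, qual = rec[1].rstrip(), rec[3].rstrip()
            seq = ''.join(b if ord(q) - 33 >= qual_cutoff else 'N'
                          for b, q in zip(seq, qual))
            fout.write((rec[0] if keep_ids else '@\n')
                       + seq + '\n+\n' + qual + '\n')
```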

  11. Optimization of wavelet decomposition for image compression and feature preservation.

    Science.gov (United States)

    Lo, Shih-Chung B; Li, Huai; Freedman, Matthew T

    2003-09-01

    A neural-network-based framework has been developed to search for an optimal wavelet kernel that can be used for a specific image processing task. In this paper, a linear convolution neural network was employed to seek a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern such as microcalcifications in mammograms and bone in computed tomography (CT) head images. We have used this method to evaluate the performance of tap-4 wavelets on mammograms, CTs, magnetic resonance images, and Lena images. We found that the Daubechies wavelet or those wavelets with similar filtering characteristics can produce the highest compression efficiency with the smallest mean-square-error for many image patterns including general image textures as well as microcalcifications in digital mammograms. However, the Haar wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet whose low-pass filter coefficients are (0.32252136, 0.85258927, 1.38458542, and -0.14548269) produces the best preservation outcomes in all tested microcalcification features including the peak signal-to-noise ratio, the contrast and the figure of merit in the wavelet lossy compression scheme. Having analyzed the spectrum of the wavelet filters, we can find the compression outcomes and feature preservation characteristics as a function of wavelets. This newly developed optimization approach can be generalized to other image analysis applications where a wavelet decomposition is employed.

  12. Tree-structured vector quantization of CT chest scans: Image quality and diagnostic accuracy

    International Nuclear Information System (INIS)

    Cosman, P.C.; Tseng, C.; Gray, R.M.; Olshen, R.A.; Moses, L.E.; Davidson, H.C.; Bergin, C.J.; Riskin, E.A.

    1993-01-01

    The quality of lossy compressed images is often characterized by signal-to-noise ratios, informal tests of subjective quality, or receiver operating characteristic (ROC) curves that include subjective appraisals of the value of an image for a particular application. The authors believe that for medical applications, lossy compressed images should be judged by a more natural and fundamental aspect of relative image quality: their use in making accurate diagnoses. They apply a lossy compression algorithm to medical images, and quantify the quality of the images by the diagnostic performance of radiologists, as well as by traditional signal-to-noise ratios and subjective ratings. The study is unlike previous studies of the effects of lossy compression in that they consider non-binary detection tasks, simulate actual diagnostic practice instead of using paired tests or confidence rankings, use statistical methods that are more appropriate for non-binary clinical data than are the popular ROC curves, and use low-complexity predictive tree-structured vector quantization for compression rather than DCT-based transform codes combined with entropy coding. Their diagnostic tasks are the identification of nodules (tumors) in the lungs and lymphadenopathy in the mediastinum from computerized tomography (CT) chest scans. For the image modality, compression algorithm, and diagnostic tasks they consider, the original 12 bit per pixel (bpp) CT image can be compressed to between 1 bpp and 2 bpp with no significant changes in diagnostic accuracy

  13. Application of PDF methods to compressible turbulent flows

    Science.gov (United States)

    Delarue, B. J.; Pope, S. B.

    1997-09-01

    A particle method applying the probability density function (PDF) approach to turbulent compressible flows is presented. The method is applied to several turbulent flows, including the compressible mixing layer, and good agreement is obtained with experimental data. The PDF equation is solved using a Lagrangian/Monte Carlo method. To accurately account for the effects of compressibility on the flow, the velocity PDF formulation is extended to include thermodynamic variables such as the pressure and the internal energy. The mean pressure, the determination of which has been the object of active research over the last few years, is obtained directly from the particle properties. It is therefore not necessary to link the PDF solver with a finite-volume type solver. The stochastic differential equations (SDE) which model the evolution of particle properties are based on existing second-order closures for compressible turbulence, limited in application to low turbulent Mach number flows. Tests are conducted in decaying isotropic turbulence to compare the performances of the PDF method with the Reynolds-stress closures from which it is derived, and in homogeneous shear flows, at which stage comparison with direct numerical simulation (DNS) data is conducted. The model is then applied to the plane compressible mixing layer, reproducing the well-known decrease in the spreading rate with increasing compressibility. It must be emphasized that the goal of this paper is not as much to assess the performance of models of compressibility effects, as it is to present an innovative and consistent PDF formulation designed for turbulent inhomogeneous compressible flows, with the aim of extending it further to deal with supersonic reacting flows.

  14. A measurement method for piezoelectric material properties under longitudinal compressive stress - a compression test method for thin piezoelectric materials

    International Nuclear Information System (INIS)

    Kang, Lae-Hyong; Lee, Dae-Oen; Han, Jae-Hung

    2011-01-01

    We introduce a new compression test method for piezoelectric materials to investigate changes in piezoelectric properties under the compressive stress condition. Until now, compression tests of piezoelectric materials have generally been conducted using bulky piezoelectric ceramics and a pressure block. The conventional method using the pressure block for thin piezoelectric patches, which are used in unimorph or bimorph actuators, is prone to unwanted bending and buckling. In addition, due to the constrained boundaries at both ends, the observed piezoelectric behavior contains boundary effects. In order to avoid these problems, the proposed method employs two guide plates with initial longitudinal tensile stress. By removing the tensile stress after bonding a piezoelectric material between the guide layers, longitudinal compressive stress is induced in the piezoelectric layer. Using the compression test specimens, two important properties which govern the actuation performance of the piezoelectric material, the piezoelectric strain coefficients and the elastic modulus, are measured to evaluate the effects of applied electric fields and re-poling. The results show that the piezoelectric strain coefficient d31 increases and the elastic modulus decreases when high voltage is applied to PZT5A, and that compression in the longitudinal direction decreases the piezoelectric strain coefficient d31 but does not affect the elastic modulus. We also found that re-poling of the piezoelectric material increases the elastic modulus, while the piezoelectric strain coefficient d31 is not changed much (slightly increased) by re-poling.

  15. Image splitting and remapping method for radiological image compression

    Science.gov (United States)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  16. Superplastic boronizing of duplex stainless steel under dual compression method

    International Nuclear Information System (INIS)

    Jauhari, I.; Yusof, H.A.M.; Saidan, R.

    2011-01-01

    Highlights: → Superplastic boronizing. → A dual compression method has been developed. → Hard boride layer. → Bulk deformation significantly thickened the boronized layer. → New data on boronizing could expand the application of DSS in industries. - Abstract: In this work, SPB of duplex stainless steel (DSS) under a compression method is studied with the objective of producing an ultra-hard and thick boronized layer using a minimal amount of boron powder and at a much faster boronizing time compared to the conventional process. SPB is conducted under dual compression methods. In the first method, DSS is boronized using a minimal amount of boron powder under a fixed pre-strained compression condition throughout the process. The compression strain is controlled in such a way that plastic deformation is restricted to the surface asperities of the substrate in contact with the boron powder. In the second method, the boronized specimen taken from the first method is compressed superplastically up to a certain compressive strain under a certain strain rate condition. The process in the second method is conducted without the presence of boron powder. Compared with the conventional boronizing process, this SPB under dual compression methods produces a much harder and thicker boronized layer using a minimal amount of boron powder.

  18. A rapid compression technique for 4-D functional MRI images using data rearrangement and modified binary array techniques.

    Science.gov (United States)

    Uma Vetri Selvi, G; Nadarajan, R

    2015-12-01

    Compression techniques are vital for efficient storage and fast transfer of medical image data. Existing compression techniques take a significant amount of time for encoding and decoding, and hence the purpose of compression is not fully satisfied. In this paper, a rapid 4-D lossy compression method constructed from data rearrangement, wavelet-based contourlet transformation and a modified binary array technique is proposed for functional magnetic resonance imaging (fMRI) images. In the proposed method, the image slices of fMRI data are rearranged so that the redundant slices form a sequence. The image sequence is then divided into slices and transformed using the wavelet-based contourlet transform (WBCT). In WBCT, the high-frequency sub-band obtained from the wavelet transform is further decomposed into multiple directional sub-bands by a directional filter bank to obtain more directional information. The relationships between the coefficients change in WBCT as it has more directions. The differences in parent-child relationships are handled by a repositioning algorithm. The repositioned coefficients are then subjected to quantization. The quantized coefficients are further compressed by the modified binary array technique, where the most frequently occurring value of a sequence is coded only once. The proposed method has been tested on fMRI images; the results indicate that its processing time is lower than that of the existing wavelet-based set partitioning in hierarchical trees and set partitioning embedded block coder (SPECK) compression schemes [1]. The proposed method also yields better compression performance than the wavelet-based SPECK coder. The objective results show that the proposed method gains a good compression ratio while maintaining a peak signal-to-noise ratio value above 70 for all the experimented sequences. The SSIM value is equal to 1 and the value of CC is greater than 0.9 for all the experimented sequences.

  19. Evaluation of a wavelet-based compression algorithm applied to the silicon drift detectors data of the ALICE experiment at CERN

    International Nuclear Information System (INIS)

    Falchieri, Davide; Gandolfi, Enzo; Masotti, Matteo

    2004-01-01

    This paper evaluates the performance of a wavelet-based compression algorithm applied to the data produced by the silicon drift detectors of the ALICE experiment at CERN. This compression algorithm is a general-purpose lossy technique; in other words, its application could prove useful on a wide range of other data reduction problems. In particular, the design targets relevant for our wavelet-based compression algorithm are the following: a high compression coefficient, a reconstruction error as small as possible, and a very limited execution time. Interestingly, the results obtained are quite close to the ones achieved by the algorithm implemented in the first prototype of the chip CARLOS, the chip that will be used in the silicon drift detectors readout chain.

  20. Word aligned bitmap compression method, data structure, and apparatus

    Science.gov (United States)

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique comprises a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiency in constructing compressed bitmaps. Taken together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
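
    As an illustration of the encoding idea described above, here is a minimal sketch of WAH-style compression assuming 32-bit code words: the bitmap is split into 31-bit groups, a literal word (MSB = 0) stores one group verbatim, and a fill word (MSB = 1) encodes a run of identical all-zero or all-one groups. The exact bit layout and the query operators of the real WAH structure are simplified here.

      # WAH-style encoding sketch for 32-bit words (layout simplified).
      def wah_encode(bits):
          """bits: a string of '0'/'1'; returns a list of 32-bit code words."""
          pad = (-len(bits)) % 31          # pad to whole 31-bit groups
          bits += "0" * pad
          groups = [bits[i:i + 31] for i in range(0, len(bits), 31)]
          words, i = [], 0
          while i < len(groups):
              g = groups[i]
              if g == "0" * 31 or g == "1" * 31:   # candidate fill run
                  fill_bit = int(g[0])
                  run = 1
                  while i + run < len(groups) and groups[i + run] == g:
                      run += 1
                  # fill word: MSB=1, next bit = fill value, low 30 bits = length
                  words.append((1 << 31) | (fill_bit << 30) | run)
                  i += run
              else:                                 # literal word: MSB=0
                  words.append(int(g, 2))
                  i += 1
          return words

      # A sparse bitmap collapses its long zero run into a single fill word.
      print([hex(w) for w in wah_encode("1" + "0" * 1000 + "1")])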

  1. On the legend of a castle, the journey of an oak and the mystery of a painting / Alar Läänelaid

    Index Scriptorium Estoniae

    Läänelaid, Alar, 1951-

    2006-01-01

    On dendrochronological dating (tree-ring dating), which has brought clarity to the dating of the ceiling decoration of the dining hall of Alatskivi castle, and to establishing the date of completion and the authenticity of Hans van Essen's painting "Vaikelu homaariga" ("Still Life with Lobster") and Clara Peeters's "Vaikelu jahisaagiga" ("Still Life with Game").

  2. Scaling a network with positive gains to a lossy or gainy network

    NARCIS (Netherlands)

    Koene, J.

    1979-01-01

    Necessary and sufficient conditions are presented under which it is possible to scale a network with positive gains to a lossy or a gainy network. A procedure to perform such a scaling operation is given.

  3. Subband Coding Methods for Seismic Data Compression

    Science.gov (United States)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
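
    The progressive-transmission idea can be illustrated with a one-level Haar subband split: the low-pass band gives the coarse waveform sent first, and the high-pass band is the refinement sent on request. This toy does not reproduce the paper's filter banks or rate-distortion machinery; the signal is synthetic.

      import numpy as np

      def haar_split(x):
          """One-level Haar analysis: return (coarse, detail) subbands."""
          x = np.asarray(x, dtype=float)
          lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass (coarse) band
          hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass (detail) band
          return lo, hi

      def haar_merge(lo, hi):
          """Synthesis: reconstruct the signal from both subbands."""
          x = np.empty(lo.size * 2)
          x[0::2] = (lo + hi) / np.sqrt(2.0)
          x[1::2] = (lo - hi) / np.sqrt(2.0)
          return x

      t = np.linspace(0.0, 1.0, 512)
      trace = np.sin(40 * t) + 0.1 * np.random.default_rng(1).normal(size=512)
      lo, hi = haar_split(trace)
      coarse = haar_merge(lo, np.zeros_like(hi))  # cheap first view for the user
      exact = haar_merge(lo, hi)                  # after the refinement arrives
      print(np.allclose(exact, trace))            # True: refinement is lossless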

  4. Lossless Compression of Digital Images

    DEFF Research Database (Denmark)

    Martins, Bo

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. A number of general-purpose coders...... version that is substantially faster than its precursors and brings it close to the multi-pass coders in compression performance. Handprinted characters are of unequal complexity; recent work by Singer and Tishby demonstrates that utilizing the physiological process of writing one can synthesize cursive.......The feature vector of a bitmap initially constitutes a lossy representation of the contour(s) of the bitmap. The initial feature space is usually too large but can be reduced automatically by use of a predictive code length or predictive error criterion.

  5. An Enhanced Run-Length Encoding Compression Method for Telemetry Data

    Directory of Open Access Journals (Sweden)

    Shan Yanhu

    2017-09-01

    The telemetry data are essential in evaluating the performance of aircraft and diagnosing its failures. This work combines oversampling technology with a run-length encoding compression algorithm with an error factor to further enhance the compression performance of telemetry data in a multichannel acquisition system. Compression of telemetry data is carried out with the use of FPGAs. Pulse signals and vibration signals are used in the experiments. The proposed method is compared with two existing methods. The experimental results indicate that the compression ratio, precision, and distortion degree of the telemetry data are improved significantly compared with those obtained by the existing methods. The implementation and measurement of the proposed telemetry data compression method show its effectiveness when used in a high-precision high-capacity multichannel acquisition system.
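
    A hedged sketch of the run-length idea with an error factor, as the abstract describes it: samples within a tolerance of the current run value count as repeats and extend the run. The tolerance rule, names, and data are illustrative assumptions, not the paper's FPGA design.

      # Run-length encoding with a tolerance ("error factor"); lossy by design.
      def rle_with_tolerance(samples, tol=2):
          """Return (value, count) pairs; near-equal samples extend the run."""
          runs = []
          run_val, count = samples[0], 1
          for s in samples[1:]:
              if abs(s - run_val) <= tol:   # within the error factor: a repeat
                  count += 1
              else:
                  runs.append((run_val, count))
                  run_val, count = s, 1
          runs.append((run_val, count))
          return runs

      def rle_decode(runs):
          return [v for v, n in runs for _ in range(n)]

      telemetry = [100, 101, 100, 102, 180, 181, 180, 99, 100]
      encoded = rle_with_tolerance(telemetry, tol=2)
      print(encoded)              # [(100, 4), (180, 3), (99, 2)]
      print(rle_decode(encoded))  # values flattened to run representatives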

  6. Victims of a massacre commemorated in the castle courtyard / Veljo Kuivjõgi

    Index Scriptorium Estoniae

    Kuivjõgi, Veljo, 1951-

    2006-01-01

    On the presentation of Endel Püüa's book "Punane terror Saaremaal 1941. aastal" ("Red Terror on Saaremaa in 1941"; Saaremaa: Saaremaa Muuseum, 2006) and on the remembrance day dedicated to all the victims who were executed in the courtyard of Kuressaare castle in the summer of 1941.

  8. Dance festival Momentum 2008: the feelings, thoughts and emotions of Alatskivi castle / Reet Kruup

    Index Scriptorium Estoniae

    Kruup, Reet

    2008-01-01

    On the dance festival Momentum 2008, held 25-27 April as part of the Alatskivi Month of Fine Arts. Alongside dance groups active in Alatskivi parish, the festival featured the dance theatre Tee Kuubis, the contact-dance group Kontakt from Tartu, and students of the performing arts department of the Viljandi Culture Academy of the University of Tartu, who performed the improvisational dance piece "Alatskivi lossi tunded, mõtted, emotsioonid..."

  9. How useful is slow light in enhancing nonlinear interactions in lossy periodic nanostructures?

    DEFF Research Database (Denmark)

    Saravi, Sina; Quintero-Bermudez, Rafael; Setzpfandt, Frank

    2016-01-01

    We investigate analytically, and with nonlinear simulations, the extent of usefulness of slow light for enhancing the efficiency of second harmonic generation in lossy nanostructures, and find that the slower is not always the better.

  10. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of how to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is obtained by an alternating minimization method. The proposed method solves the problem that the sparse basis in compressed sensing is difficult to specify, which suppresses noise and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger self-adaptability. The experimental results show that the proposed image reconstruction algorithm based on blind compressed sensing can recover high-quality image signals under under-sampling conditions.
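
    The alternating idea can be sketched as follows: with measurements y = A·D·s, a sparse-coding update of s (here one soft-thresholded gradient step) alternates with a gradient update of the dictionary D. This toy is only a plausible instance of such a scheme; the paper's exact updates, step sizes, and thresholds are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m, k = 64, 32, 64                       # signal dim, measurements, atoms
      A = rng.normal(size=(m, n)) / np.sqrt(m)   # known sensing matrix
      D = rng.normal(size=(n, k))
      D /= np.linalg.norm(D, axis=0)             # unknown dictionary (initial guess)
      x_true = np.zeros(n)
      x_true[::8] = 1.0                          # toy sparse signal
      y = A @ x_true                             # compressed measurements

      s = np.zeros(k)
      step, lam = 0.01, 0.01
      for it in range(300):
          # (1) sparse-coding step: one ISTA update of s with D held fixed
          B = A @ D
          s = s - step * B.T @ (B @ s - y)
          s = np.sign(s) * np.maximum(np.abs(s) - step * lam, 0.0)  # soft threshold
          # (2) dictionary step: small gradient update of D with s held fixed
          r = A @ (D @ s) - y
          D -= step * np.outer(A.T @ r, s)
          D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # renormalize atoms

      print(float(np.linalg.norm(A @ (D @ s) - y)))  # data-fit residual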

  11. Efficient transmission of compressed data for remote volume visualization.

    Science.gov (United States)

    Krishnan, Karthik; Marcellin, Michael W; Bilgin, Ali; Nadar, Mariappan S

    2006-09-01

    One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, in so far as possible, compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available, with quality fading away from the VOI. Second, we present a prioritization that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type. The server is now capable of reordering the same compressed data file on the fly to serve data packets prioritized as per the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times and interactivity. We also present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications with the server maintaining the compressed volume data, to be browsed by a client with a low bandwidth constraint.

  12. Microarray BASICA: Background Adjustment, Segmentation, Image Compression and Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jianping Hua

    2004-01-01

    This paper presents microarray BASICA: an integrated image processing tool for background adjustment, segmentation, image compression, and analysis of cDNA microarray images. BASICA uses a fast Mann-Whitney test-based algorithm to segment cDNA microarray images, and performs post-processing to eliminate segmentation irregularities. The segmentation results, along with the foreground and background intensities obtained with the background adjustment, are then used for independent compression of the foreground and background. We introduce a new distortion measure for cDNA microarray image compression and devise a coding scheme by modifying the embedded block coding with optimized truncation (EBCOT) algorithm (Taubman, 2000) to achieve optimal rate-distortion performance in lossy coding while still maintaining outstanding lossless compression performance. Experimental results show that the bit rate required to ensure sufficiently accurate gene expression measurement varies and depends on the quality of the cDNA microarray images. For homogeneously hybridized cDNA microarray images, BASICA is able to provide, from a bit rate as low as 5 bpp, gene expression data that are 99% in agreement with those of the original 32 bpp images.

  13. A lossy graph model for delay reduction in generalized instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.; Sorour, Sameh; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2014-01-01

    , arising from lossy feedback events, when the expected decoding delay of XORing them among themselves or with other certain packets is lower than that expected when sending these packets separately. We compare the decoding delay performance of LG-IDNC and G

  14. Scattering by multiple parallel radially stratified infinite cylinders buried in a lossy half space.

    Science.gov (United States)

    Lee, Siu-Chun

    2013-07-01

    The theoretical solution for scattering by an arbitrary configuration of closely spaced parallel infinite cylinders buried in a lossy half space is presented in this paper. The refractive index and permeability of the half space and cylinders are complex in general. Each cylinder is radially stratified with a distinct complex refractive index and permeability. The incident radiation is an arbitrarily polarized plane wave propagating in the plane normal to the axes of the cylinders. Analytic solutions are derived for the electric and magnetic fields and the Poynting vector of backscattered radiation emerging from the half space. Numerical examples are presented to illustrate the application of the scattering solution to calculate backscattering from a lossy half space containing multiple homogeneous and radially stratified cylinders at various depths and different angles of incidence.

  15. Effect of image compression and scaling on automated scoring of immunohistochemical stainings and segmentation of tumor epithelium

    Directory of Open Access Journals (Sweden)

    Konsti Juho

    2012-03-01

    Background: Digital whole-slide scanning of tissue specimens produces large images demanding increasing storage capacity. To reduce the need for extensive data storage systems, image files can be compressed and scaled down. The aim of this article is to study the effect of different levels of image compression and scaling on automated image analysis of immunohistochemical (IHC) stainings and automated tumor segmentation. Methods: Two tissue microarray (TMA) slides containing 800 samples of breast cancer tissue immunostained against Ki-67 protein and two TMA slides containing 144 samples of colorectal cancer immunostained against EGFR were digitized with a whole-slide scanner. The TMA images were JPEG2000 wavelet compressed with four compression ratios: lossless, and 1:12, 1:25 and 1:50 lossy compression. Each of the compressed breast cancer images was furthermore scaled down to 1:1, 1:2, 1:4, 1:8, 1:16, 1:32, 1:64 or 1:128. Breast cancer images were analyzed using an algorithm that quantitates the extent of staining in Ki-67 immunostained images, and EGFR immunostained colorectal cancer images were analyzed with an automated tumor segmentation algorithm. The automated tools were validated by comparing the results from losslessly compressed and non-scaled images with results from conventional visual assessments. Percentage agreement and kappa statistics were calculated between results from compressed and scaled images and results from lossless and non-scaled images. Results: Both of the studied image analysis methods showed good agreement between visual and automated results. In the automated IHC quantification, an agreement of over 98% and a kappa value of over 0.96 was observed between losslessly compressed and non-scaled images and combined compression ratios up to 1:50 and scaling down to 1:8. In automated tumor segmentation, an agreement of over 97% and a kappa value of over 0.93 was observed between losslessly compressed images and
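
    The validation statistics named above, percentage agreement and kappa, can be computed as in this minimal sketch (the score labels are hypothetical):

      from collections import Counter

      def agreement_and_kappa(ref, test):
          """ref, test: equal-length lists of categorical scores."""
          n = len(ref)
          p_obs = sum(r == t for r, t in zip(ref, test)) / n   # raw agreement
          ref_freq, test_freq = Counter(ref), Counter(test)
          # chance agreement from the marginal proportions of each category
          p_exp = sum(ref_freq[c] * test_freq.get(c, 0) for c in ref_freq) / n ** 2
          kappa = (p_obs - p_exp) / (1 - p_exp) if p_exp < 1 else 1.0
          return p_obs, kappa

      ref  = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg"]
      test = ["pos", "pos", "neg", "pos", "pos", "neg", "pos", "neg"]
      print(agreement_and_kappa(ref, test))   # (0.875, 0.75)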

  16. Wavelet compression algorithm applied to abdominal ultrasound images

    International Nuclear Information System (INIS)

    Lin, Cheng-Hsun; Pan, Su-Feng; Lu, Chin-Yuan; Lee, Ming-Che

    2006-01-01

    We sought to investigate acceptable compression ratios for lossy wavelet compression of 640 x 480 x 8 abdominal ultrasound (US) images. We acquired 100 abdominal US images with normal and abnormal findings from the view station of a 932-bed teaching hospital. The US images were then compressed at quality factors (QFs) of 3, 10, 30, and 50, following the outcomes of a pilot study. This corresponded to average compression ratios of 4.3:1, 8.5:1, 20:1 and 36.6:1, respectively. Four objective measurements were carried out to examine and compare the image degradation between original and compressed images. Receiver operating characteristic (ROC) analysis was also introduced for subjective assessment. Five experienced and qualified radiologists, as reviewers blinded to the corresponding pathological findings, analysed 400 paired, randomly ordered images on two 17-inch thin film transistor/liquid crystal display (TFT/LCD) monitors. At ROC analysis, the average area under the curve (Az) for US abdominal images was 0.874 at the ratio of 36.6:1. The compressed image size was only 2.7% of the US original at this ratio. For the objective parameters, the higher the mean squared error (MSE) or root mean squared error (RMSE) values, the poorer the image quality; higher signal-to-noise ratio (SNR) or peak signal-to-noise ratio (PSNR) values indicated better image quality. The average RMSE and PSNR at 36.6:1 for US were 4.84 ± 0.14 and 35.45 dB, respectively. This finding suggests that, on the basis of the patient sample, wavelet compression of abdominal US to a ratio of 36.6:1 did not adversely affect diagnostic performance or radiologists' evaluation error so as to risk affecting diagnosis.
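
    The objective measures quoted (MSE/RMSE and SNR/PSNR) follow the standard definitions, sketched below for 8-bit images; the arrays and noise level are illustrative, with the noise sigma chosen near the reported RMSE of 4.84.

      import numpy as np

      def rmse(orig, comp):
          return float(np.sqrt(np.mean((orig.astype(float) - comp.astype(float)) ** 2)))

      def psnr(orig, comp, peak=255.0):
          """Peak signal-to-noise ratio in dB; higher means better quality."""
          e = rmse(orig, comp)
          return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

      rng = np.random.default_rng(0)
      orig = rng.integers(0, 256, size=(480, 640)).astype(np.uint8)
      noisy = orig + rng.normal(0.0, 4.84, size=orig.shape)
      comp = np.clip(noisy, 0, 255).astype(np.uint8)
      print(rmse(orig, comp), psnr(orig, comp))  # RMSE near 5 -> PSNR near 34 dB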

  17. On the estimation method of compressed air consumption during pneumatic caisson sinking

    OpenAIRE

    平川, 修治; ヒラカワ, シュウジ; Shuji, HIRAKAWA

    1990-01-01

    There are several methods for estimating compressed air consumption during pneumatic caisson sinking. Estimation of the compressed air consumption by these methods is required under the same conditions. In this paper, methods are proposed which are able to estimate accurately the compressed air consumption during pneumatic caisson sinking at this moment.

  18. A novel ECG data compression method based on adaptive Fourier decomposition

    Science.gov (United States)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.
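
    The two figures of merit quoted, compression ratio (CR) and percentage root-mean-square difference (PRD), have standard definitions, sketched below; the signal and the byte counts are illustrative, not MIT-BIH data, and PRD is computed here on the raw (non-baseline-corrected) signal.

      import numpy as np

      def compression_ratio(original_bytes, compressed_bytes):
          return original_bytes / compressed_bytes

      def prd(x, x_rec):
          """Percentage root-mean-square difference between x and x_rec."""
          x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
          return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

      t = np.linspace(0.0, 1.0, 360)
      ecg = np.sin(2 * np.pi * 5 * t) + 2.0     # toy ECG-like trace with baseline
      ecg_rec = ecg + 0.01 * np.random.default_rng(0).normal(size=ecg.size)
      print(compression_ratio(360 * 2, 20))     # e.g. 36.0
      print(prd(ecg, ecg_rec))                  # small PRD for a small error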

  19. Receiver-Assisted Congestion Control to Achieve High Throughput in Lossy Wireless Networks

    Science.gov (United States)

    Shi, Kai; Shu, Yantai; Yang, Oliver; Luo, Jiarong

    2010-04-01

    Many applications require fast data transfer in today's high-speed wireless networks. However, due to its conservative congestion control algorithm, the Transmission Control Protocol (TCP) cannot effectively utilize the network capacity in lossy wireless networks. In this paper, we propose a receiver-assisted congestion control mechanism (RACC) in which the sender performs loss-based control, while the receiver performs delay-based control. The receiver measures the network bandwidth based on the packet inter-arrival interval and uses it to compute a congestion window size deemed appropriate for the sender. After receiving the advertised value feedback from the receiver, the sender then uses the additive increase and multiplicative decrease (AIMD) mechanism to compute the correct congestion window size to be used. By integrating loss-based and delay-based congestion control, our mechanism can mitigate the effect of wireless losses, alleviate the timeout effect, and therefore make better use of network bandwidth. Simulation and experiment results in various scenarios show that our mechanism can outperform conventional TCP in high-speed and lossy wireless environments.
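
    A minimal sketch of the sender-side AIMD rule described above, with a receiver-advertised cap standing in for RACC's bandwidth-based advice; the constants and the advice value are illustrative assumptions, not the authors' protocol implementation.

      def aimd_update(cwnd, loss_detected, receiver_advice,
                      alpha=1.0, beta=0.5, cwnd_min=1.0):
          """Return the new congestion window size (in segments)."""
          if loss_detected:
              cwnd = max(cwnd_min, cwnd * beta)   # multiplicative decrease
          else:
              cwnd = cwnd + alpha                 # additive increase per RTT
          # Receiver-assisted cap: never exceed the window the receiver
          # computed from its packet inter-arrival bandwidth estimate.
          return min(cwnd, receiver_advice)

      cwnd = 10.0
      for rtt, loss in enumerate([False, False, True, False, False]):
          cwnd = aimd_update(cwnd, loss, receiver_advice=24.0)
          print(f"RTT {rtt}: cwnd = {cwnd:.1f}")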

  20. Diffraction of an inhomogeneous plane wave by an impedance wedge in a lossy medium

    CSIR Research Space (South Africa)

    Manara, G

    1998-11-01

    The diffraction of an inhomogeneous plane wave by an impedance wedge embedded in a lossy medium is analyzed. The rigorous integral representation for the field is asymptotically evaluated in the context of the uniform geometrical theory...

  1. High-altitude electromagnetic pulse environment over the lossy ground

    International Nuclear Information System (INIS)

    Xie Yanzhao; Wang Zanji

    2003-01-01

    The electromagnetic field above ground produced by an incident high-altitude electromagnetic pulse plane wave striking the ground plane is described in this paper in terms of the Fresnel reflection coefficients and the numerical FFT. The pulse reflected from the ground plane always tends to cancel the incident field for the horizontal field component, whereas the reflected field adds to the incident field for the vertical field component. Results of several cases with variations in the observation height, angle of incidence and lossy ground electrical parameters are also presented, showing the different e-field components above the earth.
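
    The standard Fresnel reflection coefficients for a plane wave in air striking lossy ground, with complex relative permittivity eps_c = eps_r - j*sigma/(omega*eps0), can be evaluated as below; the ground parameters and frequency are illustrative, not those used in the paper.

      import numpy as np

      def fresnel_lossy_ground(theta, freq, eps_r=10.0, sigma=0.01):
          """Return (gamma_h, gamma_v): TE and TM reflection coefficients.

          theta is the incidence angle from the vertical, in radians."""
          omega = 2.0 * np.pi * freq
          eps0 = 8.854e-12
          eps_c = eps_r - 1j * sigma / (omega * eps0)  # complex permittivity
          root = np.sqrt(eps_c - np.sin(theta) ** 2)
          gamma_h = (np.cos(theta) - root) / (np.cos(theta) + root)
          gamma_v = (eps_c * np.cos(theta) - root) / (eps_c * np.cos(theta) + root)
          return gamma_h, gamma_v

      gh, gv = fresnel_lossy_ground(theta=np.deg2rad(60.0), freq=10e6)
      # |gamma_h| is close to 1, which is why the reflected pulse largely
      # cancels the horizontal field component near the ground.
      print(abs(gh), abs(gv))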

  2. Toward maximum transmittance into absorption layers in solar cells: investigation of lossy-film-induced mismatches between reflectance and transmittance extrema.

    Science.gov (United States)

    Chang, Yin-Jung; Lai, Chi-Sheng

    2013-09-01

    The mismatch in film thickness and incident angle between reflectance and transmittance extrema due to the presence of lossy film(s) is investigated toward the maximum transmittance design in the active region of solar cells. Using a planar air/lossy film/silicon double-interface geometry illustrates important and quite opposite mismatch behaviors associated with TE and TM waves. In a typical thin-film CIGS solar cell, mismatches contributed by TM waves in general dominate. The angular mismatch is at least 10° in about 37%-53% of the spectrum, depending on the thickness combination of all lossy interlayers. The largest thickness mismatch of a specific interlayer generally increases with the thickness of the layer itself. Antireflection coating designs for solar cells should therefore be optimized in terms of the maximum transmittance into the active region, even if the corresponding reflectance is not at its minimum.

  3. Modelling the acoustical response of lossy lamella-crystals

    DEFF Research Database (Denmark)

    Christensen, Johan; Mortensen, N. Asger; Willatzen, Morten

    2014-01-01

    The sound propagation properties of lossy lamella-crystals are analysed theoretically utilizing a rigorous wave expansion formalism and an effective medium approach. We investigate both supported and free-standing crystal slab structures and predict high absorption for a broad range...... of frequencies. A detailed derivation of the formalism is presented, and we show how the results obtained in the subwavelength and superwavelength regimes qualitatively can be reproduced by homogenizing the lamella-crystals. We come to the conclusion that treating this structure within the metamaterial limit...... only makes sense if the crystal filling fraction is sufficiently large to satisfy an effective medium approach....

  4. Statistical Analysis of Compression Methods for Storing Binary Image for Low-Memory Systems

    Directory of Open Access Journals (Sweden)

    Roman Slaby

    2013-01-01

    The paper is focused on a statistical comparison of selected compression methods used for the compression of binary images. The aim is to assess which of the presented compression methods requires the smallest number of bytes of memory for a low-memory system. Correlation functions are used to assess the success rate of converting the input image to a binary image; the correlation function is one of the methods of an OCR algorithm used for the digitization of printed symbols. The use of compression methods is necessary for systems based on low-power micro-controllers. Saving the data stream is very important for such systems with limited memory, as is the time required for decoding the compressed data. The success rate of the selected compression algorithms is evaluated using the basic characteristics of exploratory analysis. The examined samples represent the number of bytes needed to compress the test images, which represent alphanumeric characters.

  5. The boundary data immersion method for compressible flows with application to aeroacoustics

    Energy Technology Data Exchange (ETDEWEB)

    Schlanderer, Stefan C., E-mail: stefan.schlanderer@unimelb.edu.au [Faculty for Engineering and the Environment, University of Southampton, SO17 1BJ Southampton (United Kingdom); Weymouth, Gabriel D., E-mail: G.D.Weymouth@soton.ac.uk [Faculty for Engineering and the Environment, University of Southampton, SO17 1BJ Southampton (United Kingdom); Sandberg, Richard D., E-mail: richard.sandberg@unimelb.edu.au [Department of Mechanical Engineering, University of Melbourne, Melbourne VIC 3010 (Australia)

    2017-03-15

    This paper introduces a virtual boundary method for compressible viscous fluid flow that is capable of accurately representing moving bodies in flow and aeroacoustic simulations. The method is the compressible extension of the boundary data immersion method (BDIM; Maertens & Weymouth, 2015). The BDIM equations for the compressible Navier–Stokes equations are derived and the accuracy of the method for the hydrodynamic representation of solid bodies is demonstrated with challenging test cases, including a fully turbulent boundary layer flow and a supersonic instability wave. In addition we show that the compressible BDIM is able to accurately represent noise radiation from moving bodies and flow induced noise generation without any penalty in allowable time step.

  6. Investigating low-frequency compression using the Grid method

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Dau, Torsten; MacDonald, Ewen

    2016-01-01

    There is an ongoing discussion about whether the amount of cochlear compression in humans at low frequencies (below 1 kHz) is as high as that at higher frequencies. It is controversial whether the compression affects the slope of the off-frequency forward masking curves at those frequencies. Here, the Grid method with a 2-interval 1-up 3-down tracking rule was applied to estimate forward masking curves at two characteristic frequencies: 500 Hz and 4000 Hz. The resulting curves and the corresponding basilar membrane input-output (BM I/O) functions were found to be comparable to those reported in the literature. Moreover, slopes of the low-level portions of the BM I/O functions estimated at 500 Hz were examined, to determine whether the 500-Hz off-frequency forward masking curves were affected by compression. Overall, the collected data showed a trend confirming the compressive behaviour. However......

  7. Analysis of a discrete element method and coupling with a compressible fluid flow method

    International Nuclear Information System (INIS)

    Monasse, L.

    2011-01-01

    This work aims at the numerical simulation of compressible fluid/deformable structure interactions. In particular, we have developed a partitioned coupling algorithm between a Finite Volume method for the compressible fluid and a Discrete Element method capable of taking into account fractures in the solid. A survey of existing fictitious domain methods and partitioned algorithms has led to the choice of an Embedded Boundary method and an explicit coupling scheme. We first showed that the Discrete Element method used for the solid yielded the correct macroscopic behaviour and that the symplectic time-integration scheme ensured the preservation of energy. We then developed an explicit coupling algorithm between a compressible inviscid fluid and an undeformable solid. Mass, momentum and energy conservation and consistency properties were proved for the coupling scheme. The algorithm was then extended to the coupling with a deformable solid, in the form of a semi-implicit scheme. Finally, we applied this method to unsteady inviscid flows around moving structures: comparisons with existing numerical and experimental results demonstrate the excellent accuracy of our method. (author)

  8. Verification testing of the compression performance of the HEVC screen content coding extensions

    Science.gov (United States)

    Sullivan, Gary J.; Baroncini, Vittorio A.; Yu, Haoping; Joshi, Rajan L.; Liu, Shan; Xiu, Xiaoyu; Xu, Jizheng

    2017-09-01

    This paper reports on verification testing of the coding performance of the screen content coding (SCC) extensions of the High Efficiency Video Coding (HEVC) standard (Rec. ITU-T H.265 | ISO/IEC 23008-2 MPEG-H Part 2). The coding performance of HEVC screen content model (SCM) reference software is compared with that of the HEVC test model (HM) without the SCC extensions, as well as with the Advanced Video Coding (AVC) joint model (JM) reference software, for both lossy and mathematically lossless compression using All-Intra (AI), Random Access (RA), and Low-delay B (LB) encoding structures and using similar encoding techniques. Video test sequences in 1920×1080 RGB 4:4:4, YCbCr 4:4:4, and YCbCr 4:2:0 colour sampling formats with 8 bits per sample are tested in two categories: "text and graphics with motion" (TGM) and "mixed" content. For lossless coding, the encodings are evaluated in terms of relative bit-rate savings. For lossy compression, subjective testing was conducted at 4 quality levels for each coding case, and the test results are presented through mean opinion score (MOS) curves. The relative coding performance is also evaluated in terms of Bjøntegaard-delta (BD) bit-rate savings for equal PSNR quality. The perceptual tests and objective metric measurements showed a very substantial benefit in coding efficiency for the SCC extensions, and provided consistent results with a high degree of confidence. For TGM video, the estimated bit-rate savings ranged from 60-90% relative to the JM and 40-80% relative to the HM, depending on the AI/RA/LB configuration category and colour sampling format.
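
    The Bjøntegaard-delta bit-rate measure cited above is conventionally computed by fitting third-order polynomials of quality versus log-rate and integrating the gap between the curves, roughly as in this sketch; the rate-PSNR points are invented, and this is not the official JCT-VC reporting tool.

      import numpy as np

      def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
          """Average bit-rate difference (%) of test vs. reference at equal PSNR."""
          lr_ref, lr_test = np.log10(rates_ref), np.log10(rates_test)
          p_ref = np.polyfit(psnr_ref, lr_ref, 3)    # log-rate as f(PSNR)
          p_test = np.polyfit(psnr_test, lr_test, 3)
          lo = max(min(psnr_ref), min(psnr_test))    # overlapping PSNR range
          hi = min(max(psnr_ref), max(psnr_test))
          i_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
          i_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
          avg_diff = (i_test - i_ref) / (hi - lo)
          return (10 ** avg_diff - 1.0) * 100.0

      ref  = ([1000, 2000, 4000, 8000], [34.0, 37.0, 40.0, 43.0])
      test = ([500, 1000, 2000, 4000],  [34.5, 37.5, 40.5, 43.5])
      print(bd_rate(*ref, *test))   # strongly negative: large bit-rate savings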

  9. Survey of numerical methods for compressible fluids

    Energy Technology Data Exchange (ETDEWEB)

    Sod, G A

    1977-06-01

    The finite difference methods of Godunov, Hyman, Lax-Wendroff (two-step), MacCormack, Rusanov, the upwind scheme, the hybrid scheme of Harten and Zwas, the antidiffusion method of Boris and Book, and the artificial compression method of Harten are compared with the random choice method known as Glimm's method. The methods are used to integrate the one-dimensional equations of gas dynamics for an inviscid fluid. The results are compared and demonstrate that Glimm's method has several advantages. 16 figs., 4 tables.

  10. Theory and Circuit Model for Lossy Coaxial Transmission Line

    Energy Technology Data Exchange (ETDEWEB)

    Genoni, T. C.; Anderson, C. N.; Clark, R. E.; Gansz-Torres, J.; Rose, D. V.; Welch, Dale Robert

    2017-04-01

    The theory of signal propagation in lossy coaxial transmission lines is revisited and new approximate analytic formulas for the line impedance and attenuation are derived. The accuracy of these formulas from DC to 100 GHz is demonstrated by comparison to numerical solutions of the exact field equations. Based on this analysis, a new circuit model is described which accurately reproduces the line response over the entire frequency range. Circuit model calculations are in excellent agreement with the numerical and analytic results, and with finite-difference-time-domain simulations which resolve the skin depths of the conducting walls.

  11. Electrical properties of spherical dipole antennas with lossy material cores

    DEFF Research Database (Denmark)

    Hansen, Troels Vejle; Kim, Oleksiy S.; Breinbjerg, Olav

    2012-01-01

    A spherical magnetic dipole antenna with a linear, isotropic, homogenous, passive, and lossy material core is modeled analytically, and closed form expressions are given for the internally stored magnetic and electric energies, the radiation efficiency, and radiation quality factor. This model...... and all the provided expressions are exact and valid for arbitrary core sizes, permeability, permittivity, electric and magnetic loss tangents. Arbitrary dispersion models for both permeability and permittivity can be applied. In addition, we present an investigation for an antenna of fixed electrical...

  12. Objective video quality measure for application to tele-echocardiography.

    Science.gov (United States)

    Moore, Peter Thomas; O'Hare, Neil; Walsh, Kevin P; Ward, Neil; Conlon, Niamh

    2008-08-01

    Real-time tele-echocardiography is widely used to remotely diagnose or exclude congenital heart defects. Cost-effective technical implementation is realised using low-bandwidth transmission systems and lossy compression (videoconferencing) schemes. In our study, DICOM video sequences were converted to common multimedia formats, which were then compressed using three lossy compression algorithms. We then applied a digital (multimedia) video quality metric (VQM) to objectively determine a value for the degradation due to compression. Three levels of compression were simulated by varying system bandwidth and compared to a subjective assessment of video clip quality by three paediatric cardiologists with more than 5 years of experience.

  13. Context-dependent JPEG backward-compatible high-dynamic range image compression

    Science.gov (United States)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for the evaluation of quality, file formats, and compression, as well as the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the wide spread of HDR usage, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of the tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with the state-of-the-art HDR image compression.

  14. A method of loss free compression for the data of nuclear spectrum

    International Nuclear Information System (INIS)

    Sun Mingshan; Wu Shiying; Chen Yantao; Xu Zurun

    2000-01-01

    A new method of loss-free compression based on the features of nuclear spectrum data is provided, from which a practicable algorithm is successfully derived. A compression ratio varying from 0.50 to 0.25 is obtained, and the distribution of the processed data becomes even more suitable for reprocessing by another compression method, such as Huffman coding, to further improve the compression ratio.
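
    The abstract does not spell out the transform, but the general recipe it points to, exploiting the smoothness of spectrum data so that a second-stage entropy coder such as Huffman coding works well, can be illustrated with simple channel differencing; the spectrum below is synthetic and the pipeline is an assumption.

      import numpy as np
      from collections import Counter

      channels = np.arange(512)
      spectrum = (1000.0 * np.exp(-((channels - 256) ** 2) / 800.0)).astype(int)

      deltas = np.diff(spectrum, prepend=0)   # difference adjacent channels
      # The delta histogram is sharply peaked near zero, which is exactly
      # what makes a second-stage Huffman coder effective.
      print(Counter(np.clip(deltas, -3, 3)).most_common(3))

      restored = np.cumsum(deltas)            # exact, loss-free reconstruction
      assert np.array_equal(restored, spectrum)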

  15. Novel 3D Compression Methods for Geometry, Connectivity and Texture

    Science.gov (United States)

    Siddeq, M. M.; Rodrigues, M. A.

    2016-06-01

    A large number of applications in medical visualization, games, engineering design, entertainment, heritage, e-commerce and so on require the transmission of 3D models over the Internet or over local networks. 3D data compression is an important requirement for fast data storage, access and transmission within bandwidth limitations. The Wavefront OBJ (object) file format is commonly used to share models due to its clear, simple design. Normally each OBJ file contains a large amount of data (e.g. vertices and triangulated faces, normals, texture coordinates and other parameters) describing the mesh surface. In this paper we introduce a new method to compress geometry, connectivity and texture coordinates by a novel Geometry Minimization Algorithm (GM-Algorithm) in connection with arithmetic coding. First, each vertex's (x, y, z) coordinates are encoded to a single value by the GM-Algorithm. Second, triangle faces are encoded by computing the differences between two adjacent vertex locations, which are compressed by arithmetic coding together with texture coordinates. We demonstrate the method on large data sets, achieving compression ratios between 87 and 99% without reduction in the number of reconstructed vertices and triangle faces. The decompression step is based on a Parallel Fast Matching Search Algorithm (Parallel-FMS) to recover the structure of the 3D mesh. A comparative analysis of compression ratios is provided with a number of commonly used 3D file formats such as VRML, OpenCTM and STL, highlighting the performance and effectiveness of the proposed method.
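
    The difference-coding step for adjacent vertices can be sketched as follows; the GM-Algorithm's single-value encoding of (x, y, z) and the arithmetic coder are the paper's own components and are not reproduced here, and the toy mesh is hypothetical.

      import numpy as np

      vertices = np.array([[0.00, 1.00, 2.00],
                           [0.01, 1.02, 2.00],
                           [0.03, 1.01, 1.99]])   # toy mesh vertices

      deltas = np.diff(vertices, axis=0)   # small differences between neighbours
      header = vertices[0]                 # first vertex is kept verbatim

      # Decompression: a cumulative sum restores the vertex list exactly.
      restored = np.vstack([header, header + np.cumsum(deltas, axis=0)])
      print(np.allclose(restored, vertices))   # True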

  16. Wireless Sensor Array Network DoA Estimation from Compressed Array Data via Joint Sparse Representation.

    Science.gov (United States)

    Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi

    2016-05-23

    A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.

  17. Lossy effects on the lateral shifts in negative-phase-velocity medium

    International Nuclear Information System (INIS)

    You Yuan

    2009-01-01

    Theoretical investigations of the lateral shifts of the reflected and transmitted beams were performed, using the stationary-phase approach, for the planar interface of a conventional medium and a lossy negative-phase-velocity medium. The lateral shifts exhibit different behaviors beyond and below a certain angle, for both incident p-polarized and incident s-polarized plane waves. Loss in the negative-phase-velocity medium affects lateral shifts greatly, and may cause changes from negative to positive values for p-polarized incidence

  18. The Diagonal Compression Field Method using Circular Fans

    DEFF Research Database (Denmark)

    Hansen, Thomas

    2005-01-01

    This paper presents a new design method, which is a modification of the diagonal compression field method, the modification consisting of the introduction of circular fan stress fields. The traditional method does not allow changes of the concrete compression direction throughout a given beam...... if equilibrium is strictly required. This is conservative, since it is not possible fully to utilize the concrete strength in regions with low shear stresses. The larger inclination (the smaller -value) of the uniaxial concrete stress the more transverse shear reinforcement is needed; hence it would be optimal...... if the -value for a given beam could be set to a low value in regions with high shear stresses and thereafter increased in regions with low shear stresses. Thus the shear reinforcement would be reduced and the concrete strength would be utilized in a better way. In the paper it is shown how circular fan stress...

  19. Decoherence in quantum lossy systems: superoperator and matrix techniques

    Science.gov (United States)

    Yazdanpanah, Navid; Tavassoly, Mohammad Kazem; Moya-Cessa, Hector Manuel

    2017-06-01

    Due to the unavoidably dissipative interaction between quantum systems and their environments, decoherence flows inevitably into the systems. Therefore, to achieve a better understanding of how decoherence affects damped systems, a fundamental investigation of the master equation seems to be required. In this regard, finding the missing information which has been lost due to the irreversibility of dissipative systems is also of practical importance in quantum information science. Motivated by these facts, in this work we use superoperator and matrix techniques, by which we are able to illustrate two methods for obtaining the explicit form of the density operators corresponding to damped systems at arbitrary temperature T ≥ 0. To establish the potential abilities of the suggested methods, we apply them to deduce the density operator of some practical well-known quantum systems. Using the superoperator techniques, we first obtain the density operator of a damped system consisting of a qubit interacting with a single-mode quantized field within an optical cavity. As the second system, we study the decoherence of a quantized field within a lossy optical cavity. We also use our proposed matrix method to study the decoherence of a system of two qubits interacting with each other via a dipole-dipole interaction and, at the same time, with a quantized field in a lossy cavity. The influence of dissipation on the decoherence of the dynamical properties of these systems is also investigated numerically. Finally, the advantages of the proposed superoperator techniques in comparison with the matrix method are explained.

  20. Robust Event-Triggered Energy-to-Peak Filtering for Polytopic Uncertain Systems over Lossy Network with Quantized Measurements

    Directory of Open Access Journals (Sweden)

    Jidong Wang

    2016-01-01

    The event-triggered energy-to-peak filtering for polytopic discrete-time linear systems is studied with consideration of a lossy network and quantization error. Because of the communication imperfections arising from the packet dropout of the lossy link, the event-triggered condition used to determine the data release instant at the event generator (EG) cannot be directly applied to update the filter input at the zero-order holder (ZOH) when performing filter performance analysis and synthesis. In order to balance such non-uniform time series between the triggered instant of the EG and the updated instant of the ZOH, two event-triggered conditions are defined, respectively, whereafter a worst-case bound on the number of consecutive packet losses of the transmitted data from the EG is given, which marginally guarantees the effectiveness of the filter designed based on the event-triggered updating condition of the ZOH. Then, the filter performance analysis conditions are obtained under the assumption that the maximum number of packet losses is allowable for the worst-case bound. A two-stage LMI-based alternative optimization approach is then proposed to separately design the filter, which reduces the conservatism of the traditional linearization method of the filter analysis conditions. Subsequently, a co-design algorithm is developed to determine the communication and filter parameters simultaneously. Finally, an illustrative example is provided to verify the validity of the obtained results.

  1. Quinary excitation method for pulse compression ultrasound measurements.

    Science.gov (United States)

    Cowell, D M J; Freear, S

    2008-04-01

    A novel switched excitation method for linear frequency modulated excitation of ultrasonic transducers in pulse compression systems is presented that is simple to realise, yet provides reduced signal sidelobes at the output of the matched filter compared to bipolar pseudo-chirp excitation. Pulse compression signal sidelobes are reduced through the use of simple amplitude tapering at the beginning and end of the excitation duration. Amplitude tapering using switched excitation is realised through the use of intermediate voltage switching levels, half that of the main excitation voltages. In total five excitation voltages are used creating a quinary excitation system. The absence of analogue signal generation and power amplifiers renders the excitation method attractive for applications with requirements such as a high channel count or low cost per channel. A systematic study of switched linear frequency modulated excitation methods with simulated and laboratory based experimental verification is presented for 2.25 MHz non-destructive testing immersion transducers. The signal to sidelobe noise level of compressed waveforms generated using quinary and bipolar pseudo-chirp excitation are investigated for transmission through a 0.5m water and kaolin slurry channel. Quinary linear frequency modulated excitation consistently reduces signal sidelobe power compared to bipolar excitation methods. Experimental results for transmission between two 2.25 MHz transducers separated by a 0.5m channel of water and 5% kaolin suspension shows improvements in signal to sidelobe noise power in the order of 7-8 dB. The reported quinary switched method for linear frequency modulated excitation provides improved performance compared to pseudo-chirp excitation without the need for high performance excitation amplifiers.
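
    A sketch of the core idea, assuming illustrative frequencies and a 10% taper: a bipolar pseudo-chirp switches between the levels +/-V, and the quinary scheme adds +/-V/2 states that taper the start and end of the burst before matched filtering. This is not the authors' hardware implementation.

      import numpy as np

      fs, f0, f1, T = 50e6, 1.5e6, 3.0e6, 20e-6   # sample rate, band, duration
      t = np.arange(int(fs * T)) / fs
      phase = 2 * np.pi * (f0 * t + (f1 - f0) * t ** 2 / (2 * T))
      chirp = np.sign(np.sin(phase))              # bipolar pseudo-chirp (+/-1)

      # Quinary tapering: first and last 10% of the burst at half amplitude,
      # giving the five switched levels {-1, -0.5, 0, +0.5, +1}.
      n_taper = int(0.1 * chirp.size)
      quinary = chirp.copy()
      quinary[:n_taper] *= 0.5
      quinary[-n_taper:] *= 0.5

      # Pulse compression: matched filter against the ideal analog chirp.
      ref = np.sin(phase)
      compressed = np.convolve(quinary, ref[::-1])
      print(np.max(np.abs(compressed)))           # mainlobe peak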

  2. Compressed Sensing Methods in Radio Receivers Exposed to Noise and Interference

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek

    , there is a problem of interference, which makes digitization of radio receivers even more difficult. High-order low-pass filters are needed to remove interfering signals and secure a high-quality reception. In the mid-2000s a new method of signal acquisition, called compressed sensing, emerged. Compressed sensing...... the downconverted baseband signal and interference, may be replaced by low-order filters. Additional digital signal processing is a price to pay for this feature. Hence, the signal processing is moved from the analog to the digital domain. Filtering compressed sensing, which is a new application of compressed sensing......
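
    For background, a minimal compressed-sensing sketch (an illustration of the general acquisition model, not the receiver architecture studied in the thesis): a DCT-sparse signal is recovered from far fewer random measurements than Nyquist samples using orthogonal matching pursuit.

      import numpy as np
      from scipy.fft import idct
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(0)
      n, m, k = 256, 64, 3                         # length, measurements, sparsity
      Psi = idct(np.eye(n), axis=0, norm="ortho")  # orthonormal DCT basis
      coef = np.zeros(n)
      coef[rng.choice(n, k, replace=False)] = rng.normal(size=k)
      signal = Psi @ coef                          # sparse in DCT, dense in time

      Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
      y = Phi @ signal                             # m << n compressed measurements

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
      omp.fit(Phi @ Psi, y)
      x_hat = Psi @ omp.coef_
      print(np.linalg.norm(x_hat - signal) / np.linalg.norm(signal))  # ~0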

  3. Security of modified Ping-Pong protocol in noisy and lossy channel

    OpenAIRE

    Han, Yun-Guang; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Wang, Shuang; Guo, Guang-Can; Han, Zheng-Fu

    2014-01-01

    The “Ping-Pong” (PP) protocol is a two-way quantum key protocol based on entanglement. In this protocol, Bob prepares one maximally entangled pair of qubits, and sends one qubit to Alice. Then, Alice performs some necessary operations on this qubit and sends it back to Bob. Although this protocol was proposed in 2002, its security in the noisy and lossy channel has not been proven. In this report, we add a simple and experimentally feasible modification to the original PP protocol, and prove ...

  4. An ROI multi-resolution compression method for 3D-HEVC

    Science.gov (United States)

    Ti, Chunli; Guan, Yudong; Xu, Guodong; Teng, Yidan; Miao, Xinyuan

    2017-09-01

    3D High Efficiency Video Coding (3D-HEVC) provides significant potential for increasing the compression ratio of multi-view RGB-D videos. However, the bit rate still rises dramatically with the improvement of the video resolution, which brings challenges to the transmission network, especially the mobile network. This paper proposes an ROI multi-resolution compression method for 3D-HEVC to better preserve the information in the ROI under the condition of limited bandwidth. This is realized primarily through ROI extraction and through compressing multi-resolution preprocessed videos as alternative data according to the network conditions. At first, the semantic contours are detected by the modified structured forests to restrain the color textures inside objects. The ROI is then determined utilizing the contour neighborhood along with the face region and foreground area of the scene. Secondly, the RGB-D videos are divided into slices and compressed via 3D-HEVC under different resolutions for selection by the audiences and applications. Afterwards, the reconstructed low-resolution videos from the 3D-HEVC encoder are directly up-sampled via Laplace transformation and used to replace the non-ROI areas of the high-resolution videos, as sketched below. Finally, the ROI multi-resolution compressed slices are obtained by compressing the ROI preprocessed videos with 3D-HEVC. The temporal and spatial details of the non-ROI areas are reduced in the low-resolution videos, so the ROI is better preserved by the encoder automatically. Experiments indicate that the proposed method can keep the key high-frequency information with subjective significance while the bit rate is reduced.
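
    A hedged sketch of the non-ROI substitution step only: outside a binary ROI mask, the high-resolution frame is replaced by an up-sampled low-resolution reconstruction, so the encoder spends its bits mainly on the ROI. Plain bilinear zoom stands in here for the paper's Laplace-based up-sampling, and the shapes and mask are assumptions.

      import numpy as np
      from scipy.ndimage import zoom

      def roi_multires_frame(hi, lo, roi_mask):
          # hi: HxW high-res frame; lo: (H/2)x(W/2) decoded low-res frame;
          # roi_mask: HxW bool, True inside the region of interest
          up = zoom(lo, 2, order=1)[: hi.shape[0], : hi.shape[1]]
          return np.where(roi_mask, hi, up)

      hi = np.random.rand(240, 320)
      lo = zoom(hi, 0.5, order=1)        # stand-in for the low-res 3D-HEVC output
      mask = np.zeros_like(hi, bool)
      mask[60:180, 80:240] = True        # hypothetical face/foreground ROI
      mixed = roi_multires_frame(hi, lo, mask)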

  5. The Basic Principles and Methods of the System Approach to Compression of Telemetry Data

    Science.gov (United States)

    Levenets, A. V.

    2018-01-01

    The task of compressing measurement data remains urgent for information-measurement systems. In this paper, the basic principles necessary for designing highly effective systems for the compression of telemetric information are offered. The basis of the offered principles is the representation of a telemetric frame as a whole information space in which existing correlations can be found. Methods of data transformation and compression algorithms realizing the offered principles are described. The compression ratio of the offered algorithm is about 1.8 times higher than that of a classic algorithm. The results of the research thus show that these methods and algorithms are promising.
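
    A toy illustration of the idea, not the paper's algorithm: treating the frame as one information space and removing the correlation between consecutive samples (here by per-channel delta encoding) before a generic entropy coder.

      import zlib
      import numpy as np

      rng = np.random.default_rng(1)
      frame = np.cumsum(rng.integers(-3, 4, (16, 512)), axis=1).astype(np.int16)

      raw = frame.tobytes()              # 16 slowly varying telemetry channels
      delta = np.diff(frame, axis=1, prepend=frame[:, :1]).astype(np.int16)
      print(len(zlib.compress(raw)), len(zlib.compress(delta.tobytes())))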

  6. Quality Factor and Radiation Efficiency of Dual-Mode Self-Resonant Spherical Antennas With Lossy Magnetodielectric Cores

    DEFF Research Database (Denmark)

    Hansen, Troels Vejle; Kim, Oleksiy S.; Breinbjerg, Olav

    2014-01-01

    For spherical antennas consisting of a solid magnetodielectric lossy core with an impressed surface current density exciting a superposition of the TE_mn and TM_mn spherical modes, we analytically determine the radiation quality factor Q and radiation efficiency e. Also, we...

  7. The Effects of Different Curing Methods on the Compressive Strength of Terracrete

    Directory of Open Access Journals (Sweden)

    O. Alake

    2009-01-01

    This research evaluated the effects of different curing methods on the compressive strength of terracrete. Several tests that included sieve analysis were carried out on the constituents of terracrete (granite and laterite) to determine their particle size distribution, and performance criteria tests were carried out to determine the compressive strength of terracrete cubes for 7 to 35 days of curing. Sand, foam-soaked, tank and open methods of curing were used, and the study was carried out under controlled temperature. Sixty 100 × 100 × 100 mm cubes were cast using a mix ratio of 1 part cement, 1½ parts laterite and 3 parts coarse aggregate (granite), proportioned by weight, and a water-cement ratio of 0.62. The results of the various compressive strengths of the cubes showed that, out of the four curing methods, the open method of curing was the best because the cubes gained the highest average compressive strength of 10.3 N/mm² by the 35th day.

  8. REMOTE SENSING IMAGE QUALITY ASSESSMENT EXPERIMENT WITH POST-PROCESSING

    Directory of Open Access Journals (Sweden)

    W. Jiang

    2018-04-01

    This paper briefly describes a post-processing influence assessment experiment that includes three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are tested, and the digital images serving as image-processing input are produced by this imaging system with the same imaging system parameters. The gathered optically sampled images with the tested imaging parameters are processed by three digital image processes: calibration pre-processing, lossy compression with different compression ratios, and image post-processing with different kernels. The image quality assessment method used is just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be found. The six JND subjective assessment experimental data sets can be validated against each other. Main conclusions include: image post-processing can improve image quality; it can do so even with lossy compression, although image quality improves less at a higher compression ratio than at a lower one; and, with our image post-processing method, image quality is better when the camera MTF lies within a small range.

  9. Image-Based Compression Method of Three-Dimensional Range Data with Texture

    OpenAIRE

    Chen, Xia; Bell, Tyler; Zhang, Song

    2017-01-01

    Recently, high-speed and high-accuracy three-dimensional (3D) scanning techniques and commercially available 3D scanning devices have made real-time 3D shape measurement and reconstruction possible. The conventional mesh representation of 3D geometry, however, results in large file sizes, causing difficulties for storage and transmission. Methods for compressing scanned 3D data are therefore desired. This paper proposes a novel compression method which stores 3D range data within the c...

  10. Technical note: New table look-up lossless compression method ...

    African Journals Online (AJOL)

    Technical note: New table look-up lossless compression method based on binary index archiving. ... International Journal of Engineering, Science and Technology ... This paper intends to present a common use archiver, made up following the dictionary technique and using the index archiving method as a simple and ...

  11. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio less than 1.72 bits/base.
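
    For orientation only, a baseline sketch of plain 2-bits-per-base packing; DNABIT Compress itself goes further by assigning unique bit codes to exact and reverse repeats, and that logic is not reproduced here.

      CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

      def pack(seq: str) -> bytes:
          # pack bases MSB-first, two bits per base, zero-padding the last byte
          out, acc, nbits = bytearray(), 0, 0
          for base in seq:
              acc = (acc << 2) | CODE[base]
              nbits += 2
              if nbits == 8:
                  out.append(acc)
                  acc, nbits = 0, 0
          if nbits:
              out.append(acc << (8 - nbits))
          return bytes(out)

      print(pack("ACGTACGT").hex())  # 8 bases -> 2 bytes: '1b1b'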

  12. A new method of on-line multiparameter amplitude analysis with compression

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    1996-01-01

    An algorithm of on-line multidimensional amplitude analysis with compression using a fast adaptive orthogonal transform is presented in the paper. The method is based on a direct modification of the multiplication coefficients of the signal flow graph of the fast Cooley-Tukey algorithm. The coefficients are modified according to a reference vector representing the processed data. The method has been tested by compressing three-parameter experimental nuclear data. The efficiency of the derived adaptive transform is compared with classical orthogonal transforms. (orig.)

  13. Methods of compression of digital holograms, based on 1-level wavelet transform

    International Nuclear Information System (INIS)

    Kurbatova, E A; Cheremkhin, P A; Evtikhiev, N N

    2016-01-01

    To reduce the size of the memory required for storing information about 3D scenes and to decrease the rate of hologram transmission, digital hologram compression can be used. Compression of digital holograms by wavelet transforms is among the most powerful methods. In the paper, the most popular wavelet transforms are considered and applied to digital hologram compression. The obtained values of reconstruction quality and the holograms' diffraction efficiencies are compared. (paper)
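
    A minimal sketch of 1-level wavelet compression on a stand-in hologram (the wavelet family and the kept fraction of coefficients are assumptions): only the largest detail coefficients survive before reconstruction.

      import numpy as np
      import pywt

      holo = np.random.rand(512, 512)             # stand-in digital hologram
      cA, (cH, cV, cD) = pywt.dwt2(holo, "db4")   # 1-level 2D wavelet transform

      def shrink(c, keep=0.1):
          # zero all but the largest 10% of coefficients (by magnitude)
          t = np.quantile(np.abs(c), 1 - keep)
          return np.where(np.abs(c) >= t, c, 0.0)

      rec = pywt.idwt2((cA, tuple(shrink(c) for c in (cH, cV, cD))), "db4")
      print(np.linalg.norm(rec - holo) / np.linalg.norm(holo))  # relative error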

  14. On the Use of Normalized Compression Distances for Image Similarity Detection

    Directory of Open Access Journals (Sweden)

    Dinu Coltuc

    2018-01-01

    This paper investigates the usefulness of the normalized compression distance (NCD) for image similarity detection. Instead of the direct NCD between images, the paper considers the correlation between NCD-based feature vectors extracted for each image. The vectors are derived by computing the NCD between the original image and sequences of translated (or rotated) versions. Feature vectors for simple transforms (circular translations in the horizontal, vertical and diagonal directions, and rotations around the image center) and several standard compressors are generated and tested in a very simple experiment of similarity detection between the original image and two filtered versions (median and moving average). The promising vector configurations (geometric transform, lossless compressor) are further tested for similarity detection on the 24 images of the Kodak set subjected to some common image processing. While the direct computation of the NCD fails to detect image similarity even in the case of simple median and moving average filtering in 3 × 3 windows, for certain transforms and compressors the proposed approach appears to provide robustness in similarity detection against smoothing, lossy compression, contrast enhancement and noise addition, and some robustness against geometrical transforms (scaling, cropping and rotation).
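
    The NCD in question has the standard form NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed size; a minimal sketch with zlib as the lossless compressor (byte strings stand in for image data):

      import zlib

      def csize(b: bytes) -> int:
          return len(zlib.compress(b, 9))

      def ncd(x: bytes, y: bytes) -> float:
          cx, cy = csize(x), csize(y)
          return (csize(x + y) - min(cx, cy)) / max(cx, cy)

      a = b"abracadabra" * 50
      b2 = b"abracadabra" * 49 + b"xyzzy xyzzy"
      c = bytes(range(256)) * 3
      print(ncd(a, b2))   # small: similar data
      print(ncd(a, c))    # near 1: dissimilar data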

  15. Coupled double-distribution-function lattice Boltzmann method for the compressible Navier-Stokes equations.

    Science.gov (United States)

    Li, Q; He, Y L; Wang, Y; Tao, W Q

    2007-11-01

    A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.

  16. New Methods of Stereo Encoding for FM Radio Broadcasting Based on Digital Technology

    Directory of Open Access Journals (Sweden)

    P. Stranak

    2007-12-01

    The article describes new methods of stereo encoding for FM radio broadcasting. Digital signal processing makes it possible to construct an encoder with properties that are not attainable using conventional analog solutions. The article describes the mathematical model of the encoder, on the basis of which a specific program code for a DSP was developed. The article further deals with a new method of composite clipping which does not introduce impurities into the output spectrum, and at the same time preserves high separation between the left and right audio channels. The application of the new method is useful mainly where there are unwanted signal overshoots at the input of the stereo encoder, e.g., in the case of signal transmission from the studio to the transmitter site through a route with psychoacoustic lossy compression of the data rate.

  17. Generalised synchronisation of spatiotemporal chaos using feedback control method and phase compression

    International Nuclear Information System (INIS)

    Xing-Yuan, Wang; Na, Zhang

    2010-01-01

    Coupled map lattices are taken as examples to study the synchronisation of spatiotemporal chaotic systems. First, generalised synchronisation of two coupled map lattices is realised through selecting an appropriate feedback function and an appropriate range of the feedback parameter. Based on this method, we use the phase compression method to extend the range of the parameter. Thus, we integrate the feedback control method with the phase compression method to implement generalised synchronisation and obtain an exact range of the feedback parameter. This technique is simple to implement in practice. Numerical simulations show the effectiveness and feasibility of the proposed scheme. (general)

  18. The Diagonal Compression Field Method using Circular Fans

    DEFF Research Database (Denmark)

    Hansen, Thomas

    2006-01-01

    is a modification of the traditional method, the modification consisting of the introduction of circular fan stress fields. To ensure proper behaviour for the service load, the cot θ value (where θ is the angle of the uniaxial concrete compression relative to the beam axis) chosen should not be too large...

  19. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    Science.gov (United States)

    Guo, Wei; Tse, Peter W.

    2013-01-01

    Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. The bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before transmitting to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing the vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, and in particular to select the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performances under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component yields a much smaller proportion of data samples to be retained for transmission and further reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect
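
    A sketch of the selection index only, with the EEMD itself elided behind a hypothetical callable (eemd_first_imf is a stand-in, not a real library function): the relative root-mean-square error is evaluated for candidate noise levels and the minimising level is kept.

      import numpy as np

      def rrmse(component, signal):
          # relative root-mean-square error of a component w.r.t. the raw signal
          return np.linalg.norm(component - signal) / np.linalg.norm(signal)

      def select_noise_level(signal, eemd_first_imf, levels=(0.05, 0.1, 0.2, 0.4)):
          # eemd_first_imf(signal, level) -> dominant IMF (hypothetical stand-in)
          scores = {lv: rrmse(eemd_first_imf(signal, lv), signal) for lv in levels}
          return min(scores, key=scores.get)

      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 2048)
      sig = np.sin(2 * np.pi * 160 * t) + 0.3 * rng.normal(size=t.size)
      dummy = lambda s, lv: s - lv * rng.normal(size=s.size)   # placeholder only
      print(select_noise_level(sig, dummy))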

  20. New patient-controlled abdominal compression method in radiography: radiation dose and image quality.

    Science.gov (United States)

    Piippo-Huotari, Oili; Norrman, Eva; Anderzén-Carlsson, Agneta; Geijer, Håkan

    2018-05-01

    The radiation dose to patients can be reduced by many methods, and one way is to use abdominal compression. In this study, the radiation dose and image quality for a new patient-controlled compression device were compared with conventional compression and compression in the prone position. The aim was to compare the radiation dose and image quality of patient-controlled compression with conventional and prone compression in general radiography. An experimental design with a quantitative approach was used. After obtaining the approval of the ethics committee, a consecutive sample of 48 patients was examined with the standard clinical urography protocol. The radiation doses were measured as dose-area product and analyzed with a paired t-test. The image quality was evaluated by visual grading analysis. Four radiologists evaluated each image individually by scoring nine criteria modified from the European quality criteria for diagnostic radiographic images. There was no significant difference in radiation dose or image quality between conventional and patient-controlled compression. The prone position resulted in both a higher dose and inferior image quality. Patient-controlled compression gave dose levels similar to conventional compression and lower than prone compression. Image quality was similar with both patient-controlled and conventional compression and was judged to be better than in the prone position.

  1. Statistics-Based Compression of Global Wind Fields

    KAUST Repository

    Jeong, Jaehong

    2017-02-07

    Wind has the potential to make a significant contribution to future energy resources. Locating the sources of this renewable energy on a global scale is however extremely challenging, given the difficulty to store very large data sets generated by modern computer models. We propose a statistical model that aims at reproducing the data-generating mechanism of an ensemble of runs via a Stochastic Generator (SG) of global annual wind data. We introduce an evolutionary spectrum approach with spatially varying parameters based on large-scale geographical descriptors such as altitude to better account for different regimes across the Earth's orography. We consider a multi-step conditional likelihood approach to estimate the parameters that explicitly accounts for nonstationary features while also balancing memory storage and distributed computation. We apply the proposed model to more than 18 million points of yearly global wind speed. The proposed SG requires orders of magnitude less storage for generating surrogate ensemble members from wind than does creating additional wind fields from the climate model, even if an effective lossy data compression algorithm is applied to the simulation output.

  2. Statistics-Based Compression of Global Wind Fields

    KAUST Repository

    Jeong, Jaehong; Castruccio, Stefano; Crippa, Paola; Genton, Marc G.

    2017-01-01

    Wind has the potential to make a significant contribution to future energy resources. Locating the sources of this renewable energy on a global scale is however extremely challenging, given the difficulty to store very large data sets generated by modern computer models. We propose a statistical model that aims at reproducing the data-generating mechanism of an ensemble of runs via a Stochastic Generator (SG) of global annual wind data. We introduce an evolutionary spectrum approach with spatially varying parameters based on large-scale geographical descriptors such as altitude to better account for different regimes across the Earth's orography. We consider a multi-step conditional likelihood approach to estimate the parameters that explicitly accounts for nonstationary features while also balancing memory storage and distributed computation. We apply the proposed model to more than 18 million points of yearly global wind speed. The proposed SG requires orders of magnitude less storage for generating surrogate ensemble members from wind than does creating additional wind fields from the climate model, even if an effective lossy data compression algorithm is applied to the simulation output.

  3. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio less than 1.72 bits/base. PMID:21383923

  4. Semi-implicit method for three-dimensional compressible MHD simulation

    International Nuclear Information System (INIS)

    Harned, D.S.; Kerner, W.

    1984-03-01

    A semi-implicit method for solving the full compressible MHD equations in three dimensions is presented. The method is unconditionally stable with respect to the fast compressional modes. The time step is instead limited by the slower shear Alfven motion. The computing time required for one time step is essentially the same as for explicit methods. Linear stability limits are derived and verified by three-dimensional tests on linear waves in slab geometry. (orig.)

  5. Efficient Time-Domain Ray-Tracing Technique for the Analysis of Ultra-Wideband Indoor Environments including Lossy Materials and Multiple Effects

    Directory of Open Access Journals (Sweden)

    F. Saez de Adana

    2009-01-01

    This paper presents an efficient application of the Time-Domain Uniform Theory of Diffraction (TD-UTD) for the analysis of Ultra-Wideband (UWB) mobile communications in indoor environments. The classical TD-UTD formulation is modified to include the contribution of lossy materials and multiple-ray interactions with the environment. The electromagnetic analysis is combined with a ray-tracing acceleration technique to treat realistic and complex environments. The validity of this method is tested with measurements performed inside the Polytechnic building of the University of Alcala and shows good performance of the model for the analysis of UWB propagation.

  6. Combustion engine variable compression ratio apparatus and method

    Science.gov (United States)

    Lawrence, Keith E [Peoria, IL]; Strawbridge, Bryan E [Dunlap, IL]; Dutart, Charles H [Washington, IL]

    2006-06-06

    An apparatus and method for varying a compression ratio of an engine having a block and a head mounted thereto. The apparatus and method includes a cylinder having a block portion and a head portion, a piston linearly movable in the block portion of the cylinder, a cylinder plug linearly movable in the head portion of the cylinder, and a valve located in the cylinder plug and operable to provide controlled fluid communication with the block portion of the cylinder.

  7. A microcontroller-based interface circuit for lossy capacitive sensors

    International Nuclear Information System (INIS)

    Reverter, Ferran; Casas, Òscar

    2010-01-01

    This paper introduces and analyses a low-cost microcontroller-based interface circuit for lossy capacitive sensors, i.e. sensors whose parasitic conductance (G x) is not negligible. Such a circuit relies on a previous circuit also proposed by the authors, in which the sensor is directly connected to a microcontroller without using either a signal conditioner or an analogue-to-digital converter in the signal path. The novel circuit uses the same hardware, but it performs an additional measurement and executes a new calibration technique. As a result, the sensitivity of the circuit to G x decreases significantly (by a factor higher than ten), but not completely, due to the input capacitances of the port pins of the microcontroller. Experimental results show a relative error in the capacitance measurement below 1% over the tested range of G x, which shows the effectiveness of the circuit

  8. About a method for compressing x-ray computed microtomography data

    Science.gov (United States)

    Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš

    2018-04-01

    The management of scientific data is of high importance, especially for experimental techniques that produce big data volumes. One such technique is x-ray computed tomography (CT), and its community has introduced advanced data formats which allow for better management of experimental data. Rather than the organization of the data and the associated meta-data, the main topic of this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy), studying images acquired from various types of samples. This study covers parallel-beam geometry, but it could easily be extended to a cone-beam one. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques and does so by applying it to experimental data. Beyond the methodology framework, this study presents and examines the use of JPEG-XR in combination with the HDF5 and TIFF formats, providing insights and strategies on data compression and image quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears as a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.
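
    JPEG-XR is not among h5py's built-in filters, so as a neutral stand-in the sketch below stores a projection stack as a chunked, losslessly compressed HDF5 dataset (one projection per chunk); the dataset path and shapes are assumptions.

      import numpy as np
      import h5py

      proj = (np.random.rand(180, 512, 512) * 65535).astype(np.uint16)
      with h5py.File("projections.h5", "w") as f:
          f.create_dataset("exchange/data", data=proj,
                           chunks=(1, 512, 512),    # one projection per chunk
                           compression="gzip", compression_opts=4)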

  9. Block selective redaction for minimizing loss during de-identification of burned in text in irreversibly compressed JPEG medical images.

    Science.gov (United States)

    Clunie, David A; Gebow, Dan

    2015-01-01

    Deidentification of medical images requires attention to both the header information and the pixel data itself, in which burned-in text may be present. If the pixel data to be deidentified is stored in a compressed form, traditionally it is decompressed, identifying text is redacted, and, if necessary, the pixel data are recompressed. Decompression without recompression may result in images of excessive or intractable size. Recompression with an irreversible scheme is undesirable because it may cause additional loss in the diagnostically relevant regions of the images. The irreversible (lossy) JPEG compression scheme works on small blocks of the image independently; hence, redaction can selectively be confined only to those blocks containing identifying text, leaving all other blocks unchanged. An open source implementation of selective redaction and a demonstration of its applicability to multiframe color ultrasound images are described.
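
    The paper redacts blocks within the compressed bitstream itself; as a simplified illustration of the block-selective idea only, this sketch blanks just the 8 x 8 blocks that intersect a burned-in text region, leaving every other block untouched (the coordinates are hypothetical).

      import numpy as np

      def redact_blocks(img, text_box, block=8):
          # text_box = (x0, y0, x1, y1) in pixel coordinates
          x0, y0, x1, y1 = text_box
          out = img.copy()
          bx0, by0 = (x0 // block) * block, (y0 // block) * block
          bx1 = -(-x1 // block) * block       # round up to a block boundary
          by1 = -(-y1 // block) * block
          out[by0:by1, bx0:bx1] = 0           # black out whole blocks only
          return out

      frame = np.full((64, 128), 200, np.uint8)
      clean = redact_blocks(frame, text_box=(10, 5, 90, 20))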

  10. A method of vehicle license plate recognition based on PCANet and compressive sensing

    Science.gov (United States)

    Ye, Xianyi; Min, Feng

    2018-03-01

    The manually extracted features of traditional methods for vehicle license plate recognition are not robust to diverse changes, and the high dimension of the features extracted with the Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from the images of characters. Then, a sparse measurement matrix, which is a very sparse matrix consistent with the Restricted Isometry Property (RIP) condition of compressed sensing, is used to reduce the dimensions of the extracted features. Finally, a Support Vector Machine (SVM) is used to train on and recognize the features whose dimension has been reduced. Experimental results demonstrate that the proposed method has better performance than a Convolutional Neural Network (CNN) in both recognition and time. Compared with omitting compressive sensing, the proposed method has a lower feature dimension, which increases efficiency.
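
    A hedged sketch of the reduce-then-classify stage: random vectors stand in for PCANet features, a sparse random projection plays the role of the RIP-style sparse measurement matrix, and a linear SVM does the recognition (all sizes are assumptions).

      import numpy as np
      from sklearn.random_projection import SparseRandomProjection
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 4096))      # stand-in for PCANet features
      y = rng.integers(0, 36, size=500)     # 36 plate-character classes

      proj = SparseRandomProjection(n_components=256, random_state=0)
      X_small = proj.fit_transform(X)       # 4096 -> 256 dimensions

      clf = SVC(kernel="linear").fit(X_small, y)
      print(clf.predict(proj.transform(rng.normal(size=(1, 4096)))))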

  11. Lossy and retardation effects on the localization of EM waves using a left-handed medium slab

    International Nuclear Information System (INIS)

    Cheng Qiang; Cui Tiejun; Lu Weibing

    2005-01-01

    It has been shown that a left-handed medium (LHM) slab with negative permittivity -ε0 and negative permeability -μ0 can be used to localize electromagnetic waves [T.J. Cui et al., Phys. Rev. B (January 2005)]. If two current sources with the same amplitudes and opposite directions are placed at the perfect-imaging points of the LHM slab, we have shown that all electromagnetic waves are completely confined in the region between the two sources. In this Letter, a slightly mismatched and lossy LHM lens is studied, where both the relative permittivity and permeability are slightly different from -1, and the lossy and retardation effects on the electromagnetic-wave localization are investigated. Due to the loss and retardation, strong surface waves exist along the slab surfaces. When two current sources are located at the perfect-imaging points symmetrically, we show that electromagnetic waves are nearly confined in the region between the two sources and little energy is radiated outside if the retardation and loss are small. When the loss becomes larger, more energy will flow out of the region. Numerical experiments are given to illustrate the above conclusions

  12. Sink-to-Sink Coordination Framework Using RPL: Routing Protocol for Low Power and Lossy Networks

    Directory of Open Access Journals (Sweden)

    Meer M. Khan

    2016-01-01

    RPL (Routing Protocol for Low Power and Lossy Networks) is recommended by the Internet Engineering Task Force (IETF) for IPv6-based LLNs (Low Power and Lossy Networks). RPL uses a proactive routing approach, and each node always maintains an active path to the sink node. Sink-to-sink coordination defines the syntax and semantics for the exchange of network-defined parameters among sink nodes, such as network size, traffic load, mobility of a sink, and so forth. The coordination allows a sink to learn about the network conditions of neighboring sinks. As a result, sinks can make coordinated decisions to increase or decrease their network size for optimizing overall network performance in terms of load sharing, increasing network lifetime, and lowering the end-to-end latency of communication. Currently, RPL does not provide any coordination framework that defines message exchange between different sink nodes for enhancing network performance. In this paper, a sink-to-sink coordination framework is proposed which utilizes the periodic route maintenance messages issued by RPL to exchange the network status observed at a sink with its neighboring sinks. The proposed framework distributes network load among sink nodes to achieve higher throughputs and longer network lifetime.

  13. Nonstop lose-less data acquisition and storing method for plasma motion images

    International Nuclear Information System (INIS)

    Nakanishi, Hideya; Ohsuna, Masaki; Kojima, Mamoru; Nonomura, Miki; Nagayama, Yoshio; Kawahata, Kazuo; Imazu, Setsuo; Okumura, Haruhiko

    2007-01-01

    Plasma diagnostic data analysis often requires the original raw data as they are, in other words, at the same frame rate and resolution as the CCD camera sensor. As a non-interlaced VGA camera typically generates a video stream of over 70 MB/s, usual frame grabber cards apply a lossy compression encoder, such as mpeg-1/-2 or mpeg-4, to drastically lessen the bit rate. In this study, a new approach, which makes it possible to acquire and store such a wideband video stream without any quality reduction, has been successfully achieved. Simultaneously, real-time video streaming is even possible at the original frame rate. To minimise the exclusive access time in every data store, a directory structure that holds every frame as a separate file was adopted, instead of one long consecutive file. The popular 'zip' archive format improves the portability of the data files; however, JPEG-LS image compression is applied inside, replacing zip's intrinsic deflate/inflate algorithm, which has lower performance for image data. (author)
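
    A sketch of that storage layout under stated assumptions: one already-encoded frame per file inside a zip container, stored without a second deflate pass (the JPEG-LS payloads here are random stand-in bytes).

      import io
      import zipfile
      import numpy as np

      frames = [np.random.bytes(1024) for _ in range(5)]   # stand-in JPEG-LS frames
      buf = io.BytesIO()
      with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_STORED) as z:
          for i, payload in enumerate(frames):
              z.writestr(f"frames/{i:06d}.jls", payload)   # one file per frame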

  14. An Improved Ghost-cell Immersed Boundary Method for Compressible Inviscid Flow Simulations

    KAUST Repository

    Chi, Cheng

    2015-05-01

    This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to the commonly used approaches, in the present work ghost cells are mirrored through the boundary described using a level-set method to farther image points, incorporating a higher-order extra/interpolation scheme for the ghost cell values. In addition, a shock sensor is introduced to deal with image points near the discontinuities in the flow field. Adaptive mesh refinement (AMR) is used to improve the representation of the geometry efficiently. The improved ghost-cell method is validated against five test cases: (a) double Mach reflections on a ramp, (b) supersonic flows in a wind tunnel with a forward-facing step, (c) supersonic flows over a circular cylinder, (d) smooth Prandtl-Meyer expansion flows, and (e) steady shock-induced combustion over a wedge. It is demonstrated that the improved ghost-cell method can reach the accuracy of second order in L1 norm and higher than first order in L∞ norm. Direct comparisons against the cut-cell method demonstrate that the improved ghost-cell method is almost equally accurate with better efficiency for boundary representation in high-fidelity compressible flow simulations. Implementation of the improved ghost-cell method in reacting Euler flows further validates its general applicability for compressible flow simulations.

  15. A GPU-accelerated implicit meshless method for compressible flows

    Science.gov (United States)

    Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng

    2018-05-01

    This paper develops a recently proposed GPU-based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-racing conditions that destabilize the numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions that apply boundary conditions, calculate time steps, evaluate residuals, and advance and update the solution in the temporal space. A series of two- and three-dimensional test cases, including compressible flows over single- and multi-element airfoils and an M6 wing, are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis of the performance of the developed code reveals that the developed CPU-based implicit meshless method is at least four to eight times faster than its explicit counterpart. The computational efficiency of the implicit method could be further improved by ten to fifteen times on the GPU.
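
    A minimal sketch of the rainbow-coloring idea (a greedy graph coloring over a toy point connectivity): points sharing a color have no direct data dependency, so each color group can be swept in parallel within one LU-SGS pass.

      def rainbow_coloring(neighbors):
          # neighbors: list of neighbor-index lists, one per computational point
          color = [-1] * len(neighbors)
          for p, nbrs in enumerate(neighbors):
              used = {color[q] for q in nbrs if color[q] >= 0}
              c = 0
              while c in used:
                  c += 1
              color[p] = c
          return color

      chain = [[1], [0, 2], [1, 3], [2, 4], [3, 5], [4]]   # 1D chain of 6 points
      print(rainbow_coloring(chain))                       # [0, 1, 0, 1, 0, 1]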

  16. Participative management of the western lowland gorilla (Gorilla gorilla gorilla) sanctuary of Lossi in the Republic of Congo-Brazzaville: first analysis of results and constraints

    Directory of Open Access Journals (Sweden)

    Mbété, RA.

    2007-01-01

    Participative Management of the Sanctuary of Western Lowland Gorillas (Gorilla gorilla gorilla) of Lossi in the Republic of Congo-Brazzaville: Preliminary Results and Constraints Analysis. The gorilla sanctuary of Lossi experiments with the synergy between scientific research and nature conservation. Three partners are involved in a participative management process: the Republic of Congo, the local community of Lossi, and the European programme on the forest ecosystems in Central Africa. An investigation was carried out in the sanctuary of Lossi in 2003 in order to study in situ the effects generated by the participative management and to identify the constraints linked to the participative approach. The work of primatologists allowed the habituation of the gorillas to human presence and opened up viewing tourism of western lowland gorillas. A camp for tourists and the access road to the sanctuary have been constructed. The tourism generated jobs for the local population, which also benefits from road-repair contracts. The income from the tourism allowed the construction of a health centre. However, the work of researchers and the tourism activities were interrupted during the outbreaks of the Ebola hemorrhagic fever and during the three civil war episodes. The consolidation and long-term viability of this process of co-management of the natural resources of Lossi depend on the establishment of a management scheme that includes conservation, rural development and scientific research, with equity in the distribution of gains between the partners.

  17. Numerical simulation of compressible two-phase flow using a diffuse interface method

    International Nuclear Information System (INIS)

    Ansari, M.R.; Daramizadeh, A.

    2013-01-01

    Highlights: ► Compressible two-phase gas–gas and gas–liquid flow simulations are conducted. ► Interface conditions contain shock waves and cavitation. ► A high-resolution diffuse interface method is investigated. ► The numerical results exhibit very good agreement with experimental results. -- Abstract: In this article, a high-resolution diffuse interface method is investigated for the simulation of compressible two-phase gas–gas and gas–liquid flows, both in the presence of shock waves and in flows with strong rarefaction waves similar to cavitation. A Godunov method and the HLLC Riemann solver are used for the discretization of the Kapila five-equation model, and a modified Schmidt equation of state (EOS) is used to simulate the cavitation regions. This method is applied successfully to some one- and two-dimensional compressible two-phase flows with interface conditions that contain shock waves and cavitation. The numerical results obtained in this attempt exhibit very good agreement with experimental results, as well as previous numerical results presented by other researchers based on other numerical methods. In particular, the algorithm can capture the complex flow features of transient shocks, such as the material discontinuities and interfacial instabilities, without any oscillation and additional diffusion. Numerical examples show that the results of the method presented here compare well with other sophisticated modeling methods like adaptive mesh refinement (AMR) and local mesh refinement (LMR) for one- and two-dimensional problems

  18. METHOD AND APPARATUS FOR INSPECTION OF COMPRESSED DATA PACKAGES

    DEFF Research Database (Denmark)

    2008-01-01

    to be transferred over the data network. The method comprises the steps of: a) extracting payload data from the payload part of the package, b) appending the extracted payload data to a stream of data, c) probing the data package header so as to determine the compression scheme that is applied to the payload data...

  19. Methods for Sampling and Measurement of Compressed Air Contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Stroem, L

    1976-10-15

    In order to improve the technique for measuring oil and water entrained in a compressed air stream, a laboratory study has been made of some methods for sampling and measurement. For this purpose, water or oil as artificial contaminants were injected in thin streams into a test loop carrying dry compressed air. Sampling was performed in a vertical run, downstream of the injection point. Wall-attached liquid, coarse droplet flow, and fine droplet flow were sampled separately. The results were compared with two-phase flow theory and direct observation of liquid behaviour. In a study of sample transport through narrow tubes, it was observed that, below a certain liquid loading, the sample did not move, the liquid remaining stationary on the tubing wall. The basic analysis of the collected samples was made by gravimetric methods. Adsorption tubes were used with success to measure water vapour. A humidity meter with a sensor of the aluminium oxide type was found to be unreliable. Oil could be measured selectively by a flame ionization detector, the sample being pretreated in an evaporation-condensation unit

  20. Methods for Sampling and Measurement of Compressed Air Contaminants

    International Nuclear Information System (INIS)

    Stroem, L.

    1976-10-01

    In order to improve the technique for measuring oil and water entrained in a compressed air stream, a laboratory study has been made of some methods for sampling and measurement. For this purpose, water or oil as artificial contaminants were injected in thin streams into a test loop carrying dry compressed air. Sampling was performed in a vertical run, downstream of the injection point. Wall-attached liquid, coarse droplet flow, and fine droplet flow were sampled separately. The results were compared with two-phase flow theory and direct observation of liquid behaviour. In a study of sample transport through narrow tubes, it was observed that, below a certain liquid loading, the sample did not move, the liquid remaining stationary on the tubing wall. The basic analysis of the collected samples was made by gravimetric methods. Adsorption tubes were used with success to measure water vapour. A humidity meter with a sensor of the aluminium oxide type was found to be unreliable. Oil could be measured selectively by a flame ionization detector, the sample being pretreated in an evaporation-condensation unit

  1. Methods for determining the carrying capacity of eccentrically compressed concrete elements

    Directory of Open Access Journals (Sweden)

    Starishko Ivan Nikolaevich

    2014-04-01

    The author presents the results of calculations of eccentrically compressed elements in the ultimate limit state of bearing capacity, taking into account all possible stresses in the longitudinal reinforcement, from the design tensile strength to the design compressive strength, caused by different values of the eccentricity of the longitudinal force. The method of calculation is based on the simultaneous solution of the equilibrium equations of the longitudinal forces and internal forces with the equilibrium equations of bending moments in the ultimate limit state of the normal sections. Simultaneous solution of these equations, as well as of additional equations reflecting the stress-strain limit state of the elements, leads to the solution of a cubic equation with respect to the height of the uncracked concrete zone, or with respect to the carrying capacity. According to the author, this is a significant advantage over the existing methods, in which the equilibrium equations of longitudinal forces yield one value of the height and the equilibrium equations of bending moments another. The theoretical studies of the author in this article and the calculated examples showed that, with a decrease in the eccentricity of the longitudinal force in the limiting state of eccentrically compressed concrete elements, the height of the uncracked concrete zone increases, the stress in the longitudinal reinforcement gradually (not abruptly) passes from tension to compression, and the load-bearing capacity of the elements increases, which is also confirmed by the experimental results. The developed calculations of eccentrically compressed elements cover 4 cases of eccentric compression, instead of the 2 set out in the regulations, and thus fully cover the entire spectrum of possible stress-strain limit states of the elements, complying with the European standards for reinforced concrete, in particular Eurocode 2 (2003).

  2. Enhanced inertia from lossy effective fluids using multi-scale sonic crystals

    Directory of Open Access Journals (Sweden)

    Matthew D. Guild

    2014-12-01

    In this work, a recently theoretically predicted phenomenon of enhanced permittivity with electromagnetic waves using lossy materials is investigated for the analogous case of mass density and acoustic waves, which represents inertial enhancement. Starting from fundamental relationships for the homogenized quasi-static effective density of a fluid host with fluid inclusions, theoretical expressions are developed for the conditions on the real and imaginary parts of the constitutive fluids required for inertial enhancement, which are verified with numerical simulations. Realizable structures are designed to demonstrate this phenomenon using multi-scale sonic crystals, which are fabricated using a 3D printer and tested in an acoustic impedance tube, yielding good agreement with the theoretical predictions and demonstrating enhanced inertia.

  3. Coherent control of long-distance steady-state entanglement in lossy resonator arrays

    Science.gov (United States)

    Angelakis, D. G.; Dai, L.; Kwek, L. C.

    2010-07-01

    We show that coherent control of the steady-state long-distance entanglement between pairs of cavity-atom systems in an array of lossy and driven coupled resonators is possible. The cavities are doped with atoms and are connected through waveguides, other cavities or fibers, depending on the implementation. We find that the steady-state entanglement can be coherently controlled through the tuning of the phase difference between the driving fields. It can also be surprisingly high in spite of the pumps being classical fields. For some implementations where the connecting element can be a fiber, long-distance steady-state quantum correlations can be established. Furthermore, the maximum entanglement for any pair is achieved when their corresponding direct coupling is much smaller than their individual couplings to the third party. This effect is reminiscent of the establishment of coherence between otherwise uncoupled atomic levels using classical coherent fields. We suggest a method to measure this entanglement by analyzing the correlations of the photons emitted from the array, and we also analyze the above results for a range of values of the system parameters, different network geometries and possible implementation technologies.

  4. Biometric and Emotion Identification: An ECG Compression Based Method.

    Science.gov (United States)

    Brás, Susana; Ferreira, Jacqueline H T; Soares, Sandra C; Pinho, Armando J

    2018-01-01

    We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles, indirectly representing the flow of blood inside the heart, and it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, allowed us to effectively compare ECG records and infer the person's identity, as well as the emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record's class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. Therefore, the method adequately identifies the person and his/her emotion. Also, the proposed method is flexible and may be adapted to different problems by the alteration of the templates for training the model.
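
    A minimal sketch of the three steps (quantise, conditionally compress against stored references, 1-NN classify), approximating conditional compression by compressing the concatenation with zlib; the quantiser and the synthetic records are stand-ins, not the paper's exact model.

      import zlib
      import numpy as np

      def symbolize(ecg, n_levels=16):
          # quantise a real-valued record into a byte string of symbols
          bins = np.quantile(ecg, np.linspace(0, 1, n_levels + 1)[1:-1])
          return bytes(np.digitize(ecg, bins).astype(np.uint8))

      def cond_size(x: bytes, ref: bytes) -> int:
          # approximate C(x | ref) as C(ref + x) - C(ref)
          return len(zlib.compress(ref + x, 9)) - len(zlib.compress(ref, 9))

      def classify(ecg, references):     # references: {label: symbolic record}
          s = symbolize(ecg)
          return min(references, key=lambda lab: cond_size(s, references[lab]))

      rng = np.random.default_rng(0)
      t = np.linspace(0, 60, 6000)
      db = {"alice": symbolize(np.sin(t) + 0.05 * rng.normal(size=t.size)),
            "bob": symbolize(rng.normal(size=t.size))}
      probe = np.sin(t) + 0.05 * rng.normal(size=t.size)
      print(classify(probe, db))         # -> alice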

  5. A Finite Element Method for Simulation of Compressible Cavitating Flows

    Science.gov (United States)

    Shams, Ehsan; Yang, Fan; Zhang, Yu; Sahni, Onkar; Shephard, Mark; Oberai, Assad

    2016-11-01

    This work focuses on a novel approach for finite element simulations of multi-phase flows which involve an evolving interface with phase change. Modeling problems such as cavitation requires addressing multiple challenges, including the compressibility of the vapor phase and the interface physics caused by mass, momentum and energy fluxes. We have developed a mathematically consistent and robust computational approach to address these problems. We use stabilized finite element methods on unstructured meshes to solve the compressible Navier-Stokes equations. An arbitrary Lagrangian-Eulerian formulation is used to handle the interface motion. Our method uses a mesh adaptation strategy to preserve the quality of the volumetric mesh, while the interface mesh moves along with the interface. The interface jump conditions are accurately represented using a discontinuous Galerkin method on the conservation laws. Condensation and evaporation rates at the interface are modeled thermodynamically to determine the interface velocity. We will present initial results on bubble cavitation and the behavior of an attached cavitation zone in a separated boundary layer. We acknowledge the support from the Army Research Office (ARO) under ARO Grant W911NF-14-1-0301.

  6. Comparative data compression techniques and multi-compression results

    International Nuclear Information System (INIS)

    Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H

    2013-01-01

    Data compression is very necessary in business data processing because of the cost savings that it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data size, the better the transmission speed and the more time is saved. In this communication, we always want to transmit data efficiently and noise-free. This paper provides some compression techniques for lossless text-type data compression and comparative results of multiple and single compression, which will help to find better compression output and to develop compression algorithms
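
    A quick comparison in the spirit of the paper: several lossless compressors on the same text, plus a check of whether compressing twice ("multi-compression") helps; typically it does not, since the first pass removes most of the redundancy.

      import bz2
      import lzma
      import zlib

      text = b"the quick brown fox jumps over the lazy dog. " * 400
      for name, comp in [("zlib", zlib.compress), ("bz2", bz2.compress),
                         ("lzma", lzma.compress)]:
          once = comp(text)
          twice = comp(once)
          print(f"{name}: raw={len(text)} once={len(once)} twice={len(twice)}")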

  7. AN ENCODING METHOD FOR COMPRESSING GEOGRAPHICAL COORDINATES IN 3D SPACE

    Directory of Open Access Journals (Sweden)

    C. Qian

    2017-09-01

    This paper proposes an encoding method for compressing geographical coordinates in 3D space. By reducing the length of geographical coordinates, it helps to lessen the storage size of geometry information. In addition, the encoding algorithm subdivides the whole space according to octree rules, which enables progressive transmission and loading. Three main steps are included in this method: (1) subdividing the whole 3D geographic space based on an octree structure, (2) resampling all the vertices in the 3D models, (3) encoding the coordinates of the vertices with a combination of a Cube Index Code (CIC) and a Geometry Code. A series of geographical 3D models were applied to evaluate the encoding method. The results showed that this method reduced the storage size of most test data by 90% or even more while keeping an acceptable speed of encoding and decoding. In conclusion, this method achieved a remarkable compression rate in vertex bit size with a steerable precision loss. It shall be of positive significance for web 3D map storage and transmission.
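
    An illustrative sketch of the cube-index idea: an octree subdivision of a bounding box in which each level contributes 3 bits (one octant choice), so a vertex is addressed by a short integer code instead of three full-length coordinates; the depth and bounding box are assumptions, not the paper's exact layout.

      def cube_index(x, y, z, bbox, depth=8):
          # returns a 3*depth-bit octree code locating (x, y, z) inside bbox
          (x0, y0, z0), (x1, y1, z1) = bbox
          code = 0
          for _ in range(depth):
              mx, my, mz = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
              octant = (x >= mx) | ((y >= my) << 1) | ((z >= mz) << 2)
              code = (code << 3) | octant
              x0, x1 = (mx, x1) if x >= mx else (x0, mx)
              y0, y1 = (my, y1) if y >= my else (y0, my)
              z0, z1 = (mz, z1) if z >= mz else (z0, mz)
          return code

      print(cube_index(116.39, 39.91, 43.5, ((-180, -90, -500), (180, 90, 9000))))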

  8. Lagrangian particle method for compressible fluid dynamics

    Science.gov (United States)

    Samulyak, Roman; Wang, Xingyu; Chen, Hsin-Chiang

    2018-06-01

    A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented as well as examples of complex multiphase flows.

  9. A blended pressure/density based method for the computation of incompressible and compressible flows

    International Nuclear Information System (INIS)

    Rossow, C.-C.

    2003-01-01

    An alternative method to low speed preconditioning for the computation of nearly incompressible flows with compressible methods is developed. For this approach the leading terms of the flux difference splitting (FDS) approximate Riemann solver are analyzed in the incompressible limit. In combination with the requirement of the velocity field to be divergence-free, an elliptic equation to solve for a pressure correction to enforce the divergence-free velocity field on the discrete level is derived. The pressure correction equation established is shown to be equivalent to classical methods for incompressible flows. In order to allow the computation of flows at all speeds, a blending technique for the transition from the incompressible, pressure based formulation to the compressible, density based formulation is established. It is found necessary to use preconditioning with this blending technique to account for a remaining 'compressible' contribution in the incompressible limit, and a suitable matrix directly applicable to conservative residuals is derived. Thus, a coherent framework is established to cover the discretization of both incompressible and compressible flows. Compared with standard preconditioning techniques, the blended pressure/density based approach showed improved robustness for high lift flows close to separation
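
    For orientation, the classical projection-style pressure-correction step that the abstract's elliptic equation is shown to be equivalent to, sketched with Jacobi sweeps on a uniform collocated grid (this is the textbook scheme, not the paper's FDS-derived formulation):

    ```python
    import numpy as np

    def pressure_correction(u, v, dx, dt, rho=1.0, iters=200):
        """Solve div(grad p) = rho * div(u*) / dt on interior nodes and
        correct the velocity so the discrete field becomes divergence-free."""
        div = ((u[2:, 1:-1] - u[:-2, 1:-1])
               + (v[1:-1, 2:] - v[1:-1, :-2])) / (2 * dx)
        p = np.zeros_like(u)
        for _ in range(iters):                       # Jacobi sweeps
            p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                                    + p[1:-1, 2:] + p[1:-1, :-2]
                                    - dx * dx * rho * div / dt)
        u[1:-1, 1:-1] -= dt / rho * (p[2:, 1:-1] - p[:-2, 1:-1]) / (2 * dx)
        v[1:-1, 1:-1] -= dt / rho * (p[1:-1, 2:] - p[1:-1, :-2]) / (2 * dx)
        return u, v, p
    ```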

  10. Contextual Compression of Large-Scale Wind Turbine Array Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Potter, Kristin C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Clyne, John [National Center for Atmospheric Research (NCAR)]

    2017-12-04

    Data sizes are becoming a critical issue particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while providing the user control over where data loss, and thus reduction in accuracy, in the analysis occurs. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.
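
    A simplified stand-in for such a model, assuming PyWavelets is available: one global 2-D transform, hard thresholding of the smallest coefficients, and lossless retention of a user-marked salient region (the actual model operates block-by-block on coefficient blocks):

    ```python
    import numpy as np
    import pywt

    def contextual_compress(field, salient_mask, wavelet="db2", level=3, keep=0.02):
        """Keep only the largest `keep` fraction of wavelet coefficients,
        then restore the salient region losslessly from the original data."""
        coeffs = pywt.wavedec2(field, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1.0 - keep)
        arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
        approx = pywt.waverec2(
            pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
        approx = approx[:field.shape[0], :field.shape[1]]   # crop transform padding
        return np.where(salient_mask, field, approx)
    ```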

  11. Technical Note: Validation of two methods to determine contact area between breast and compression paddle in mammography

    NARCIS (Netherlands)

    Branderhorst, Woutjan; de Groot, Jerry E.; van Lier, Monique G. J. T. B.; Highnam, Ralph P.; den Heeten, Gerard J.; Grimbergen, Cornelis A.

    2017-01-01

    Purpose: To assess the accuracy of two methods of determining the contact area between the compression paddle and the breast in mammography. An accurate method to determine the contact area is essential to accurately calculate the average compression pressure applied by the paddle. Methods: For a

  12. Applicability of finite element method to collapse analysis of steel connection under compression

    International Nuclear Information System (INIS)

    Zhou, Zhiguang; Nishida, Akemi; Kuwamura, Hitoshi

    2010-01-01

    It is often necessary to study the collapse behavior of steel connections. In this study, the limit load of a steel pyramid-to-tube socket connection subjected to uniform compression was investigated by means of FEM and experiment. The steel connection was modeled using 4-node shell elements. Three kinds of analysis were conducted: linear buckling, nonlinear buckling, and modified Riks method analysis. For the linear buckling analysis, a linear eigenvalue analysis was performed. For the nonlinear buckling analysis, an eigenvalue analysis was performed for the buckling load in a nonlinear manner based on the incremental stiffness matrices, accounting for nonlinear material properties and large displacements. For the modified Riks method analysis, the compressive load was applied using the modified Riks method, again accounting for nonlinear material properties and large displacements. The results of the FEM analyses were compared with the experimental results. They show that the nonlinear buckling and modified Riks method analyses are more accurate than linear buckling analysis because they employ nonlinear, large-deflection analysis to estimate buckling loads. Moreover, the limit loads calculated from the nonlinear buckling and modified Riks method analyses are close. It can be concluded that modified Riks method analysis is more effective for collapse analysis of steel connections under compression. Finally, modified Riks method analysis was used for parametric studies of the thickness of the pyramid. (author)

  13. Fast reversible wavelet image compressor

    Science.gov (United States)

    Kim, HyungJun; Li, Ching-Chung

    1996-10-01

    We present a unified image compressor with spline biorthogonal wavelets and dyadic rational filter coefficients which gives high computational speed and excellent compression performance. Convolutions with these filters can be performed using only arithmetic shifting and addition operations. Wavelet coefficients can be encoded with an arithmetic coder which also uses arithmetic shifting and addition operations. Therefore, from beginning to end, the whole encoding/decoding process can be done within a short period of time. The proposed method naturally extends from lossless compression to the lossy but high compression range and can be easily adapted to progressive reconstruction.
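
    The integer 5/3 (LeGall) lifting step is a classic example of a reversible wavelet whose dyadic rational filters need only shifts and adds, in the spirit of the filters described above; this sketch assumes an even-length signal and simple boundary mirroring:

    ```python
    def legall53_forward(x):
        """One lifting level: returns (smooth, detail) using only adds and
        arithmetic shifts; exactly invertible with the mirrored boundaries."""
        s, d = list(x[0::2]), list(x[1::2])   # even / odd samples
        n = len(d)
        # predict: detail_i = odd_i - floor((even_i + even_{i+1}) / 2)
        d = [d[i] - ((s[i] + s[min(i + 1, n - 1)]) >> 1) for i in range(n)]
        # update: smooth_i = even_i + floor((d_{i-1} + d_i + 2) / 4)
        s = [s[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
        return s, d

    print(legall53_forward([10, 12, 14, 13, 11, 9, 8, 8]))
    ```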

  14. Intelligent condition monitoring method for bearing faults from highly compressed measurements using sparse over-complete features

    Science.gov (United States)

    Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.

    2018-01-01

    Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a roller element bearing. Of these studies, methods based on compressed sensing (CS) have received some attention recently due to their ability to allow one to sample below the Nyquist sampling rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse autoencoder based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on sparse autoencoders is used for learning over-complete sparse representations of these compressed datasets. Finally, the fault classification is achieved in two stages: pre-training classification based on the stacked autoencoder and a softmax regression layer forms the deep-net stage (the first stage), and re-training classification based on the backpropagation (BP) algorithm forms the fine-tuning stage (the second stage). The experimental results show that the proposed method is able to achieve high levels of accuracy even with extremely compressed measurements compared with the existing techniques.
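
    A minimal sketch of the compression stage, assuming a random Gaussian sensing matrix; the bearing dataset, the compression ratio, and the downstream sparse-autoencoder pipeline are placeholders:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n, m = 1024, 64                        # frame length, compressed length (6.25%)
    x = rng.standard_normal(n)             # stand-in for one vibration frame

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix
    y = Phi @ x                            # measurements fed to the autoencoder
    print(y.shape)
    ```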

  15. Biometric and Emotion Identification: An ECG Compression Based Method

    Directory of Open Access Journals (Sweden)

    Susana Brás

    2018-04-01

    Full Text Available We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles, indirectly representing the flow of blood inside the heart; it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, makes it possible to effectively compare ECG records and infer the person's identity, as well as the emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time-series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. Therefore, the method adequately identifies the person and his/her emotion. Also, the proposed method is flexible and may be adapted to different problems by altering the templates used for training the model.

  16. Biometric and Emotion Identification: An ECG Compression Based Method

    Science.gov (United States)

    Brás, Susana; Ferreira, Jacqueline H. T.; Soares, Sandra C.; Pinho, Armando J.

    2018-01-01

    We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles, indirectly representing the flow of blood inside the heart; it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, makes it possible to effectively compare ECG records and infer the person's identity, as well as the emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time-series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. Therefore, the method adequately identifies the person and his/her emotion. Also, the proposed method is flexible and may be adapted to different problems by altering the templates used for training the model. PMID:29670564
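
    A compact stand-in for the compression-based classifier, using the normalized compression distance with zlib instead of the authors' conditional-compression model (an assumption for illustration only):

    ```python
    import zlib

    def ncd(a: bytes, b: bytes) -> float:
        """Normalized compression distance between two symbolic records."""
        ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
        return (len(zlib.compress(a + b)) - min(ca, cb)) / max(ca, cb)

    def classify(query: bytes, references: dict) -> str:
        """1-NN over compression distance; references maps label -> record."""
        return min(references, key=lambda label: ncd(query, references[label]))

    refs = {"person_A": b"aabbbabba" * 50, "person_B": b"abcabcabc" * 50}
    print(classify(b"aabbaabba" * 50, refs))
    ```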

  17. The combined effect of side-coupled gain cavity and lossy cavity on the plasmonic response of metal-dielectric-metal surface plasmon polariton waveguide

    International Nuclear Information System (INIS)

    Zhu, Qiong-gan; Wang, Zhi-guo; Tan, Wei

    2014-01-01

    The combined effect of a side-coupled gain cavity and lossy cavity on the plasmonic response of a metal-dielectric-metal (MDM) surface plasmon polariton (SPP) waveguide is investigated theoretically using Green's function method. Our result suggests that the gain and loss parameters influence the amplitude and phase of the fields localized in the two cavities. For the case of balanced gain and loss, the fields of the two cavities are always of equal amplitude but out of phase. A plasmon induced transparency (PIT)-like transmission peak can be achieved by the destructive interference of two fields with anti-phase. For the case of unbalanced gain and loss, some unexpected responses of the structure are generated. When the gain is more than the loss, the system response is dissipative around the resonant frequency of the two cavities, where the sum of reflectance and transmittance becomes less than one. This is because the lossy cavity, with a stronger localized field, makes the main contribution to the system response. When the gain is less than the loss, the reverse is true. It is found that the metal loss dissipates the system energy but facilitates the gain cavity making a dominant effect on the system response. This mechanism may have potential applications in optical amplification and in plasmonic waveguide switching. (paper)

  18. Method of controlling coherent synchrotron radiation-driven degradation of beam quality during bunch length compression

    Science.gov (United States)

    Douglas, David R [Newport News, VA; Tennant, Christopher D [Williamsburg, VA

    2012-07-10

    A method of avoiding CSR induced beam quality defects in free electron laser operation by a) controlling the rate of compression and b) using a novel means of integrating the compression with the remainder of the transport system: both are accomplished by means of dispersion modulation. A large dispersion is created in the penultimate dipole magnet of the compression region leading to rapid compression; this large dispersion is demagnified and dispersion suppression performed in a final small dipole. As a result, the bunch is short for only a small angular extent of the transport, and the resulting CSR excitation is small.

  19. Studies of imaging characteristics for a slab of a lossy left-handed material

    International Nuclear Information System (INIS)

    Shen Linfang; He Sailing

    2003-01-01

    The characteristics of an imaging system formed by a slab of a lossy left-handed material (LHM) are studied. The transfer function of the LHM imaging system is written in an appropriate product form with each term having a clear physical interpretation. A tiny loss in the LHM may suppress the transmission of evanescent waves through the LHM slab, and this is explained physically. An analytical expression for the resolution of the imaging system is derived. It is shown that it is impossible to achieve subwavelength imaging with a realistic LHM imaging system unless the LHM slab is much thinner than the wavelength.

  20. IPTV multicast with peer-assisted lossy error control

    Science.gov (United States)

    Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd

    2010-07-01

    Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noises in DSL links. In existing systems, the retransmission function is provided by Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how packet repairs can be delivered in a timely, reliable and decentralized manner using a combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves resistance to impulse noise.

  1. A novel method for estimating soil precompression stress from uniaxial confined compression tests

    DEFF Research Database (Denmark)

    Lamandé, Mathieu; Schjønning, Per; Labouriau, Rodrigo

    2017-01-01

    The concept of precompression stress is used for estimating soil strength of relevance to field traffic. It represents the maximum stress experienced by the soil. The most recently developed fitting method to estimate precompression stress (Gompertz) is based on the assumption of an S-shaped stress-strain curve. Stress-strain curves were obtained by performing uniaxial, confined compression tests on undisturbed soil cores for three soil types at three soil water potentials. The new method performed better than the Gompertz fitting method in estimating precompression stress. The values of precompression stress obtained from the new method were linearly related to the maximum stress experienced by the soil samples prior to the uniaxial, confined compression test at each soil condition, with a slope close to 1. Precompression stress determined with the new method was not related to soil type or dry bulk density.

  2. HPLC-QTOF-MS method for quantitative determination of active compounds in an anti-cellulite herbal compress

    Directory of Open Access Journals (Sweden)

    Ngamrayu Ngamdokmai

    2017-08-01

    Full Text Available A herbal compress used in Thai massage has been modified for use in cellulite treatment. Its main active ingredients were ginger, black pepper, Java long pepper, tea and coffee. The objective of this study was to develop and validate an HPLC-QTOF-MS method for determining its active compounds, i.e., caffeine, 6-gingerol, and piperine, in raw materials as well as in the formulation, together with the flavouring agent, camphor. The four compounds were chromatographically separated. The analytical method was validated for selectivity, intra- and inter-day precision, accuracy and matrix effect. The results showed that the herbal compress contained caffeine (2.16 mg/g), camphor (106.15 mg/g), 6-gingerol (0.76 mg/g), and piperine (4.19 mg/g). The chemical stability study revealed that herbal compresses retained >80% of their active compounds after 1 month of storage at ambient conditions. Our method can be used for quality control of the herbal compress and its raw materials.

  3. The Application of RPL Routing Protocol in Low Power Wireless Sensor and Lossy Networks

    Directory of Open Access Journals (Sweden)

    Xun Yang

    2014-05-01

    Full Text Available With the continuous development of computer information technology, wireless sensor technology has successfully changed the way people live. At the same time, as one of the technologies that will continue to improve life in the future, how to better integrate it with the RPL routing protocol has become one of the current research focuses. This paper starts from the wireless sensor network and briefly discusses the concept, followed by a systematic exposition of the RPL routing protocol's development background, relevant standards, working principle, topology and related terms, and finally explores applications of the RPL routing protocol in low-power and lossy wireless sensor networks.

  4. Effects of compression and individual variability on face recognition performance

    Science.gov (United States)

    McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.

    2004-08-01

    The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats are still being developed through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images. The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images have been collected of volunteers. The image database and FR system are being used to analyze the effects of lossy image compression, individual differences, such as eyeglasses and facial hair, and the acquisition environment on FR system performance. Images are compressed by varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured. The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both

  5. Contextual Compression of Large-Scale Wind Turbine Array Simulations: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Potter, Kristin C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Clyne, John [National Center for Atmospheric Research (NCAR)]

    2017-11-03

    Data sizes are becoming a critical issue particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while providing the user control over where data loss, and thus reduction in accuracy, in the analysis occurs. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.

  6. Slow-light enhanced absorption for bio-chemical sensing applications: potential of low-contrast lossy materials

    DEFF Research Database (Denmark)

    Pedersen, Jesper Goor; Xiao, Sanshui; Mortensen, Niels Asger

    2008-01-01

    Slow-light enhanced absorption in liquid-infiltrated photonic crystals has recently been proposed as a route to compensate for the reduced optical path in typical lab-on-a-chip systems for bio-chemical sensing applications. A simple perturbative expression has been applied to ideal structures composed of lossless dielectrics. In this work we study the enhancement in structures composed of lossy dielectrics such as a polymer. For this particular sensing application we find that the material loss has an unexpectedly limited drawback and, surprisingly, it may even add to increase the bandwidth.

  7. Method for compression molding of thermosetting plastics utilizing a temperature gradient across the plastic to cure the article

    Science.gov (United States)

    Heier, W. C. (Inventor)

    1974-01-01

    A method is described for compression molding of thermosetting plastics composition. Heat is applied to the compressed load in a mold cavity and adjusted to hold molding temperature at the interface of the cavity surface and the compressed compound to produce a thermal front. This thermal front advances into the evacuated compound at mean right angles to the compression load and toward a thermal fence formed at the opposite surface of the compressed compound.

  8. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    Science.gov (United States)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise-suppression imaging technique will have important applications in remote sensing and security areas.

  9. Comparing biological networks via graph compression

    Directory of Open Access Journals (Sweden)

    Hayashida Morihiro

    2010-09-01

    Full Text Available Abstract. Background: Comparison of various kinds of biological data is one of the main problems in bioinformatics and systems biology. Data compression methods have been applied to the comparison of large sequence data and protein structure data. Since it is still difficult to compare global structures of large biological networks, it is reasonable to try to apply data compression methods to the comparison of biological networks. In existing compression methods, the uniqueness of compression results is not guaranteed because there is some ambiguity in the selection of overlapping edges. Results: This paper proposes novel efficient methods, CompressEdge and CompressVertices, for comparing large biological networks. In the proposed methods, an original network structure is compressed by iteratively contracting identical edges and sets of connected edges. Then, the similarity of two networks is measured by the compression ratio of the concatenated networks. The proposed methods are applied to the comparison of metabolic networks of several organisms, H. sapiens, M. musculus, A. thaliana, D. melanogaster, C. elegans, E. coli, S. cerevisiae, and B. subtilis, and are compared with an existing method. These results suggest that our methods can efficiently measure the similarities between metabolic networks. Conclusions: Our proposed algorithms, which compress node-labeled networks, are useful for measuring the similarity of large biological networks.

  10. On-board Data Processing to Lower Bandwidth Requirements on an Infrared Astronomy Satellite: Case of Herschel-PACS Camera

    Directory of Open Access Journals (Sweden)

    Christian Reimers

    2005-09-01

    Full Text Available This paper presents a new data compression concept, “on-board processing,” for infrared astronomy, where space observatories have limited processing resources. The proposed approach has been developed and tested for the PACS camera from the European Space Agency (ESA mission, Herschel. Using lossy and lossless compression, the presented method offers high compression ratio with a minimal loss of potentially useful scientific data. It also provides higher signal-to-noise ratio than that for standard compression techniques. Furthermore, the proposed approach presents low algorithmic complexity such that it is implementable on the resource-limited hardware. The various modules of the data compression concept are discussed in detail.

  11. An improved ghost-cell immersed boundary method for compressible flow simulations

    KAUST Repository

    Chi, Cheng

    2016-05-20

    This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to the commonly used approaches, in the present work ghost cells are mirrored through the boundary described using a level-set method to farther image points, incorporating a higher-order extra/interpolation scheme for the ghost cell values. A sensor is introduced to deal with image points near the discontinuities in the flow field. Adaptive mesh refinement (AMR) is used to improve the representation of the geometry efficiently in the Cartesian grid system. The improved ghost-cell method is validated against four test cases: (a) double Mach reflections on a ramp, (b) smooth Prandtl-Meyer expansion flows, (c) supersonic flows in a wind tunnel with a forward-facing step, and (d) supersonic flows over a circular cylinder. It is demonstrated that the improved ghost-cell method can reach the accuracy of second order in L1 norm and higher than first order in L∞ norm. Direct comparisons against the cut-cell method demonstrate that the improved ghost-cell method is almost equally accurate with better efficiency for boundary representation in high-fidelity compressible flow simulations. Copyright © 2016 John Wiley & Sons, Ltd.
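
    A bare-bones sketch of the mirroring geometry, assuming a signed-distance level set that is negative inside the solid; the farther image points, higher-order extra-/interpolation, and the discontinuity sensor of the actual scheme are not reproduced:

    ```python
    import numpy as np

    def image_point(ghost, phi, grad_phi):
        """Reflect a ghost-cell centre across the zero level set.
        phi: signed distance at the ghost cell (negative inside the solid);
        grad_phi: level-set gradient there (points toward the fluid)."""
        n = grad_phi / np.linalg.norm(grad_phi)   # unit boundary normal
        return ghost - 2.0 * phi * n              # lands |phi| beyond the wall

    # ghost cell 0.1 inside a wall at x = 0, outward normal +x -> (0.1, 0.0)
    print(image_point(np.array([-0.1, 0.0]), -0.1, np.array([1.0, 0.0])))
    ```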

  12. A novel full-field experimental method to measure the local compressibility of gas diffusion media

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Yeh-Hung; Li, Yongqiang [Electrochemical Energy Research Lab, GM R and D, Honeoye Falls, NY 14472 (United States); Rock, Jeffrey A. [GM Powertrain, Honeoye Falls, NY 14472 (United States)

    2010-05-15

    The gas diffusion medium (GDM) in a proton exchange membrane (PEM) fuel cell needs to simultaneously satisfy the requirements of transporting reactant gases, removing product water, conducting electrons and heat, and providing mechanical support to the membrane electrode assembly (MEA). Concerning the localized over-compression which may force carbon fibers and other conductive debris into the membrane to cause fuel cell failure by electronically shorting through the membrane, we have developed a novel full-field experimental method to measure the local thickness and compressibility of GDM. Applying a uniform air pressure upon a thin polyimide film bonded on the top surface of the GDM with support from the bottom by a flat metal substrate and measuring the thickness change using the 3-D digital image correlation technique with an out-of-plane displacement resolution less than 0.5 μm, we have determined the local thickness and compressive stress/strain behavior in the GDM. Using the local thickness and compressibility data over an area of 11.2 mm × 11.2 mm, we numerically construct the nominal compressive response of a commercial Toray™ TGP-H-060 based GDM subjected to compression by flat platens. Good agreement in the nominal stress/strain curves from the numerical construction and direct experimental flat-platen measurement confirms the validity of the methodology proposed in this article. The result shows that a nominal pressure of 1.4 MPa compressed between two flat platens can introduce localized compressive stress concentration of more than 3 MPa in up to 1% of the total area at various locations from several hundred micrometers to 1 mm in diameter. We believe that this full-field experimental method can be useful in GDM material and process development to reduce the local hard spots and help to mitigate the membrane shorting failure in PEM fuel cells. (author)

  13. A novel full-field experimental method to measure the local compressibility of gas diffusion media

    Science.gov (United States)

    Lai, Yeh-Hung; Li, Yongqiang; Rock, Jeffrey A.

    The gas diffusion medium (GDM) in a proton exchange membrane (PEM) fuel cell needs to simultaneously satisfy the requirements of transporting reactant gases, removing product water, conducting electrons and heat, and providing mechanical support to the membrane electrode assembly (MEA). Concerning the localized over-compression which may force carbon fibers and other conductive debris into the membrane to cause fuel cell failure by electronically shorting through the membrane, we have developed a novel full-field experimental method to measure the local thickness and compressibility of GDM. Applying a uniform air pressure upon a thin polyimide film bonded on the top surface of the GDM with support from the bottom by a flat metal substrate and measuring the thickness change using the 3-D digital image correlation technique with an out-of-plane displacement resolution less than 0.5 μm, we have determined the local thickness and compressive stress/strain behavior in the GDM. Using the local thickness and compressibility data over an area of 11.2 mm × 11.2 mm, we numerically construct the nominal compressive response of a commercial Toray™ TGP-H-060 based GDM subjected to compression by flat platens. Good agreement in the nominal stress/strain curves from the numerical construction and direct experimental flat-platen measurement confirms the validity of the methodology proposed in this article. The result shows that a nominal pressure of 1.4 MPa compressed between two flat platens can introduce localized compressive stress concentration of more than 3 MPa in up to 1% of the total area at various locations from several hundred micrometers to 1 mm in diameter. We believe that this full-field experimental method can be useful in GDM material and process development to reduce the local hard spots and help to mitigate the membrane shorting failure in PEM fuel cells.

  14. Development of Compressive Failure Strength for Composite Laminate Using Regression Analysis Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Myoung Keon [Agency for Defense Development, Daejeon (Korea, Republic of); Lee, Jeong Won; Yoon, Dong Hyun; Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2016-10-15

    This paper provides the compressive failure strength of composite laminates developed by using the regression analysis method. The composite material in this document is a Carbon/Epoxy unidirectional (UD) tape prepreg (Cycom G40-800/5276-1) cured at 350°F (177°C). The operating temperature is −60°F to +200°F (−55°C to +95°C). A total of 56 compression tests were conducted on specimens from eight (8) distinct laminates that were laid up with standard angle layers (0°, +45°, −45° and 90°). The ASTM-D-6484 standard was used as the test method. The regression analysis was performed with the response variable being the laminate ultimate fracture strength and the regressor variables being the two ply orientations (0° and ±45°).

  15. Development of Compressive Failure Strength for Composite Laminate Using Regression Analysis Method

    International Nuclear Information System (INIS)

    Lee, Myoung Keon; Lee, Jeong Won; Yoon, Dong Hyun; Kim, Jae Hoon

    2016-01-01

    This paper provides the compressive failure strength of composite laminates developed by using the regression analysis method. The composite material in this document is a Carbon/Epoxy unidirectional (UD) tape prepreg (Cycom G40-800/5276-1) cured at 350°F (177°C). The operating temperature is −60°F to +200°F (−55°C to +95°C). A total of 56 compression tests were conducted on specimens from eight (8) distinct laminates that were laid up with standard angle layers (0°, +45°, −45° and 90°). The ASTM-D-6484 standard was used as the test method. The regression analysis was performed with the response variable being the laminate ultimate fracture strength and the regressor variables being the two ply orientations (0° and ±45°).
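
    A sketch of the regression setup described above, with hypothetical ply percentages and strengths standing in for the 56 test results:

    ```python
    import numpy as np

    # hypothetical data: percent 0° plies, percent ±45° plies -> strength (MPa)
    plies = np.array([[50, 40], [25, 50], [10, 80], [60, 30],
                      [40, 40], [30, 60], [20, 70], [50, 30]], dtype=float)
    strength = np.array([620, 540, 470, 660, 590, 520, 500, 640], dtype=float)

    A = np.column_stack([np.ones(len(plies)), plies])   # intercept + regressors
    coef, *_ = np.linalg.lstsq(A, strength, rcond=None)

    predict = lambda p0, p45: coef[0] + coef[1] * p0 + coef[2] * p45
    print(coef, predict(45, 45))
    ```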

  16. Properties of Sub-Wavelength Spherical Antennas With Arbitrarily Lossy Magnetodielectric Cores Approaching the Chu Lower Bound

    DEFF Research Database (Denmark)

    Hansen, Troels Vejle; Kim, Oleksiy S.; Breinbjerg, Olav

    2014-01-01

    For a spherical antenna exciting any arbitrary spherical mode, we derive exact closed-form expressions for the dissipated power and stored energy inside (and outside) the lossy magneto-dielectric spherical core, as well as the radiated power, radiation efficiency, and thus the radiation quality factor. An increasing magnetic loss tangent initially leads to a decreasing radiation quality factor, but in the limit of a perfect magnetic conductor (PMC) core the dissipated power tends to zero and the radiation quality factor reaches the fundamental Chu lower bound.

  17. A method of automatic control of the process of compressing pyrogas in olefin production

    Energy Technology Data Exchange (ETDEWEB)

    Podval' niy, M.L.; Bobrovnikov, N.R.; Kotler, L.D.; Shib, L.M.; Tuchinskiy, M.R.

    1982-01-01

    In the known method of automatically controlling the process of compressing pyrogas in olefin production, the supply of cooling agents to the interstage coolers of the compression unit is regulated depending on the flow of hydrocarbons to the unit. To raise performance by lowering the deposition of polymers on the flow-through surfaces of the equipment, the coolant supply is also regulated as a function of the flows of hydrocarbons from the upper and lower parts of the demethanizer and the bottoms of the stripping tower. The coolant supply is regulated proportional to the difference between the flow of stripping-tower bottoms and the ratio of the combined hydrocarbon flow from the upper and lower parts of the demethanizer to the hydrocarbon flow in the compression unit. With an increase in the proportion of light hydrocarbons (the sum of the upper and lower demethanizer products) in the total flow of pyrogas going to compression, the flow of coolant to the compression unit is reduced; condensation of the given fractions in the separators, and their amount in the condensate going through the piping to the stripping tower, is thereby reduced. With a reduction in the proportion of light hydrocarbons in the pyrogas, the flow of coolant is increased, improving condensation of heavy hydrocarbons in the separators and removing them from the compression unit in the bottoms of the stripping tower.
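
    Read as a control law, the regulation rule amounts to the following sketch; the gain k and any unit scaling are application-specific assumptions:

    ```python
    def coolant_setpoint(f_bottoms, f_demeth_top, f_demeth_bottom, f_pyrogas, k=1.0):
        """Coolant flow proportional to the difference between the stripping-tower
        bottoms flow and the light-fraction share of the pyrogas feed."""
        light_share = (f_demeth_top + f_demeth_bottom) / f_pyrogas
        return k * (f_bottoms - light_share)

    # more light hydrocarbons in the feed -> larger light_share -> less coolant
    print(coolant_setpoint(12.0, 30.0, 20.0, 100.0, k=2.0))
    ```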

  18. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    Science.gov (United States)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    Wireless Visual Sensor Networks (WVSN) are an emerging field which combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to the traditional wireless sensor network, which operates on one-dimensional data, such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. The energy budget in these networks is limited to the batteries because of the wireless nature of the application. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSN) and communication from VSN to server should consume as little energy as possible. Transmitting raw images wirelessly consumes a lot of energy and requires high communication bandwidth. Data compression methods reduce data efficiently and hence are effective in reducing communication cost in WVSN. In this paper, we have compared the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine the compression algorithms which can efficiently compress bi-level images and whose computational complexity is suitable for the computational platform used in WVSNs. These results can be used as a road map for the selection of compression methods for different sets of constraints in WVSN.
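
    A crude way to compare bi-level coders by compressed size, using run-length counting and zlib as stand-ins for the six methods studied (JBIG-class coders are not implemented here):

    ```python
    import zlib
    import numpy as np

    def run_lengths(bits):
        """Lengths of constant runs in a flat 0/1 sequence."""
        runs, count = [], 1
        for prev, cur in zip(bits, bits[1:]):
            if cur == prev:
                count += 1
            else:
                runs.append(count)
                count = 1
        runs.append(count)
        return runs

    img = np.random.default_rng(1).random((64, 64)) > 0.9     # sparse stand-in image
    rle_bytes = 2 * len(run_lengths(img.flatten().tolist()))  # 2 bytes/run, crude cost
    zlib_bytes = len(zlib.compress(np.packbits(img).tobytes(), 9))
    print(rle_bytes, zlib_bytes)
    ```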

  19. Celiac disease biodetection using lossy-mode resonances generated in tapered single-mode optical fibers

    Science.gov (United States)

    Socorro, A. B.; Corres, J. M.; Del Villar, I.; Matias, I. R.; Arregui, F. J.

    2014-05-01

    This work presents the development and testing of an anti-gliadin antibody biosensor based on lossy mode resonances (LMRs) to detect celiac disease. Several polyelectrolytes were used in layer-by-layer assembly processes in order to generate the LMR and to fabricate a gliadin-embedded thin film. The LMR shifted 20 nm when immersed in a 5 ppm anti-gliadin antibodies-PBS solution, which makes this bioprobe suitable for detecting celiac disease. This is the first time, to our knowledge, that LMRs have been used to detect celiac disease, and these results suggest promising prospects for the use of such phenomena in biological detection.

  20. The data embedding method

    Energy Technology Data Exchange (ETDEWEB)

    Sandford, M.T. II; Bradley, J.N.; Handel, T.G.

    1996-06-01

    Data embedding is a new steganographic method for combining digital information sets. This paper describes the data embedding method and gives examples of its application using software written in the C programming language. Sandford and Handel produced a computer program (BMPEMBED, Ver. 1.51, written for IBM PC/AT or compatible, MS/DOS Ver. 3.3 or later) that implements data embedding in an application for digital imagery. Information is embedded into, and extracted from, Truecolor or color-pallet images in Microsoft® bitmap (.BMP) format. Hiding data in the noise component of a host, by means of an algorithm that modifies or replaces the noise bits, is termed "steganography." Data embedding differs markedly from conventional steganography, because it uses the noise component of the host to insert information with few or no modifications to the host data values or their statistical properties. Consequently, the entropy of the host data is affected little by using data embedding to add information. The data embedding method applies to host data compressed with transform, or "lossy," compression algorithms, as for example ones based on discrete cosine transform and wavelet functions. Analysis of the host noise generates a key required for embedding and extracting the auxiliary data from the combined data. The key is stored easily in the combined data. Images without the key cannot be processed to extract the embedded information. To provide security for the embedded data, one can remove the key from the combined data and manage it separately. The image key can be encrypted and stored in the combined data or transmitted separately as a ciphertext much smaller in size than the embedded data. The key size is typically ten to one hundred bytes, and it is generated by the data analysis algorithm.
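
    The bit-replacement idea can be sketched as plain least-significant-bit substitution; note that the actual method first analyzes the host noise and derives a key, which this illustration omits:

    ```python
    import numpy as np

    def embed(host, payload_bits):
        """Replace the LSB (noise bit) of the first len(payload_bits) samples."""
        out = host.copy()
        out[:len(payload_bits)] = (out[:len(payload_bits)] & 0xFE) | payload_bits
        return out

    def extract(stego, n_bits):
        return stego[:n_bits] & 1

    host = np.random.default_rng(2).integers(0, 256, 1000, dtype=np.uint8)
    bits = np.random.default_rng(3).integers(0, 2, 64, dtype=np.uint8)
    assert (extract(embed(host, bits), 64) == bits).all()
    ```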

  1. The impact of chest compression rates on quality of chest compressions : a manikin study

    OpenAIRE

    Field, Richard A.; Soar, Jasmeet; Davies, Robin P.; Akhtar, Naheed; Perkins, Gavin D.

    2012-01-01

    Purpose: Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Methods: Twenty healthcare professionals performed two minutes of co...

  2. Halftone Coding with JBIG2

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    2000-01-01

    The emerging international standard for compression of bi-level images and bi-level documents, JBIG2, provides a mode dedicated to lossy coding of halftones. The encoding procedure involves descreening of the bi-level image into gray-scale, encoding of the gray-scale image, and construction of a halftone pattern dictionary. The decoder first decodes the gray-scale image. Then, for each gray-scale pixel, it looks up the corresponding halftone pattern in the dictionary and places it in the reconstruction bitmap at the position corresponding to the gray-scale pixel. The coding method is inherently lossy and care must be taken to avoid introducing artifacts in the reconstructed image. We describe how to apply this coding method to halftones created by periodic ordered dithering, by clustered dot screening (offset printing), and by techniques which in effect dither with blue noise, e.g., error diffusion.
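
    The decoding side follows directly from the description: look up each gray-scale pixel's halftone pattern and paste it into the reconstruction bitmap. The tiny 2×2 dictionary below is hypothetical:

    ```python
    import numpy as np

    def reconstruct(gray, patterns):
        """Place patterns[g] at each gray pixel's block position."""
        ph, pw = patterns[0].shape
        out = np.zeros((gray.shape[0] * ph, gray.shape[1] * pw), dtype=np.uint8)
        for i, j in np.ndindex(gray.shape):
            out[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] = patterns[gray[i, j]]
        return out

    patterns = [np.array(p, dtype=np.uint8) for p in (
        [[0, 0], [0, 0]], [[1, 0], [0, 0]], [[1, 0], [0, 1]],
        [[1, 1], [0, 1]], [[1, 1], [1, 1]])]          # 5 gray levels
    print(reconstruct(np.array([[0, 2], [4, 1]]), patterns))
    ```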

  3. Genetic optimization of magneto-optic Kerr effect in lossy cavity-type magnetophotonic crystals

    Energy Technology Data Exchange (ETDEWEB)

    Ghanaatshoar, M., E-mail: m-ghanaat@cc.sbu.ac.i [Laser and Plasma Research Institute, Shahid Beheshti University, G.C., Evin 1983963113, Tehran (Iran, Islamic Republic of); Alisafaee, H. [Laser and Plasma Research Institute, Shahid Beheshti University, G.C., Evin 1983963113, Tehran (Iran, Islamic Republic of)

    2011-07-15

    We have demonstrated an optimization approach to obtain desired magnetophotonic crystals (MPCs) composed of a lossy magnetic layer (TbFeCo) placed within a multilayer structure. The approach is an amalgamation of a 4×4 transfer matrix method and a genetic algorithm. Our objective is to enhance the magneto-optic Kerr effect of TbFeCo at the short visible wavelength of 405 nm. Through the optimization approach, MPC structures are found meeting definite criteria on the amount of reflectivity and Kerr rotation. The resulting structures fit the optimization criteria to better than 99.9%. Computation of the internal electric field distribution shows energy localization in the vicinity of the magnetic layer, which is responsible for increased light-matter interaction and the consequent enhanced magneto-optic Kerr effect. The versatility of our approach is also exhibited by examining and optimizing several MPC structures. Research highlights: Structures comprising a highly absorptive TbFeCo layer are designed to work for data storage applications at 405 nm. The optimization algorithm resulted in structures fitted to 99.9% of the design criteria. More than 10 structures are found exhibiting a magneto-optical response of about 1° rotation and 20% reflection. The ratio of the Kerr rotation to the Kerr ellipticity is enhanced by a factor of 30.

  4. Compressed sensing of ECG signal for wireless system with new fast iterative method.

    Science.gov (United States)

    Tawfic, Israa; Kayhan, Sema

    2015-12-01

    Recent experiments in wireless body area networks (WBAN) show that compressive sensing (CS) is a promising tool to compress the electrocardiogram (ECG) signal. The performance of CS is based on the algorithms used to reconstruct the original signal exactly or approximately. In this paper, we present two methods that work in the absence and the presence of noise: Least Support Orthogonal Matching Pursuit (LS-OMP) and Least Support Denoising-Orthogonal Matching Pursuit (LSD-OMP). The algorithms achieve correct support recovery without requiring sparsity knowledge. We derive improved restricted isometry property (RIP)-based conditions over the best known results. The basic procedures were evaluated observationally and analytically on different electrocardiogram signals downloaded from the PhysioBank ATM. Experimental results show that significant performance in terms of reconstruction quality and compression rate can be obtained by these two new proposed algorithms, helping the specialist gather the necessary information from the patient in less time when used in Magnetic Resonance Imaging (MRI) applications, or reconstruct the patient data after sending it through the network. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
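
    For reference, plain orthogonal matching pursuit, the baseline that LS-OMP and LSD-OMP refine; their modified support-selection rules are not reproduced here:

    ```python
    import numpy as np

    def omp(Phi, y, k):
        """Greedy recovery of a k-sparse x from y = Phi @ x."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(Phi.T @ residual)))   # best-matching atom
            if j not in support:
                support.append(j)
            x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ x_s           # project out support
        x = np.zeros(Phi.shape[1])
        x[support] = x_s
        return x
    ```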

  5. Iterative methods for compressible Navier-Stokes and Euler equations

    Energy Technology Data Exchange (ETDEWEB)

    Tang, W.P.; Forsyth, P.A.

    1996-12-31

    This workshop will focus on methods for solution of compressible Navier-Stokes and Euler equations. In particular, attention will be focused on the interaction between the methods used to solve the non-linear algebraic equations (e.g. full Newton or first order Jacobian) and the resulting large sparse systems. Various types of block and incomplete LU factorization will be discussed, as well as stability issues, and the use of Newton-Krylov methods. These techniques will be demonstrated on a variety of model transonic and supersonic airfoil problems. Applications to industrial CFD problems will also be presented. Experience with the use of C++ for solution of large scale problems will also be discussed. The format for this workshop will be four fifteen minute talks, followed by a roundtable discussion.

  6. Survey Of Lossless Image Coding Techniques

    Science.gov (United States)

    Melnychuck, Paul W.; Rabbani, Majid

    1989-04-01

    Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the area of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence their higher pel correlation leads to greater removal of image redundancy.
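
    A minimal member of the predictive-coding family mentioned above: previous-pixel DPCM yields residuals whose zeroth-order entropy is typically far below the original's, which is where ratios in the 1.6:1 to 3:1 range come from:

    ```python
    import numpy as np

    def dpcm_residual(img):
        """Previous-pixel prediction along rows; the first column plus the
        residuals reconstruct the image exactly (lossless)."""
        base = img.astype(np.int16)
        res = base.copy()
        res[:, 1:] -= base[:, :-1]
        return res

    def entropy_bits(a):
        """Zeroth-order entropy in bits per symbol."""
        _, counts = np.unique(a, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    img = np.add.outer(np.arange(64), np.arange(64)).astype(np.uint8)  # smooth ramp
    print(entropy_bits(img), entropy_bits(dpcm_residual(img)))
    ```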

  7. Stabilization study on a wet-granule tableting method for a compression-sensitive benzodiazepine receptor agonist.

    Science.gov (United States)

    Fujita, Megumi; Himi, Satoshi; Iwata, Motokazu

    2010-03-01

    SX-3228, 6-benzyl-3-(5-methoxy-1,3,4-oxadiazol-2-yl)-5,6,7,8-tetrahydro-1,6-naphthyridin-2(1H)-one, is a newly-synthesized benzodiazepine receptor agonist intended to be developed as a tablet preparation. This compound, however, becomes chemically unstable due to decreased crystallinity when it undergoes mechanical treatments such as grinding and compression. A wet-granule tableting method, where wet granules are compressed before being dried, was therefore investigated as it has the advantage of producing tablets of sufficient hardness at quite low compression pressures. The results of the stability testing showed that the drug substance was chemically considerably more stable in wet-granule compression tablets compared to conventional tablets. Furthermore, the drug substance was found to be relatively chemically stable in wet-granule compression tablets even when high compression pressure was used and the effect of this pressure was small. After investigating the reason for this excellent stability, it became evident that near-isotropic pressure was exerted on the crystals of the drug substance because almost all the empty spaces in the tablets were occupied with water during the wet-granule compression process. Decreases in crystallinity of the drug substance were thus small, making the drug substance chemically stable in the wet-granule compression tablets. We believe that this novel approach could be useful for many other compounds that are destabilized by mechanical treatments.

  8. Technical Note: Validation of two methods to determine contact area between breast and compression paddle in mammography.

    Science.gov (United States)

    Branderhorst, Woutjan; de Groot, Jerry E; van Lier, Monique G J T B; Highnam, Ralph P; den Heeten, Gerard J; Grimbergen, Cornelis A

    2017-08-01

    To assess the accuracy of two methods of determining the contact area between the compression paddle and the breast in mammography. An accurate method to determine the contact area is essential to accurately calculate the average compression pressure applied by the paddle. For a set of 300 breast compressions, we measured the contact areas between breast and paddle, both capacitively using a transparent foil with indium-tin-oxide (ITO) coating attached to the paddle, and retrospectively from the obtained mammograms using image processing software (Volpara Enterprise, algorithm version 1.5.2). A gold standard was obtained from video images of the compressed breast. During each compression, the breast was illuminated from the sides in order to create a dark shadow on the video image where the breast was in contact with the compression paddle. We manually segmented the shadows captured at the time of x-ray exposure and measured their areas. We found a strong correlation between the manual segmentations and the capacitive measurements [r = 0.989, 95% CI (0.987, 0.992)] and between the manual segmentations and the image processing software [r = 0.978, 95% CI (0.972, 0.982)]. Bland-Altman analysis showed a bias of −0.0038 dm² for the capacitive measurement [SD 0.0658, 95% limits of agreement (−0.1329, 0.1252)] and −0.0035 dm² for the image processing software [SD 0.0962, 95% limits of agreement (−0.1921, 0.1850)]. The size of the contact area between the paddle and the breast can be determined accurately and precisely, both in real-time using the capacitive method, and retrospectively using image processing software. This result is beneficial for scientific research, data analysis and quality control systems that depend on one of these two methods for determining the average pressure on the breast during mammographic compression. © 2017 Sigmascreening B.V. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
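
    The Bland-Altman statistics quoted above reduce to a few lines; the two arrays here are hypothetical paired area measurements:

    ```python
    import numpy as np

    def bland_altman(a, b):
        """Bias and 95% limits of agreement between two measurement methods."""
        diff = np.asarray(a, float) - np.asarray(b, float)
        bias, sd = diff.mean(), diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    manual = [1.52, 2.10, 1.87, 2.45, 1.66]       # dm^2, hypothetical
    capacitive = [1.55, 2.05, 1.90, 2.40, 1.70]
    print(bland_altman(manual, capacitive))
    ```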

  9. Meshless Method for Simulation of Compressible Flow

    Science.gov (United States)

    Nabizadeh Shahrebabak, Ebrahim

    In the present age, rapid development in computing technology and high-speed supercomputers has made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means of analyzing engineering problems and cases where experimental analysis is not practical. Many sophisticated and accurate numerical schemes exist for performing these simulations. The finite difference method (FDM) has been used to solve differential equation systems for decades. Additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these methods are mesh-based techniques. Mesh generation is an essential preprocessing step to discretize the computational domain for these conventional methods. However, when dealing with complex geometries these mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies. In this study, a more robust yet simple numerical approach is used to perform simulations in an easier manner, even for complex problems. The meshless, or meshfree, method is one such development that has become the focus of much research in recent years. The biggest advantage of meshfree methods is that they circumvent mesh generation. Many algorithms have now been developed to help make this method more popular and understandable for everyone. These algorithms have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge is considerable. The most fundamental issue is lack of conservation, which can be a source of unpredictable errors in the solution process. This problem is particularly evident in the presence of steep gradient regions and discontinuities, such as shocks that frequently occur in high speed compressible flow

  10. Eesti Ajaloomuuseumi, Maarjamäe lossi tallihoone renoveerimine = Renovation of the Estonian History Museum's Maarjamäe palace stables / Jaak Huimerind, Tarmo Piirmets ; kommenteerinud Kaido Kivi

    Index Scriptorium Estoniae

    Huimerind, Jaak, 1957-

    2015-01-01

    The renovated stable building of the Estonian History Museum's Maarjamäe palace at Pirita tee 66, Tallinn, completed in 2014. Architect Jaak Huimerind (Studio Paralleel OÜ); interior architect and exhibition designer Tarmo Piirmets (Pink OÜ). Renovation award 2014 of the Architecture Endowment of the Cultural Endowment of Estonia

  11. Compression method of anastomosis of large intestines by implants with memory of shape: alternative to traditional sutures

    Directory of Open Access Journals (Sweden)

    F. Sh. Aliev

    2015-01-01

    Full Text Available Research objective. To prove experimentally the possibility of forming compression colonic anastomoses using nickel-titanium devices, in comparison with traditional methods of anastomosis. Materials and methods. In experimental studies, the quality of compression anastomoses of the colon was compared with sutured and stapled anastomoses. Three experimental groups of mongrel dogs were formed: in the 1st series (n = 30), compression anastomoses with nickel-titanium implants were formed; in the 2nd (n = 25), circular stapled anastomoses; in the 3rd (n = 25), ligature anastomoses by the Mateshuk–Lambert method. In the experiment, the physical durability, elasticity, biological tightness and morphogenesis of the colonic anastomoses were studied. Results. The optimal sizes of the compression devices are 32 × 18 and 28 × 15 mm with a wire diameter of 2.2 mm; the winding compression force was 740 ± 180 g/mm². The compression suture has a higher physical durability compared to stapled (W = –33.0; p < 0.05) and sutured (W = –28.0; p < 0.05) anastomoses, higher elasticity (p < 0.05) at all test times, and biological tightness from day 3 (p < 0.001) after surgery. The regularities of morphogenesis of the colonic anastomoses were divided into 4 periods of regeneration of the intestinal suture. Conclusion. The experimental data obtained on compression anastomoses of the colon formed with nickel-titanium devices are convincing arguments for their clinical application.

  12. Comparative Survey of Ultrasound Images Compression Methods Dedicated to a Tele-Echography Robotic System

    National Research Council Canada - National Science Library

    Delgorge, C

    2001-01-01

    For the purpose of this work, we selected seven compression methods: Fourier Transform, Discrete Cosine Transform, Wavelets, Quadtrees Transform, Fractals, Histogram Thresholding, and Run Length Coding...

  13. A Scalable Context-Aware Objective Function (SCAOF) of Routing Protocol for Agricultural Low-Power and Lossy Networks (RPAL).

    Science.gov (United States)

    Chen, Yibo; Chanet, Jean-Pierre; Hou, Kun-Mean; Shi, Hongling; de Sousa, Gil

    2015-08-10

    In recent years, IoT (Internet of Things) technologies have seen great advances; in particular, the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) provides a powerful and flexible routing framework that can be applied in a variety of application scenarios. In this context, as an important component of IoT, Wireless Sensor Networks (WSNs) can utilize RPL to design efficient routing protocols for a specific application, increasing the ubiquity of networks with resource-constrained WSN nodes that are low-cost and easy to deploy. In this article, our work starts with a description of Agricultural Low-power and Lossy Networks (A-LLNs) complying with the LLN framework, and clarifies the requirements of this application-oriented routing solution. After a brief review of existing optimization techniques for RPL, our contribution is dedicated to a Scalable Context-Aware Objective Function (SCAOF) that can adapt RPL to the environmental monitoring of A-LLNs by combining energy-aware, reliability-aware, robustness-aware and resource-aware contexts according to the composite routing metrics approach. The correct behavior of this enhanced RPL version (RPAL) was verified by performance evaluations in both simulation and field tests. The obtained experimental results confirm that SCAOF can deliver the desired advantages of network lifetime extension and high reliability and efficiency in different simulation scenarios and hardware testbeds.
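
    As a rough illustration of the composite routing metrics idea (not the actual SCAOF formulation; the context names, normalizations and weights below are invented for the sketch), a candidate parent's rank could combine several contexts like this:

```python
def composite_rank(energy_remaining, etx, queue_free, uptime,
                   weights=(0.4, 0.3, 0.2, 0.1)):
    """Illustrative composite rank: each context is normalized to [0, 1]
    (1 = best) and the weighted penalties are summed; lower is better."""
    contexts = (energy_remaining, 1.0 / max(etx, 1.0), queue_free, uptime)
    return sum(w * (1.0 - c) for w, c in zip(weights, contexts))

# Candidate parents: (energy fraction, ETX, free-queue fraction, uptime fraction)
candidates = {"A": (0.9, 1.2, 0.8, 0.99), "B": (0.5, 1.0, 0.9, 0.95)}
print("preferred parent:", min(candidates, key=lambda k: composite_rank(*candidates[k])))
```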

  14. A Scalable Context-Aware Objective Function (SCAOF) of Routing Protocol for Agricultural Low-Power and Lossy Networks (RPAL)

    Directory of Open Access Journals (Sweden)

    Yibo Chen

    2015-08-01

    Full Text Available In recent years, IoT (Internet of Things) technologies have seen great advances; in particular, the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) provides a powerful and flexible routing framework that can be applied in a variety of application scenarios. In this context, as an important component of IoT, Wireless Sensor Networks (WSNs) can utilize RPL to design efficient routing protocols for a specific application, increasing the ubiquity of networks with resource-constrained WSN nodes that are low-cost and easy to deploy. In this article, our work starts with a description of Agricultural Low-power and Lossy Networks (A-LLNs) complying with the LLN framework, and clarifies the requirements of this application-oriented routing solution. After a brief review of existing optimization techniques for RPL, our contribution is dedicated to a Scalable Context-Aware Objective Function (SCAOF) that can adapt RPL to the environmental monitoring of A-LLNs by combining energy-aware, reliability-aware, robustness-aware and resource-aware contexts according to the composite routing metrics approach. The correct behavior of this enhanced RPL version (RPAL) was verified by performance evaluations in both simulation and field tests. The obtained experimental results confirm that SCAOF can deliver the desired advantages of network lifetime extension and high reliability and efficiency in different simulation scenarios and hardware testbeds.

  15. Experiments with automata compression

    NARCIS (Netherlands)

    Daciuk, J.; Yu, S; Daley, M; Eramian, M G

    2001-01-01

    Several compression methods of finite-state automata are presented and evaluated. Most compression methods used here are already described in the literature. However, their impact on the size of automata has not been described yet. We fill that gap, presenting results of experiments carried out on

  16. A parallel finite-volume finite-element method for transient compressible turbulent flows with heat transfer

    International Nuclear Information System (INIS)

    Masoud Ziaei-Rad

    2010-01-01

    In this paper, a two-dimensional numerical scheme is presented for the simulation of turbulent, viscous, transient compressible flows in the simultaneously developing hydraulic and thermal boundary layer region. The numerical procedure is a finite-volume-based finite-element method applied to unstructured grids. This combination together with a new method applied for the boundary conditions allows for accurate computation of the variables in the entrance region and for a wide range of flow fields from subsonic to transonic. The Roe-Riemann solver is used for the convective terms, whereas the standard Galerkin technique is applied for the viscous terms. A modified κ-ε model with a two-layer equation for the near-wall region combined with a compressibility correction is used to predict the turbulent viscosity. Parallel processing is also employed to divide the computational domain among the different processors to reduce the computational time. The method is applied to some test cases in order to verify the numerical accuracy. The results show significant differences between incompressible and compressible flows in the friction coefficient, Nusselt number, shear stress and the ratio of the compressible turbulent viscosity to the molecular viscosity along the developing region. A transient flow generated after an accidental rupture in a pipeline was also studied as a test case. The results show that the present numerical scheme is stable, accurate and efficient enough to solve the problem of transient wall-bounded flow.

  17. Methodes spectrales paralleles et applications aux simulations de couches de melange compressibles = Parallel spectral methods and applications to simulations of compressible mixing layers

    OpenAIRE

    Male, Jean-Michel; Fezoui, Loula

    1993-01-01

    Solving the Navier-Stokes equations with spectral methods for compressible flows can be quite costly in computing time. We therefore study the parallelization of such an algorithm and its implementation on a massively parallel machine, the Connection Machine CM-2. The spectral method adapts well to the requirements of massive parallelism, but one of the basic tools of this method, the fast Fourier transform (when it must be applied along the two dim...

  18. Nonlinear viscoelasticity of pre-compressed layered polymeric composite under oscillatory compression

    KAUST Repository

    Xu, Yangguang

    2018-05-03

    Describing the nonlinear viscoelastic properties of polymeric composites subjected to dynamic loading is essential for the development of practical applications of such materials. An efficient and easy method to analyze nonlinear viscoelasticity remains elusive because the dynamic moduli (storage modulus and loss modulus) are not very convenient once the material falls into the nonlinear viscoelastic range. In this study, we utilize two methods, Fourier transform and geometrical nonlinear analysis, to quantitatively characterize the nonlinear viscoelasticity of a pre-compressed layered polymeric composite under oscillatory compression. We discuss the influences of pre-compression, dynamic loading, and the inner structure of the polymeric composite on the nonlinear viscoelasticity. Furthermore, we reveal the nonlinear viscoelastic mechanism by combining these results with other experimental results from quasi-static compressive tests and microstructural analysis. From a methodology standpoint, both Fourier transform and geometrical nonlinear analysis are shown to be efficient tools for analyzing the nonlinear viscoelasticity of a layered polymeric composite. From a material standpoint, we consequently posit that the dynamic nonlinear viscoelasticity of polymeric composites with complicated inner structures can also be well characterized using these methods.

  19. Three dimensional simulation of compressible and incompressible flows through the finite element method

    International Nuclear Information System (INIS)

    Costa, Gustavo Koury

    2004-11-01

    Although incompressible fluid flows can be regarded as a particular case of a general problem, the numerical methods and mathematical formulations aimed at solving compressible and incompressible flows have their own peculiarities, in such a way that it is generally not possible to attain both regimes with a single approach. In this work, we start from a typically compressible formulation, slightly modified to make use of pressure variables, and, through augmenting the stabilising parameters, we end up with a simplified model which is able to deal with a wide range of flow regimes, from supersonic to low-speed gas flows. The resulting methodology is flexible enough to allow for the simulation of liquid flows as well. Examples using conservative and pressure variables are shown, and the results are compared to those published in the literature in order to validate the method. (author)

  20. Development of a geopolymer solidification method for radioactive wastes by compression molding and heat curing

    International Nuclear Information System (INIS)

    Shimoda, Chiaki; Matsuyama, Kanae; Okabe, Hirofumi; Kaneko, Masaaki; Miyamoto, Shinya

    2017-01-01

    Geopolymer solidification is a good method for managing waste because it is inexpensive compared with vitrification and has a reduced risk of hydrogen generation. In general, when geopolymers are made, water is added to the geopolymer raw materials, and the resulting slurry is mixed, poured into a mold, and cured. However, it is difficult to control the reaction because, depending on the types of materials, the viscosity can increase immediately after mixing. Geopolymer slurries easily adhere to the agitating blade of the mixer and easily clog the plumbing during transport. Moreover, during long-term storage of solidified wastes containing concentrated radionuclides in a sealed container without vents, the hydrogen concentration in the container increases over time. Therefore, a simple method using as little water as possible is needed. In this work, geopolymer solidification by compression molding was studied. Compared with the usual methods, it provides a simple and stable way of preparing waste for long-term storage. Investigations performed before and after solidification by compression molding showed that the crystal structure changed, from which it was concluded that the geopolymer reaction proceeded during compression molding. This method (1) reduces the energy needed for drying, (2) has good workability, (3) reduces the overall volume, and (4) reduces hydrogen generation. (author)

  1. Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery

    Directory of Open Access Journals (Sweden)

    Lingjun Liu

    2017-01-01

    Full Text Available This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. With increasing iterations, IST usually yields a smoothing of the solution and converges prematurely. To add back more detail, the BAIST method backtracks to the previous noisy image using L2 norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous one. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. BAIST also applies a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.
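
    For background, plain IST alternates a gradient step with soft-thresholding; the sketch below shows that baseline (not the BAIST backtracking variant itself), with a random Gaussian sensing matrix standing in for a real CS measurement operator.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ist(A, y, lam=0.01, iters=500):
    """Baseline IST for min 0.5*||y - Ax||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / 8.0          # random sensing matrix
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = 1.0   # 8-sparse signal
x_hat = ist(A, A @ x_true)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```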

  2. Compressed sensing method for human activity recognition using tri-axis accelerometer on mobile phone

    Institute of Scientific and Technical Information of China (English)

    Song Hui; Wang Zhongmin

    2017-01-01

    The diversity in phone placements across different mobile users' daily lives increases the difficulty of recognizing human activities from mobile phone accelerometer data. To solve this problem, a compressed sensing method to recognize human activities, based on compressed sensing theory and utilizing both raw mobile phone accelerometer data and phone placement information, is proposed. First, an over-complete dictionary matrix is constructed using sufficient raw tri-axis acceleration data labeled with phone placement information. Then, the sparse coefficient is evaluated for the samples to be tested by solving an L1 minimization. Finally, residual values are calculated and the minimum value is selected as the indicator to obtain the recognition results. Experimental results show that this method can achieve a recognition accuracy of 89.86%, which is higher than that of a recognition method that does not use phone placement information. The recognition accuracy of the proposed method is thus effective and satisfactory.
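
    The residual-based recognition step described above follows the sparse-representation pattern; a minimal sketch under that interpretation (synthetic data, scikit-learn's Lasso as the L1 solver; the dictionary and labels are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, sample, alpha=0.01):
    """Solve an L1-regularized fit of the sample over dictionary D,
    then pick the class whose atoms leave the smallest residual."""
    coef = Lasso(alpha=alpha, max_iter=5000).fit(D, sample).coef_
    labels = np.array(labels)
    residuals = {c: np.linalg.norm(sample - D[:, labels == c] @ coef[labels == c])
                 for c in set(labels)}
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(1)
D = rng.standard_normal((40, 30))                       # 30 atoms, 40 feature dims
labels = ["walk"] * 10 + ["sit"] * 10 + ["stand"] * 10  # class of each atom
sample = D[:, 3] + 0.05 * rng.standard_normal(40)       # noisy 'walk' atom
print(src_classify(D, labels, sample))                  # -> 'walk'
```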

  3. Methods for compressible multiphase flows and their applications

    Science.gov (United States)

    Kim, H.; Choe, Y.; Kim, H.; Min, D.; Kim, C.

    2018-06-01

    This paper presents an efficient and robust numerical framework to deal with multiphase real-fluid flows and their broad spectrum of engineering applications. A homogeneous mixture model incorporated with a real-fluid equation of state and a phase change model is considered to calculate complex multiphase problems. As robust and accurate numerical methods to handle multiphase shocks and phase interfaces over a wide range of flow speeds, the AUSMPW+_N and RoeM_N schemes with a system preconditioning method are presented. These methods are assessed by extensive validation problems with various types of equation of state and phase change models. Representative realistic multiphase phenomena, including the flow inside a thermal vapor compressor, pressurization in a cryogenic tank, and unsteady cavitating flow around a wedge, are then investigated as application problems. With appropriate physical modeling followed by robust and accurate numerical treatments, compressible multiphase flow physics such as phase changes, shock discontinuities, and their interactions are well captured, confirming the suitability of the proposed numerical framework to wide engineering applications.

  4. A theoretical global optimization method for vapor-compression refrigeration systems based on entransy theory

    International Nuclear Information System (INIS)

    Xu, Yun-Chao; Chen, Qun

    2013-01-01

    Vapor-compression refrigeration systems are among the essential energy conversion systems for humankind and consume huge amounts of energy. Many effective optimization methods exist for promoting the energy efficiency of these systems, but they rely mainly on engineering experience and computer simulations rather than theoretical analysis, owing to the complex and vague physical essence of the processes involved. We propose a theoretical global optimization method based on in-depth physical analysis of the processes involved, i.e. heat transfer analysis for the condenser and evaporator, through introducing the entransy theory, and thermodynamic analysis for the compressor and expansion valve. The integration of the heat transfer and thermodynamic analyses forms the overall physical optimization model for such systems, describing the relation between all the unknown parameters and the known conditions, which makes theoretical global optimization possible. With the aid of mathematical conditional-extremum solutions, an optimization equation group and the optimal configuration of all the unknown parameters are obtained analytically. Finally, via the optimization of a typical vapor-compression refrigeration system under various working conditions to minimize the total heat transfer area of the heat exchangers, the validity and superiority of the newly proposed optimization method are demonstrated. - Highlights: • A global optimization method for vapor-compression systems is proposed. • Integrating heat transfer and thermodynamic analyses forms the optimization model. • A mathematical relation between design parameters and requirements is derived. • Entransy dissipation is introduced into heat transfer analysis. • The validity of the method is proved via optimization of practical cases

  5. Analysis of time integration methods for the compressible two-fluid model for pipe flow simulations

    NARCIS (Netherlands)

    B. Sanderse (Benjamin); I. Eskerud Smith (Ivar); M.H.W. Hendrix (Maurice)

    2017-01-01

    In this paper we analyse different time integration methods for the two-fluid model and propose the BDF2 method as the preferred choice to simulate transient compressible multiphase flow in pipelines. Compared to the prevailing Backward Euler method, the BDF2 scheme has a significantly

  6. Images compression in nuclear medicine

    International Nuclear Information System (INIS)

    Rebelo, M.S.; Furuie, S.S.; Moura, L.

    1992-01-01

    The performance of two methods for image compression in nuclear medicine was evaluated: the exact LZW method and the approximate cosine-transform method. The results showed that the approximate method produced images of agreeable quality for visual analysis, at compression rates considerably higher than those of the exact method. (C.G.C.)

  7. Retrofit device and method to improve humidity control of vapor compression cooling systems

    Science.gov (United States)

    Roth, Robert Paul; Hahn, David C.; Scaringe, Robert P.

    2016-08-16

    A method and device for improving moisture removal capacity of a vapor compression system is disclosed. The vapor compression system is started up with the evaporator blower initially set to a high speed. A relative humidity in a return air stream is measured with the evaporator blower operating at the high speed. If the measured humidity is above the predetermined high relative humidity value, the evaporator blower speed is reduced from the initially set high speed to the lowest possible speed. The device is a control board connected with the blower and uses a predetermined change in measured relative humidity to control the blower motor speed.
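
    A minimal sketch of the start-up logic described above (the RH threshold and speed labels are assumptions for illustration, not values from the patent):

```python
HIGH_RH_THRESHOLD = 55.0   # assumed return-air %RH setpoint

def select_blower_speed(measured_rh, speeds=("low", "medium", "high")):
    """Start at high speed; if return-air RH exceeds the threshold,
    drop to the lowest speed so the colder coil condenses more moisture."""
    if measured_rh > HIGH_RH_THRESHOLD:
        return speeds[0]    # lowest speed -> better latent (moisture) removal
    return speeds[-1]       # otherwise keep high speed for sensible cooling

print(select_blower_speed(62.0))  # -> 'low'
```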

  8. Finite Element Analysis of Increasing Column Section and CFRP Reinforcement Method under Different Axial Compression Ratio

    Science.gov (United States)

    Jinghai, Zhou; Tianbei, Kang; Fengchi, Wang; Xindong, Wang

    2017-11-01

    Eight frame joints with fewer stirrups in the core area are simulated with the ABAQUS finite element software. The composite reinforcement method combines carbon fiber (CFRP) strengthening with an increased column section; the axial compression ratios of the reinforced specimens are 0.3, 0.45 and 0.6. The load-displacement curves, ductility and stiffness are analyzed, and it is found that the axial compression ratio has a great influence on the bearing capacity of the increased-column-section strengthening method and little influence on the carbon fiber reinforcement method. The different strengthening schemes improve the ultimate bearing capacity and ductility of the frame joints to a certain extent: the composite reinforcement method improves them most significantly, followed by increasing the column section, while carbon fiber reinforcement of the joints yields the smallest increase.

  9. Method for Calculation of Steam-Compression Heat Transformers

    Directory of Open Access Journals (Sweden)

    S. V. Zditovetckaya

    2012-01-01

    Full Text Available The paper considers a method for the joint numerical analysis of the cycle parameters and heat-exchange equipment of a steam-compression heat transformer contour, taking into account non-stationary operation and irreversible losses in the devices and the pipeline contour. The method has been implemented as a software package and can be used for the design or selection of a heat transformer, taking into account the coolant and the actual equipment included in its structure. The paper presents investigation results revealing the influence of pressure losses in the evaporator and the condenser on the coolant side, caused by friction and local resistances, on the power efficiency of a heat transformer operating as a refrigerating or heating installation and as a heat pump. The actually obtained operational parameters of the heat pump in nominal and off-design modes depend on the specific equipment included in the contour.

  10. Stegano-Crypto Hiding Encrypted Data in Encrypted Image Using Advanced Encryption Standard and Lossy Algorithm

    Directory of Open Access Journals (Sweden)

    Ari Shawakat Tahir

    2015-12-01

    Full Text Available Steganography is the art and science of hiding information by embedding messages within other, seemingly harmless messages, and much research is devoted to it. The proposed system uses the AES algorithm and a lossy technique to overcome the limitations of previous work and to increase processing speed. The sender uses the AES algorithm to encrypt the message and the image, then uses the LSB technique to hide the encrypted data in the encrypted image. The receiver recovers the original data using the keys that were used in the encryption process. The proposed system has been implemented in NetBeans 7.3 and uses images and data of different sizes to measure the system's speed.
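
    A minimal sketch of the LSB hiding step referenced above, operating on a raw grayscale array; the AES stage is omitted and the payload is assumed to already be ciphertext bytes:

```python
import numpy as np

def lsb_embed(pixels, payload):
    """Hide payload bytes in the least significant bits of a uint8 image."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()                      # copy; cover stays intact
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def lsb_extract(pixels, n_bytes):
    """Recover n_bytes of payload from the image LSBs."""
    bits = (pixels.flatten()[:n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(2).integers(0, 256, (64, 64), dtype=np.uint8)
stego = lsb_embed(cover, b"ciphertext")
print(lsb_extract(stego, 10))                    # -> b'ciphertext'
```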

  11. Method for data compression by associating complex numbers with files of data values

    Science.gov (United States)

    Feo, John Thomas; Hanks, David Carlton; Kraay, Thomas Arthur

    1998-02-10

    A method for compressing data for storage or transmission. Given a complex polynomial and a value assigned to each root, a root generated data file (RGDF) is created, one entry at a time. Each entry is mapped to a point in a complex plane. An iterative root finding technique is used to map the coordinates of the point to the coordinates of one of the roots of the polynomial. The value associated with that root is assigned to the entry. An equational data compression (EDC) method reverses this procedure. Given a target data file, the EDC method uses a search algorithm to calculate a set of m complex numbers and a value map that will generate the target data file. The error between a simple target data file and generated data file is typically less than 10%. Data files can be transmitted or stored without loss by transmitting the m complex numbers, their associated values, and an error file whose size is at most one-tenth of the size of the input data file.
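
    The iterative root-finding step of the scheme can be illustrated with Newton's method on a small polynomial; the polynomial, starting point, and root values below are invented for the sketch and are not the patent's parameters.

```python
import numpy as np

def root_value(z, roots, values, iters=50):
    """Newton-iterate p(z) = prod(z - r) from point z, then return the
    data value assigned to the root the iteration lands on."""
    roots = np.asarray(roots, dtype=complex)
    for _ in range(iters):
        diffs = z - roots
        p = np.prod(diffs)                       # p(z)
        dp = sum(np.prod(np.delete(diffs, i)) for i in range(len(roots)))
        if dp == 0:
            break
        z -= p / dp                              # Newton step
    return values[int(np.argmin(np.abs(z - roots)))]

roots = [1 + 0j, -0.5 + 0.866j, -0.5 - 0.866j]   # approx. cube roots of unity
values = [0, 1, 2]                               # data value attached to each root
print(root_value(0.2 + 0.7j, roots, values))
```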

  12. Graphene Oxide in Lossy Mode Resonance-Based Optical Fiber Sensors for Ethanol Detection

    Directory of Open Access Journals (Sweden)

    Miguel Hernaez

    2017-12-01

    Full Text Available The influence of graphene oxide (GO) on the features of an optical fiber ethanol sensor based on lossy mode resonances (LMR) has been studied in this work. Four different sensors were built with this aim, each comprising a multimode optical fiber core fragment coated with a SnO2 thin film. Layer-by-layer (LbL) coatings made of 1, 2 and 4 bilayers of polyethyleneimine (PEI) and graphene oxide were deposited onto three of these devices, and their behavior as aqueous ethanol sensors was characterized and compared with the sensor without GO. The sensors with GO showed much better performance, with a maximum sensitivity enhancement of 176% with respect to the sensor without GO. To our knowledge, this is the first time that GO has been used to make an optical fiber sensor based on LMR.

  13. Graphene Oxide in Lossy Mode Resonance-Based Optical Fiber Sensors for Ethanol Detection.

    Science.gov (United States)

    Hernaez, Miguel; Mayes, Andrew G; Melendi-Espina, Sonia

    2017-12-27

    The influence of graphene oxide (GO) over the features of an optical fiber ethanol sensor based on lossy mode resonances (LMR) has been studied in this work. Four different sensors were built with this aim, each comprising a multimode optical fiber core fragment coated with a SnO₂ thin film. Layer by layer (LbL) coatings made of 1, 2 and 4 bilayers of polyethyleneimine (PEI) and graphene oxide were deposited onto three of these devices and their behavior as aqueous ethanol sensors was characterized and compared with the sensor without GO. The sensors with GO showed much better performance with a maximum sensitivity enhancement of 176% with respect to the sensor without GO. To our knowledge, this is the first time that GO has been used to make an optical fiber sensor based on LMR.

  14. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
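
    A minimal sketch of the NMSE figure of merit used above (the exact normalization in the dissertation may differ; here the difference-image energy is normalized by the original image's energy):

```python
import numpy as np

def nmse(original, reconstructed):
    """Normalized mean-square error of the difference image."""
    original = original.astype(np.float64)
    diff = original - reconstructed.astype(np.float64)
    return float(np.sum(diff ** 2) / np.sum(original ** 2))

rng = np.random.default_rng(3)
img = rng.integers(0, 4096, (512, 512)).astype(np.float64)  # toy 12-bit image
lossy = img + rng.normal(0, 8, img.shape)                   # stand-in for codec loss
print(f"NMSE = {nmse(img, lossy):.2e}")
```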

  15. Delay reduction in lossy intermittent feedback for generalized instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.

    2013-10-01

    In this paper, we study the effect of lossy intermittent feedback on the multicast decoding delay performance of generalized instantly decodable network coding. Feedback loss events create uncertainty at the sender about the reception status of different receivers, and thus uncertainty in accurately determining subsequent instantly decodable coded packets. To solve this problem, we first identify the different possibilities of uncertain packets at the sender and their probabilities. We then derive an expression for the mean decoding delay. We formulate the Generalized Instantly Decodable Network Coding (G-IDNC) minimum decoding delay problem as a maximum weight clique problem. Since finding the optimal solution is NP-hard, we design a variant of the algorithm employed in [1]. Our algorithm is compared to the two blind graph update approaches proposed in [2] through extensive simulations. Results show that our algorithm outperforms the blind approaches in all situations and achieves a tolerable degradation, relative to perfect feedback, for large feedback loss periods. © 2013 IEEE.

  16. Delay reduction in lossy intermittent feedback for generalized instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.; Sorour, Sameh; Alouini, Mohamed-Slim; Ai-Naffouri, Tareq Y.

    2013-01-01

    In this paper, we study the effect of lossy intermittent feedback on the multicast decoding delay performance of generalized instantly decodable network coding. Feedback loss events create uncertainty at the sender about the reception status of different receivers, and thus uncertainty in accurately determining subsequent instantly decodable coded packets. To solve this problem, we first identify the different possibilities of uncertain packets at the sender and their probabilities. We then derive an expression for the mean decoding delay. We formulate the Generalized Instantly Decodable Network Coding (G-IDNC) minimum decoding delay problem as a maximum weight clique problem. Since finding the optimal solution is NP-hard, we design a variant of the algorithm employed in [1]. Our algorithm is compared to the two blind graph update approaches proposed in [2] through extensive simulations. Results show that our algorithm outperforms the blind approaches in all situations and achieves a tolerable degradation, relative to perfect feedback, for large feedback loss periods. © 2013 IEEE.

  17. Highly Efficient Compression Algorithms for Multichannel EEG.

    Science.gov (United States)

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
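
    Several of the cited methods share a predictor-plus-entropy-coder pattern; the sketch below uses an order-2 linear predictor and a zeroth-order entropy estimate as an illustrative stand-in for the paper's models.

```python
import numpy as np

def predictive_residuals(signal):
    """Order-2 predictor x_hat[n] = 2*x[n-1] - x[n-2]; the residuals
    are what the entropy coder would actually store."""
    x = np.asarray(signal, dtype=np.int64)
    return x[2:] - (2 * x[1:-1] - x[:-2])

def entropy_bits_per_sample(residuals):
    """Empirical zeroth-order entropy: a lower bound on the coded size."""
    _, counts = np.unique(residuals, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

eeg = np.cumsum(np.random.default_rng(4).integers(-3, 4, 10000))  # toy channel
res = predictive_residuals(eeg)
print(f"~{entropy_bits_per_sample(res):.2f} bits/sample vs. 16 raw")
```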

  18. A method for predicting the impact velocity of a projectile fired from a compressed air gun facility

    International Nuclear Information System (INIS)

    Attwood, G.J.

    1988-03-01

    This report describes the development and use of a method for calculating the velocity at impact of a projectile fired from a compressed air gun. The method is based on a simple but effective approach which has been incorporated into a computer program. The method was developed principally for use with the Horizontal Impact Facility at AEE Winfrith but has been adapted so that it can be applied to any compressed air gun of a similar design. The method has been verified by comparison of predicted velocities with test data and the program is currently being used in a predictive manner to specify test conditions for the Horizontal Impact Facility at Winfrith. (author)
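
    The report's actual model is not reproduced here, but a plausible minimal estimate of the same kind (isentropic expansion of the reservoir gas driving the projectile, friction and leakage ignored, all numbers invented) illustrates the calculation:

```python
import math

def impact_velocity(p0, v0, bore_area, barrel_len, mass,
                    gamma=1.4, p_atm=101325.0):
    """Energy-balance estimate: gas work W = (p0*V0 - p1*V1)/(gamma - 1)
    for isentropic expansion, minus atmospheric back-pressure work."""
    v1 = v0 + bore_area * barrel_len             # final gas volume
    p1 = p0 * (v0 / v1) ** gamma                 # isentropic end pressure
    work = (p0 * v0 - p1 * v1) / (gamma - 1) - p_atm * bore_area * barrel_len
    return math.sqrt(max(2.0 * work / mass, 0.0))

# Invented example: 2 MPa, 0.05 m^3 reservoir; 0.07 m^2 bore; 4 m barrel; 50 kg
print(f"{impact_velocity(2e6, 0.05, 0.07, 4.0, 50.0):.1f} m/s")
```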

  19. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app

  20. Research of Block-Based Motion Estimation Methods for Video Compression

    Directory of Open Access Journals (Sweden)

    Tropchenko Andrey

    2016-08-01

    Full Text Available This work is a review of the block-based algorithms used for motion estimation in video compression. It examines different types of block-based algorithms, ranging from the simplest, Full Search, to fast adaptive algorithms such as Hierarchical Search. The algorithms evaluated in this paper are widely accepted by the video compression community and have been used in implementing various standards, such as MPEG-4 Visual and H.264. The work also presents a very brief introduction to the entire flow of video compression.
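
    As a baseline for the algorithms surveyed, a minimal exhaustive (Full Search) block matcher with a SAD cost:

```python
import numpy as np

def full_search(ref, cur, top, left, block=16, radius=7):
    """Return the (dy, dx) motion vector minimizing the sum of absolute
    differences between a block of `cur` and candidate blocks of `ref`."""
    target = cur[top:top + block, left:left + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue                          # candidate leaves the frame
            sad = np.abs(ref[y:y + block, x:x + block].astype(np.int32) - target).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

rng = np.random.default_rng(5)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, -3), axis=(0, 1))          # synthetic global motion
print(full_search(ref, cur, 16, 16))              # -> (-2, 3)
```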

  1. Key technical issues associated with a method of pulse compression. Final technical report

    International Nuclear Information System (INIS)

    Hunter, R.O. Jr.

    1980-06-01

    Key technical issues for angular multiplexing as a method of pulse compression in a 100 kJ KrF laser have been studied. Environmental issues studied include seismic vibrations, man-made vibrations, air propagation, turbulence, and thermal-gradient-induced density fluctuations. These studies have been incorporated in the design of mirror mounts and an alignment system, both of which are reported. A design study and performance analysis of the final amplifier have been undertaken. The pulse compression optical train has been designed and its performance assessed. Individual components are described, and analytical relationships between the optical component size, surface quality, damage threshold and final focus properties are derived. The optical train's primary aberrations are obtained and a method for aberration minimization is presented. Cost algorithms for the mirrors, mounts, and electrical hardware are integrated into a cost model to determine system costs as a function of pulse length, aperture size, and spot size

  2. A Novel CAE Method for Compression Molding Simulation of Carbon Fiber-Reinforced Thermoplastic Composite Sheet Materials

    Directory of Open Access Journals (Sweden)

    Yuyang Song

    2018-06-01

    Full Text Available Their high specific strength and stiffness at lower cost make discontinuous fiber-reinforced thermoplastic (FRT) materials an ideal choice for lightweight applications in the automotive industry. Compression molding is one of the preferred manufacturing processes for such materials, as it offers the opportunity to maintain a longer fiber length and higher-volume production. In the past, we have demonstrated that compression molding of FRT in bulk form can be simulated by treating the melt flow as a continuum using the conservation of mass and momentum equations. However, the compression molding of such materials in sheet form using a similar approach does not work well: the assumption of melt flow as a continuum does not hold for such deformation processes. To address this challenge, we have developed a novel simulation approach. First, the draping of the sheet is simulated as a structural deformation using the explicit finite element approach. Next, the draped shape is compressed using fluid mechanics equations. The proposed method was verified by building a physical part and comparing the predicted fiber orientation and warpage with measurements performed on the physical parts. The developed method and tools are expected to help expedite the development of FRT parts, which will help achieve lightweight targets in the automotive industry.

  3. Fast lossless compression via cascading Bloom filters.

    Science.gov (United States)

    Rozov, Roye; Shamir, Ron; Halperin, Eran

    2014-01-01

    Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space
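
    Not BARCODE itself, but a minimal Bloom-filter sketch of the insert/query primitive the method builds its cascade on:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m=1 << 20, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter()
for read in ("ACGTACGTAC", "TTGCAGGCTA"):         # toy 10-bp 'reads'
    bf.add(read)                                  # encoding: hash reads in
print("ACGTACGTAC" in bf, "AAAAAAAAAA" in bf)     # decode side queries subsequences
```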

  4. Triaxial- and uniaxial-compression testing methods developed for extraction of pore water from unsaturated tuff, Yucca Mountain, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Mower, T.E.; Higgins, J.D. [Colorado School of Mines, Golden, CO (USA). Dept. of Geology and Geological Engineering; Yang, I.C. [Geological Survey, Denver, CO (USA). Water Resources Div.

    1989-12-31

    To support the study of the hydrologic system in the unsaturated zone at Yucca Mountain, Nevada, two extraction methods were examined to obtain representative, uncontaminated pore-water samples from unsaturated tuff. Results indicate that triaxial compression, which uses a standard cell, can remove pore water from nonwelded tuff that has an initial moisture content greater than 11% by weight; uniaxial compression, which uses a specially fabricated cell, can extract pore water from nonwelded tuff that has an initial moisture content greater than 8% and from welded tuff that has an initial moisture content greater than 6.5%. For the ambient moisture conditions of Yucca Mountain tuffs, uniaxial compression is the more efficient method of pore-water extraction. 12 refs., 7 figs., 2 tabs.

  5. Compressible flow modelling in unstructured mesh topologies using numerical methods developed for incompressible flows

    International Nuclear Information System (INIS)

    Caruso, A.; Mechitoua, N.; Duplex, J.

    1995-01-01

    The R and D thermal hydraulic codes, notably the finite difference codes Melodie (2D) and ESTET (3D) and the 2D and 3D versions of the finite element code N3S, were initially developed for incompressible, possibly dilatable, turbulent flows, i.e. those where density is not pressure-dependent. Subsequent minor modifications to these finite difference code algorithms enabled extension of their scope to subsonic compressible flows. The first applications in both single-phase and two-phase flow contexts have now been completed. This paper presents the techniques used to adapt these algorithms for the processing of compressible flows in an N3S-type finite element code, whereby complex geometries normally difficult to model in finite difference meshes could be successfully dealt with. The development of version 3.0 of the N3S code led to dilatable flow calculations at lower cost. On this basis, a 2D prototype version of N3S was programmed, tested and validated, drawing maximum benefit from Cray vectorization possibilities and from physical, numerical and data processing experience with other fluid dynamics codes, such as Melodie, ESTET or TELEMAC. The algorithms are the same as those used in finite difference codes, but their formulation is variational. The first part of the paper deals with the fundamental equations involved, expressed in basic form, together with the associated numerical method. The modifications to the k-epsilon turbulence model extended to compressible flows are also described. The second part presents the algorithm used, indicating the additional terms required by the extension. The third part presents the equations in integral form and the associated matrix systems. The solutions adopted for calculation of the compressibility-related terms are indicated. Finally, a few representative applications and test cases are discussed. These include subsonic, but also transonic and supersonic cases, showing the shock responses of the numerical method. The application of

  6. Computer calculations of compressibility of natural gas

    Energy Technology Data Exchange (ETDEWEB)

    Abou-Kassem, J.H.; Mattar, L.; Dranchuk, P.M

    An alternative method for the calculation of pseudo reduced compressibility of natural gas is presented. The method is incorporated into the routines by adding a single FORTRAN statement before the RETURN statement. The method is suitable for computer and hand-held calculator applications. It produces the same reduced compressibility as other available methods but is computationally superior. Tabular definitions of coefficients and comparisons of predicted pseudo reduced compressibility using different methods are presented, along with appended FORTRAN subroutines. 7 refs., 2 tabs.
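
    The quantity involved can be written as c_pr = 1/p_pr - (1/z)(dz/dp_pr) at constant reduced temperature; the sketch below pairs that formula with a central-difference derivative and a toy placeholder in place of the real z-factor correlation (production codes use fitted correlations such as those of Dranchuk et al.).

```python
def z_factor(p_pr, t_pr):
    """Toy placeholder; a real routine would evaluate a fitted correlation."""
    return 1.0 - 0.06 * p_pr / t_pr + 0.004 * (p_pr / t_pr) ** 2

def pseudo_reduced_compressibility(p_pr, t_pr, dp=1e-5):
    """c_pr = 1/p_pr - (1/z) * dz/dp_pr at constant reduced temperature."""
    z = z_factor(p_pr, t_pr)
    dzdp = (z_factor(p_pr + dp, t_pr) - z_factor(p_pr - dp, t_pr)) / (2 * dp)
    return 1.0 / p_pr - dzdp / z

print(pseudo_reduced_compressibility(2.5, 1.6))
```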

  7. Verlustbehaftete und verlustlose Datenkomprimierung für Daten von Hochenergiephysik-Experimenten = Lossy and lossless data compression for data from high-energy physics experiments

    CERN Document Server

    Patauner, Christian; Marchioro, Alessandro

    2011-01-01

    This dissertation describes a data compression system optimized for high energy particle detectors. The aim is to reduce the data in the front-end electronics installed in different kind of particle detectors as used for example along the Large Hadron Collider at CERN. The signals collected and digitized from calorimeters, time projection chambers and in general from all detectors with a large number of sensors produce an extensive amount of data, which need to be reduced. Real-time data compression algorithms applied right at the detector front-end is able to reduce the amount of data to be transmitted and stored as early as possible in the data chain. Different lossless and lossy compression methods are analyzed regarding their efficiency in compressing data from particle detectors that produce signals amplified and/or shaped to various waveforms. In addition, the complexity of the algorithms is evaluated to determine their suitability for a real-time hardware implementation. From the analyzed methods, a ne...

  8. A practical discrete-adjoint method for high-fidelity compressible turbulence simulations

    International Nuclear Information System (INIS)

    Vishnampet, Ramanathan; Bodony, Daniel J.; Freund, Jonathan B.

    2015-01-01

    Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvements. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs, though this is predicated on the availability of a sufficiently accurate solution of the forward and adjoint systems. These are challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. Here, we analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space–time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge–Kutta-like scheme, though it would be just first-order accurate if used outside the adjoint formulation for time integration, with finite-difference spatial operators for the adjoint system. Its computational cost only modestly exceeds that of the flow equations. We confirm that

  9. A Method to Detect AAC Audio Forgery

    Directory of Open Access Journals (Sweden)

    Qingzhong Liu

    2015-08-01

    Full Text Available Advanced Audio Coding (AAC), a standardized lossy compression scheme for digital audio designed to be the successor of the MP3 format, generally achieves better sound quality than MP3 at similar bit rates. While AAC is the default or standard audio format for many devices, and AAC audio files may be presented as important digital evidence, authentication of these audio files is highly needed but largely missing. In this paper, we propose a scheme to expose tampered AAC audio streams that are encoded at the same encoding bit rate. Specifically, we design a shift-recompression-based method to retrieve the differential features between the re-encoded audio stream at each shift and the original audio stream; a learning classifier is employed to recognize the different patterns of differential features of doctored forgery files and original (untouched) audio files. Experimental results show that our approach is very promising and effective in detecting same-bit-rate forgery in AAC audio streams. Our study also shows that shift-recompression-based differential analysis is very effective for detection of MP3 forgery at the same bit rate.
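
    A heavily simplified sketch of the shift-recompression probe; `aac_roundtrip` below is a hypothetical stand-in (a real pipeline would call an external AAC encoder/decoder), and the feature definition is illustrative only:

```python
import numpy as np

def aac_roundtrip(samples, bitrate):
    """Hypothetical stand-in for encode->decode through an AAC codec;
    here modeled as small additive noise for demonstration purposes."""
    return samples + np.random.default_rng(0).normal(0, 1e-3, samples.shape)

def shift_recompression_features(samples, bitrate=128, max_shift=8):
    """For each shift, re-encode the shifted stream and measure how far it
    moves; spliced (tampered) audio tends to behave differently from
    single-generation audio under this probe."""
    feats = []
    for s in range(max_shift):
        shifted = samples[s:]
        feats.append(float(np.mean((aac_roundtrip(shifted, bitrate) - shifted) ** 2)))
    return feats                                  # fed to a learned classifier

audio = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)  # 1 s toy tone
print(shift_recompression_features(audio)[:4])
```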

  10. Comparison of the effectiveness of compression stockings and layer compression systems in venous ulceration treatment

    Science.gov (United States)

    Jawień, Arkadiusz; Cierzniakowska, Katarzyna; Cwajda-Białasik, Justyna; Mościcka, Paulina

    2010-01-01

    Introduction: The aim of the research was to compare the dynamics of venous ulcer healing when treated with compression stockings or with original two- and four-layer bandage systems. Material and methods: A group of 46 patients suffering from venous ulcers was studied, consisting of 36 (78.3%) women and 10 (21.7%) men aged between 41 and 88 years (average age 66.6 years, median 67). Patients were randomized into three groups, for treatment with the ProGuide two-layer system, Profore four-layer compression, or class II compression stockings. In the case of multi-layer compression, compression ensuring 40 mmHg pressure at ankle level was used. Results: In all patients, independently of the type of compression therapy, statistically significant changes of ulceration area over time were observed (Student's t test for matched pairs, p < 0.05). The largest reduction of ulceration area in each of the successive measurements was observed in patients treated with the four-layer system, on average 0.63 cm²/week. The smallest loss of ulceration area was observed in patients using compression stockings, on average 0.44 cm²/week. However, the observed differences were not statistically significant (Kruskal-Wallis test H = 4.45, p > 0.05). Conclusions: Systematic compression therapy, applied with an initial pressure of 40 mmHg, is an effective method of conservative treatment of venous ulcers. Compression stockings and the prepared multi-layer compression systems were characterized by similar clinical effectiveness. PMID:22419941

  11. A Comparative Study of Compression Methods and the Development of CODEC Program of Biological Signal for Emergency Telemedicine Service

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, T.S.; Kim, J.S. [Changwon National University, Changwon (Korea); Lim, Y.H. [Visionite Co., Ltd., Seoul (Korea); Yoo, S.K. [Yonsei University, Seoul (Korea)

    2003-05-01

    In an emergency telemedicine system such as the High-quality Multimedia based Real-time Emergency Telemedicine (HMRET) service, it is very important to examine the status of the patient continuously using multimedia data including the biological signals (ECG, BP, respiration, SpO2) of the patient. In order to transmit these data in real time through communication means of limited transmission capacity, it is also necessary to compress the biological data besides the other multimedia data. For this purpose, we investigate and compare ECG compression techniques in the time domain and in the wavelet transform domain, and present an effective lossless compression method for the biological signals using the JPEG Huffman table for an emergency telemedicine system. For the HMRET service, we developed the lossless compression and reconstruction program of the biological signals in MSVC++ 6.0 using the DPCM method and the JPEG Huffman table, and tested it in an internet environment. (author). 15 refs., 17 figs., 7 tabs.
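
    A minimal sketch of the DPCM stage described above (first-order prediction; the Huffman coding of the residuals is left out):

```python
import numpy as np

def dpcm_encode(samples):
    """First-order DPCM: keep the first sample plus successive differences."""
    x = np.asarray(samples, dtype=np.int32)
    return np.concatenate(([x[0]], np.diff(x)))

def dpcm_decode(codes):
    """Invert DPCM exactly (lossless) with a running sum."""
    return np.cumsum(codes)

ecg = np.array([512, 515, 521, 530, 526, 519], dtype=np.int32)  # toy samples
codes = dpcm_encode(ecg)
assert np.array_equal(dpcm_decode(codes), ecg)
print(codes)   # small residuals map to short Huffman codes
```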

  12. JPEG and wavelet compression of ophthalmic images

    Science.gov (United States)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and methods of digital image compression that produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different levels. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression; for a given image size, wavelet compression produced better images than JPEG compression. Overall, it was shown that images had to be compressed to below 2.5 percent for JPEG and 1.7 percent for wavelet compression before fine detail was lost, or before image quality was too poor to make a reliable diagnosis.

  13. The direct Discontinuous Galerkin method for the compressible Navier-Stokes equations on arbitrary grids

    Science.gov (United States)

    Yang, Xiaoquan; Cheng, Jian; Liu, Tiegang; Luo, Hong

    2015-11-01

    The direct discontinuous Galerkin (DDG) method based on a traditional discontinuous Galerkin (DG) formulation is extended and implemented for solving the compressible Navier-Stokes equations on arbitrary grids. Compared to the widely used second Bassi-Rebay (BR2) scheme for the discretization of diffusive fluxes, the DDG method has two attractive features: first, it is simple to implement, as it is directly based on the weak form and therefore needs no local or global lifting operator; second, it can deliver comparable, if not better, results than the BR2 scheme in a more efficient way, with much less CPU time. Two approaches to computing the DDG flux for the Navier-Stokes equations are presented in this work: one based on conservative variables, the other on primitive variables. In the implementation of the DDG method for arbitrary grids, the definition of the mesh size plays a critical role, as the formation of the viscous flux explicitly depends on the geometry. A variety of test cases are presented to demonstrate the accuracy and efficiency of the DDG method for discretizing the viscous fluxes in the compressible Navier-Stokes equations on arbitrary grids.

  14. The production of fully deacetylated chitosan by compression method

    Directory of Open Access Journals (Sweden)

    Xiaofei He

    2016-03-01

    Full Text Available Chitosan's activities are significantly affected by its degree of deacetylation (DDA), while fully deacetylated chitosan is difficult to produce at large scale. Therefore, this paper introduces a compression method for preparing 100% deacetylated chitosan with less environmental pollution. The product is characterized by XRD, FT-IR, UV and HPLC. The 100% fully deacetylated chitosan is produced under low-alkali-concentration, high-pressure conditions, requiring only 15% alkali solution and a 1:10 chitosan powder to NaOH solution ratio under 0.11–0.12 MPa for 120 min. When the alkali concentration varied from 5% to 15%, chitosan with an ultra-high DDA value (up to 95%) was produced.

  15. A comparative analysis of the cryo-compression and cryo-adsorption hydrogen storage methods

    Energy Technology Data Exchange (ETDEWEB)

    Petitpas, G [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Benard, P [Universite du Quebec a Trois-Rivieres (Canada); Klebanoff, L E [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Xiao, J [Universite du Quebec a Trois-Rivieres (Canada); Aceves, S M [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-07-01

    While conventional low-pressure LH₂ dewars have existed for decades, advanced methods of cryogenic hydrogen storage have recently been developed. These advanced methods are cryo-compression and cryo-adsorption hydrogen storage, which operate best in the temperature range 30–100 K. We present a comparative analysis of both approaches, examining how pressure and/or sorbent materials are used to effectively increase onboard H₂ density and dormancy. We start by reviewing some basic aspects of LH₂ properties and conventional means of storing it. From there we describe the cryo-compression and cryo-adsorption hydrogen storage methods, and then explore the relationship between them, clarifying the materials science and physics of the two approaches in trying to solve the same hydrogen storage task (~5–8 kg H₂, typical of light-duty vehicles). Assuming that the balance of plant and the available volume for the storage system in the vehicle are identical for both approaches, the comparison focuses on how the respective storage capacities, vessel weight and dormancy vary as a function of temperature, pressure and type of cryo-adsorption material (especially powder MOF-5 and MIL-101). By performing a comparative analysis, we clarify the science of each approach individually, identify the regimes where the attributes of each can be maximized, elucidate the properties of these systems during refueling, and probe the possible benefits of a combined “hybrid” system with both cryo-adsorption and cryo-compression phenomena operating at the same time. In addition, the relationships found between onboard H₂ capacity, pressure vessel and/or sorbent mass and dormancy as a function of rated pressure, type of sorbent material and fueling conditions are useful as general design guidelines for future engineering efforts using these two hydrogen storage approaches.

  16. Performance evaluation of a lossy transmission lines based diode detector at cryogenic temperature.

    Science.gov (United States)

    Villa, E; Aja, B; de la Fuente, L; Artal, E

    2016-01-01

    This work is focused on the design, fabrication, and performance analysis of a square-law Schottky diode detector based on lossy transmission lines working at cryogenic temperature (15 K). The design analysis of a microwave detector based on a planar gallium-arsenide low-effective-Schottky-barrier-height diode is reported, aimed at achieving large input return loss as well as flat sensitivity versus frequency. The designed circuit demonstrates good sensitivity as well as good return loss over a wide bandwidth at Ka-band, at both room (300 K) and cryogenic (15 K) temperatures. A sensitivity of 1000 mV/mW and an input return loss better than 12 dB have been achieved when the circuit works as a zero-bias Schottky diode detector at room temperature; at cryogenic temperature the sensitivity increases to at least 2200 mV/mW, with the need of a DC bias current.

  17. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  18. Fractal compression of time series

    OpenAIRE

    Lysík, Martin

    2009-01-01

    The aim of this work was to look for one-dimensional distributions of fractals in real-world time series and to use them to compress these time series. The usability of these principles for both lossless and lossy compression was examined. Based on the problem analysis, a basic compression algorithm was first designed and implemented. This was progressively extended with simple heuristics for better performance, and with other techniques intended to reduce its deficiencies. As the resul...

  19. A Schur complement method for compressible two-phase flow models

    International Nuclear Information System (INIS)

    Dao, Thu-Huyen; Ndjinga, Michael; Magoules, Frederic

    2014-01-01

    In this paper, we will report our recent efforts to apply a Schur complement method for nonlinear hyperbolic problems. We use the finite volume method and an implicit version of the Roe approximate Riemann solver. With the interface variable introduced in [4] in the context of single phase flows, we are able to simulate two-fluid models ([12]) with various schemes such as upwind, centered or Rusanov. Moreover, we introduce a scaling strategy to improve the condition number of both the interface system and the local systems. Numerical results for the isentropic two-fluid model and the compressible Navier-Stokes equations in various 2D and 3D configurations and various schemes show that our method is robust and efficient. The scaling strategy considerably reduces the number of GMRES iterations in both interface system and local system resolutions. Comparisons of performances with classical distributed computing with up to 218 processors are also reported. (authors)
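
    The algebraic core of any Schur complement method can be shown on a 2×2 block system; a minimal dense sketch (the paper's interface variable, Roe solver and scaling strategy are not reproduced here):

```python
import numpy as np

def solve_via_schur(A, B, C, D, f, g):
    """Solve [[A, B], [C, D]] [x, y] = [f, g] by eliminating x:
    (D - C A^-1 B) y = g - C A^-1 f, then back-substitute for x."""
    A_inv_B = np.linalg.solve(A, B)
    A_inv_f = np.linalg.solve(A, f)
    S = D - C @ A_inv_B                      # Schur complement (interface system)
    y = np.linalg.solve(S, g - C @ A_inv_f)  # interface unknowns
    x = A_inv_f - A_inv_B @ y                # local unknowns
    return x, y
```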

  20. A discrete fibre dispersion method for excluding fibres under compression in the modelling of fibrous tissues.

    Science.gov (United States)

    Li, Kewei; Ogden, Ray W; Holzapfel, Gerhard A

    2018-01-01

    Recently, micro-sphere-based methods derived from the angular integration approach have been used for excluding fibres under compression in the modelling of soft biological tissues. However, recent studies have revealed that many of the widely used numerical integration schemes over the unit sphere are inaccurate for large deformation problems even without excluding fibres under compression. Thus, in this study, we propose a discrete fibre dispersion model based on a systematic method for discretizing a unit hemisphere into a finite number of elementary areas, such as spherical triangles. Over each elementary area, we define a representative fibre direction and a discrete fibre density. Then, the strain energy of all the fibres distributed over each elementary area is approximated based on the deformation of the representative fibre direction weighted by the corresponding discrete fibre density. A summation of fibre contributions over all elementary areas then yields the resultant fibre strain energy. This treatment allows us to exclude fibres under compression in a discrete manner by evaluating the tension-compression status of the representative fibre directions only. We have implemented this model in a finite-element programme and illustrate it with three representative examples, including simple tension and simple shear of a unit cube, and non-homogeneous uniaxial extension of a rectangular strip. The results of all three examples are consistent and accurate compared with the previously developed continuous fibre dispersion model, and that is achieved with a substantial reduction of computational cost. © 2018 The Author(s).
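
    A minimal sketch of the discrete summation step, with placeholder direction vectors, densities and fibre energy function psi_f (the tension test uses the squared fibre stretch I₄ = ‖Fn‖²):

```python
import numpy as np

def fibre_energy(F, directions, densities, psi_f):
    """Sum fibre strain energy over discrete representative directions,
    excluding fibres under compression (I4 <= 1), as in the discrete
    dispersion treatment described above."""
    total = 0.0
    for n, rho in zip(directions, densities):
        I4 = float(np.dot(F @ n, F @ n))  # squared stretch along this fibre
        if I4 > 1.0:                      # keep only fibres in tension
            total += rho * psi_f(I4)
    return total

# Placeholder quadratic fibre energy (k1 is a hypothetical stiffness constant).
psi = lambda I4, k1=1.0: 0.5 * k1 * (I4 - 1.0) ** 2
```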

  1. EFFECTIVENESS OF ADJUVANT USE OF POSTERIOR MANUAL COMPRESSION WITH GRADED COMPRESSION IN THE SONOGRAPHIC DIAGNOSIS OF ACUTE APPENDICITIS

    Directory of Open Access Journals (Sweden)

    Senthilnathan V

    2018-01-01

    Full Text Available BACKGROUND Diagnosing appendicitis by graded compression ultrasonogram is a difficult task because of limiting factors such as operator-dependent technique, retrocaecal location of the appendix and patient obesity. The posterior manual compression technique visualizes the appendix better in the grey-scale ultrasonogram. The aim of this study is to determine the accuracy of ultrasound in detecting or excluding acute appendicitis and to evaluate the usefulness of the adjuvant use of the posterior manual compression technique in visualization of the appendix and in the diagnosis of acute appendicitis. MATERIALS AND METHODS This prospective study involved a total of 240 patients of all age groups and both sexes. All these patients underwent USG for suspected appendicitis. Ultrasonography was performed with transverse and longitudinal graded compression sonography. If the appendix was not visualized on graded compression sonography, the posterior manual compression technique was used to further improve the detection of the appendix. RESULTS The vermiform appendix was visualized in 185 patients (77.1%) out of 240 with graded compression alone. The 55 patients whose appendix could not be visualized by graded compression alone were subjected to graded compression followed by the posterior manual compression technique; the appendix was then visualized in 43 of these patients (78.2% of cases) and could not be visualized in the remaining 12 patients (21.8% out of 55). CONCLUSION The combined method of graded compression with posterior manual compression is better than the graded compression technique alone in diagnostic accuracy and detection rate of the vermiform appendix.

  2. Deriving average soliton equations with a perturbative method

    International Nuclear Information System (INIS)

    Ballantyne, G.J.; Gough, P.T.; Taylor, D.P.

    1995-01-01

    The method of multiple scales is applied to periodically amplified, lossy media described by either the nonlinear Schroedinger (NLS) equation or the Korteweg--de Vries (KdV) equation. An existing result for the NLS equation, derived in the context of nonlinear optical communications, is confirmed. The method is then applied to the KdV equation and the result is confirmed numerically
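
    For orientation, one common dimensionless form of the periodically amplified, lossy NLS studied in this setting reads (Γ is the distributed loss and g(z) the periodic gain; the paper's exact normalization may differ):

    $$ i\,u_z + \tfrac{1}{2}\,u_{tt} + |u|^2 u \;=\; -\,i\,\Gamma\,u + i\,g(z)\,u $$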

  3. Chapter 22: Compressed Air Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Benton, Nathanael [Nexant, Inc., San Francisco, CA (United States); Burns, Patrick [Nexant, Inc., San Francisco, CA (United States)

    2017-10-18

    Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: High-efficiency/variable speed drive (VSD) compressor replacing modulating, load/unload, or constant-speed compressor; and Compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.

  4. Assessment of high-resolution methods for numerical simulations of compressible turbulence with shock waves

    International Nuclear Information System (INIS)

    Johnsen, Eric; Larsson, Johan; Bhagatwala, Ankit V.; Cabot, William H.; Moin, Parviz; Olson, Britton J.; Rawat, Pradeep S.; Shankar, Santhosh K.; Sjoegreen, Bjoern; Yee, H.C.; Zhong Xiaolin; Lele, Sanjiva K.

    2010-01-01

    Flows in which shock waves and turbulence are present and interact dynamically occur in a wide range of applications, including inertial confinement fusion, supernova explosions, and scramjet propulsion. Accurate simulations of such problems are challenging because of the contradictory requirements of numerical methods used to simulate turbulence, which must minimize any numerical dissipation that would otherwise overwhelm the small scales, and shock-capturing schemes, which introduce numerical dissipation to stabilize the solution. The objective of the present work is to evaluate the performance of several numerical methods capable of simultaneously handling turbulence and shock waves. A comprehensive range of high-resolution methods (WENO, hybrid WENO/central difference, artificial diffusivity, adaptive characteristic-based filter, and shock fitting) and a suite of test cases (Taylor-Green vortex, Shu-Osher problem, shock-vorticity/entropy wave interaction, Noh problem, compressible isotropic turbulence) relevant to problems with shocks and turbulence are considered. The results indicate that the WENO methods provide sharp shock profiles, but overwhelm the physical dissipation. The hybrid method is minimally dissipative and leads to sharp shocks and well-resolved broadband turbulence, but relies on an appropriate shock sensor. Artificial diffusivity methods in which the artificial bulk viscosity is based on the magnitude of the strain-rate tensor resolve vortical structures well but damp dilatational modes in compressible turbulence; dilatation-based artificial bulk viscosity methods significantly improve this behavior. For well-defined shocks, the shock fitting approach yields good results.

  5. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

    Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

  6. Compresso: Efficient Compression of Segmentation Data for Connectomics

    KAUST Repository

    Matejek, Brian

    2017-09-03

    Recent advances in segmentation methods for connectomics and biomedical imaging produce very large datasets with labels that assign object classes to image pixels. The resulting label volumes are bigger than the raw image data and need compression for efficient storage and transfer. General-purpose compression methods are less effective because the label data consists of large low-frequency regions with structured boundaries unlike natural image data. We present Compresso, a new compression scheme for label data that outperforms existing approaches by using a sliding window to exploit redundancy across border regions in 2D and 3D. We compare our method to existing compression schemes and provide a detailed evaluation on eleven biomedical and image segmentation datasets. Our method provides a factor of 600–2200x compression for label volumes, with running times suitable for practice.

  7. Investigation of GDL compression effects on the performance of a PEM fuel cell cathode by lattice Boltzmann method

    Science.gov (United States)

    Molaeimanesh, G. R.; Nazemian, M.

    2017-08-01

    Proton exchange membrane (PEM) fuel cells have great potential for application in vehicle propulsion systems. However, further fundamental research is needed to overcome the existing challenges to their wider commercialization. The effects of gas diffusion layer (GDL) compression on the performance of a PEM fuel cell are not well understood, especially via pore-scale simulation techniques that capture the fibrous microstructure of the GDL. In the current investigation, a stochastic microstructure reconstruction method is proposed which can capture GDL microstructure changes under compression. Afterwards, the lattice Boltzmann pore-scale simulation technique is adopted to simulate the reactive gas flow through 10 different cathode electrodes with dissimilar carbon paper GDLs produced from five different compression levels and two different carbon fiber diameters. The distributions of oxygen mole fraction, water vapor mole fraction and current density for the simulated cases are presented and analyzed. The results of the simulations demonstrate that when the fiber diameter is 9 μm, adding compression leads to lower average current density, while when the fiber diameter is 7 μm the compression effect is not monotonic.

  8. Stress analysis of shear/compression test

    International Nuclear Information System (INIS)

    Nishijima, S.; Okada, T.; Ueno, S.

    1997-01-01

    Stress analysis has been made of glass fiber reinforced plastics (GFRP) subjected to combined shear and compression stresses by means of the finite element method. Two types of experimental set-up were analyzed, namely the parallel and series methods, in which the specimens were compressed by tilted jigs that enable the combined stresses to be applied. The modified Tsai-Hill criterion was employed to judge failure under the combined stresses, that is, the shear strength under compressive stress. Different failure envelopes were obtained for the two set-ups. In the parallel system the shear strength first increased with compressive stress and then decreased; on the contrary, in the series system the shear strength decreased monotonically with compressive stress. The difference is caused by the different stress distributions due to the different constraint conditions. The basic parameters which control failure under the combined stresses will be discussed

  9. A Space-Frequency Data Compression Method for Spatially Dense Laser Doppler Vibrometer Measurements

    Directory of Open Access Journals (Sweden)

    José Roberto de França Arruda

    1996-01-01

    Full Text Available When spatially dense mobility shapes are measured with scanning laser Doppler vibrometers, it is often impractical to use phase-separation modal parameter estimation methods due to the excessive number of highly coupled modes and to the prohibitive computational cost of processing huge amounts of data. To deal with this problem, a data compression method using Chebychev polynomial approximation in the frequency domain and two-dimensional discrete Fourier series approximation in the spatial domain is proposed in this article. The proposed space-frequency regressive approach was implemented and verified using a numerical simulation of a free-free-free-free suspended rectangular aluminum plate. To make the simulation more realistic, the mobility shapes were synthesized by modal superposition using mode shapes obtained experimentally with a scanning laser Doppler vibrometer. A reduced and smoothed model, which takes advantage of the sinusoidal spatial pattern of structural mobility shapes and the polynomial frequency-domain pattern of the mobility shapes, is obtained. From the reduced model, smoothed curves with any desired frequency and spatial resolution can be produced whenever necessary. The procedure can be used either to generate nonmodal models or to compress the measured data prior to modal parameter extraction.
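
    The frequency-domain stage can be sketched with NumPy's Chebyshev utilities; the degree and the data below are placeholders, and the two-dimensional spatial Fourier stage is omitted:

```python
import numpy as np

# Fit a Chebyshev series to one mobility curve sampled on a frequency grid,
# then evaluate the smoothed model on a denser grid.
freq = np.linspace(10.0, 500.0, 256)                          # placeholder axis [Hz]
mobility = np.random.default_rng(0).normal(size=freq.size)    # placeholder data

coeffs = np.polynomial.chebyshev.chebfit(freq, mobility, deg=20)
dense = np.linspace(10.0, 500.0, 2048)
smoothed = np.polynomial.chebyshev.chebval(dense, coeffs)
```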

  10. Comparison of transform coding methods with an optimal predictor for the data compression of digital elevation models

    Science.gov (United States)

    Lewis, Michael

    1994-01-01

    Statistical encoding techniques enable the reduction of the number of bits required to encode a set of symbols, and are derived from their probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange Multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
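
    A compact Huffman-codebook construction of the kind used here; in this application the symbols would be prediction residuals rather than raw elevations:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman codebook {symbol: bitstring} from an iterable of
    symbols (degenerate single-symbol input gets the empty code)."""
    counts = Counter(symbols)
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)  # unique tiebreaker so code dicts are never compared
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]
```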

  11. Compressing Data Cube in Parallel OLAP Systems

    Directory of Open Access Journals (Sweden)

    Frank Dehne

    2007-03-01

    Full Text Available This paper proposes an efficient algorithm to compress the cubes during parallel data cube generation. This low-overhead compression mechanism provides block-by-block and record-by-record compression by using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for the Hilbert Space Filling Curve, a mechanism widely used in multi-dimensional indexing.
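
    The tuple-difference idea reduces to delta coding of sorted integer tuples; a simplified sketch that omits the paper's block/record framing and bit packing:

```python
def delta_encode(tuples):
    """Encode a sorted list of equal-length integer tuples as the first
    tuple plus componentwise differences to the previous tuple."""
    if not tuples:
        return []
    out = [tuple(tuples[0])]
    for prev, cur in zip(tuples, tuples[1:]):
        out.append(tuple(c - p for p, c in zip(prev, cur)))
    return out

def delta_decode(encoded):
    """Invert delta_encode by cumulative summation."""
    if not encoded:
        return []
    out = [tuple(encoded[0])]
    for diff in encoded[1:]:
        out.append(tuple(p + d for p, d in zip(out[-1], diff)))
    return out
```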

  12. A Proposal for Kelly Criterion-Based Lossy Network Compression

    Science.gov (United States)

    2016-03-01

    detection applications. Most of these applications only send alerts to the central analysis servers. These alerts do not provide the forensic capability...based intrusion detection systems. These systems tend to examine the individual system's audit logs looking for intrusive activity. The notable

  13. The study of lossy compressive method with different interpolation for holographic reconstruction in optical scanning holography

    Directory of Open Access Journals (Sweden)

    HU Zhijuan

    2015-08-01

    Full Text Available We study cosmological inflation models driven by the rolling tachyon field, which has a Born-Infeld-type action. We derive the Hamilton-Jacobi equation for the cosmological dynamics of tachyon inflation and the mode equations for the scalar and tensor perturbations of the tachyon field and spacetime; a solution under the slow-roll condition is then given. In the end, a realistic model from string theory is discussed.

  14. Discrete Wigner Function Reconstruction and Compressed Sensing

    OpenAIRE

    Zhang, Jia-Ning; Fang, Lei; Ge, Mo-Lin

    2011-01-01

    A new reconstruction method for the Wigner function is reported for quantum tomography based on compressed sensing. By analogy with computed tomography, Wigner functions for some quantum states can be reconstructed with fewer measurements using this compressed-sensing-based method.
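
    Such reconstructions are typically posed as l1-regularized least squares; a minimal iterative soft-thresholding (ISTA) sketch, where A, y and lam are placeholders rather than the paper's measurement setup:

```python
import numpy as np

def ista(A, y, lam=0.1, steps=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```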

  15. Schlieren method diagnostics of plasma compression in front of coaxial gun

    International Nuclear Information System (INIS)

    Kravarik, J.; Kubes, P.; Hruska, J.; Bacilek, J.

    1983-01-01

    The schlieren method employing a movable knife edge placed in the focal plane of a laser beam was used for the diagnostics of plasma produced by a coaxial plasma gun. When compared with the interferometric method reported earlier, spatial resolution was improved by more than one order of magnitude. In the determination of electron density near the gun orifice, spherical symmetry of the current sheath inhomogeneities and cylindrical symmetry of the compression maximum were assumed. Radial variation of electron density could be reconstructed from the photometric measurements of the transversal variation of schlieren light intensity. Due to small plasma dimensions, electron density was determined directly from the knife edge shift necessary for shadowing the corresponding part of the picture. (J.U.)

  16. n-Gram-Based Text Compression

    Science.gov (United States)

    Duong, Hieu N.; Snasel, Vaclav

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly based on its corresponding n-gram dictionary. We collected 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five grams and achieve dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files with different sizes. The experimental results indicate that our method achieves compression ratio around 90% and outperforms state-of-the-art methods. PMID:27965708

  17. n-Gram-Based Text Compression

    Directory of Open Access Journals (Sweden)

    Vu H. Nguyen

    2016-01-01

    Full Text Available We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly based on its corresponding n-gram dictionary. We collected 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five grams and achieve dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files with different sizes. The experimental results indicate that our method achieves compression ratio around 90% and outperforms state-of-the-art methods.
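
    A toy version of the greedy sliding-window encoder; a plain dict mapping n-grams to integer ids stands in for the paper's 2-4-byte dictionary codes:

```python
def encode_ngrams(words, dictionary, max_n=5):
    """Greedily replace the longest known n-gram (from five-grams down to
    bigrams) at each position with its dictionary id; unknown words pass
    through verbatim."""
    out, i = [], 0
    while i < len(words):
        for n in range(min(max_n, len(words) - i), 1, -1):
            gram = " ".join(words[i:i + n])
            if gram in dictionary:
                out.append(dictionary[gram])
                i += n
                break
        else:
            out.append(words[i])
            i += 1
    return out
```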

  18. Shock absorbing properties of toroidal shells under compression, 3

    International Nuclear Information System (INIS)

    Sugita, Yuji

    1985-01-01

    The author has previously presented the static load-deflection relations of a toroidal shell subjected to axisymmetric compression between rigid plates and those of its outer half when subjected to lateral compression. In both these cases, the analytical method was based on the incremental Rayleigh-Ritz method. In this paper, the effects of compression angle and strain rate on the load-deflection relations of the toroidal shell are investigated for its use as a shock absorber for the radioactive material shipping cask which must keep its structural integrity even after accidental falls at any angle. Static compression tests have been carried out at four angles of compression, 10°, 20°, 50° and 90°, and the applications of the preceding analytical method have been discussed. Dynamic compression tests have also been performed using the free-falling drop hammer. The results are compared with those in the static compression tests. (author)

  19. Evaluation of a new image compression technique

    International Nuclear Information System (INIS)

    Algra, P.R.; Kroon, H.M.; Noordveld, R.B.; DeValk, J.P.J.; Seeley, G.W.; Westerink, P.H.

    1988-01-01

    The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in the diagnostic contents between originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish compressed versions from original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen

  20. Isostatic compression of buffer blocks. Middle scale

    International Nuclear Information System (INIS)

    Ritola, J.; Pyy, E.

    2012-01-01

    Manufacturing of buffer components using the isostatic compression method was studied at small scale in 2008 (Laaksonen 2010). These tests included manufacturing of buffer blocks using different bentonite materials and different compression pressures. Isostatic mould technology was also tested, along with different methods to fill the mould, such as vibration and partial vacuum, as well as stepwise compression of the blocks. The development of manufacturing techniques continued with small-scale (30 %) blocks (diameter 600 mm) in 2009, in a separate project: Isostatic compression, manufacturing and testing of small scale (D = 600 mm) buffer blocks. The research on the isostatic compression method continued in 2010 in a project aimed to test and examine the isostatic manufacturing process of buffer blocks at 70 % scale (block diameter 1200 to 1300 mm), and the aim was to continue in 2011 with full-scale blocks (diameter 1700 mm). A total of nine bentonite blocks were manufactured at 70 % scale, of which four were ring-shaped and the rest were cylindrical. It is currently not possible to manufacture full-scale blocks, because there is no sufficiently large isostatic press available; however, such a compression unit is expected to become available in the near future. The test results of bentonite blocks, produced with the isostatic pressing method at different presses and at different sizes, suggest that the technical characteristics, for example bulk density and strength values, are somewhat independent of the size of the block, and that the blocks have fairly homogeneous characteristics. Water content and compression pressure are the two most important properties determining the characteristics of the compressed blocks; by adjusting these two it is fairly easy to produce blocks at a desired density. The commonly used compression pressure in the manufacturing of bentonite blocks is 100 MPa, which compresses bentonite to approximately

  1. Isostatic compression of buffer blocks. Middle scale

    Energy Technology Data Exchange (ETDEWEB)

    Ritola, J.; Pyy, E. [VTT Technical Research Centre of Finland, Espoo (Finland)

    2012-01-15

    Manufacturing of buffer components using the isostatic compression method was studied at small scale in 2008 (Laaksonen 2010). These tests included manufacturing of buffer blocks using different bentonite materials and different compression pressures. Isostatic mould technology was also tested, along with different methods to fill the mould, such as vibration and partial vacuum, as well as stepwise compression of the blocks. The development of manufacturing techniques continued with small-scale (30 %) blocks (diameter 600 mm) in 2009, in a separate project: Isostatic compression, manufacturing and testing of small scale (D = 600 mm) buffer blocks. The research on the isostatic compression method continued in 2010 in a project aimed to test and examine the isostatic manufacturing process of buffer blocks at 70 % scale (block diameter 1200 to 1300 mm), and the aim was to continue in 2011 with full-scale blocks (diameter 1700 mm). A total of nine bentonite blocks were manufactured at 70 % scale, of which four were ring-shaped and the rest were cylindrical. It is currently not possible to manufacture full-scale blocks, because there is no sufficiently large isostatic press available; however, such a compression unit is expected to become available in the near future. The test results of bentonite blocks, produced with the isostatic pressing method at different presses and at different sizes, suggest that the technical characteristics, for example bulk density and strength values, are somewhat independent of the size of the block, and that the blocks have fairly homogeneous characteristics. Water content and compression pressure are the two most important properties determining the characteristics of the compressed blocks; by adjusting these two it is fairly easy to produce blocks at a desired density. The commonly used compression pressure in the manufacturing of bentonite blocks is 100 MPa, which compresses bentonite to approximately

  2. Graph Compression by BFS

    Directory of Open Access Journals (Sweden)

    Alberto Apostolico

    2009-08-01

    Full Text Available The Web Graph is a large-scale graph that does not fit in main memory, so lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval of the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs. Tests on some datasets achieve space savings of about 10% over existing methods.

  3. A comparison of sputum induction methods: ultrasonic vs compressed-air nebulizer and hypertonic vs isotonic saline inhalation.

    Science.gov (United States)

    Loh, L C; Eg, K P; Puspanathan, P; Tang, S P; Yip, K S; Vijayasingham, P; Thayaparan, T; Kumar, S

    2004-03-01

    Airway inflammation can be demonstrated by the modern method of sputum induction using an ultrasonic nebulizer and hypertonic saline. We studied whether a compressed-air nebulizer and isotonic saline, which are commonly available and cost less, are as effective in inducing sputum in normal adult subjects as the above-mentioned tools. Sixteen subjects underwent weekly sputum induction in the following manner: ultrasonic nebulizer (Medix Sonix 2000, Clement Clarke, UK) using hypertonic saline, ultrasonic nebulizer using isotonic saline, compressed-air nebulizer (BestNeb, Taiwan) using hypertonic saline, and compressed-air nebulizer using isotonic saline. Overall, the use of an ultrasonic nebulizer and hypertonic saline yielded significantly higher total sputum cell counts and a higher percentage of cell viability than compressed-air nebulizers and isotonic saline. With the latter, there was a trend towards squamous cell contamination. The proportion of various sputum cell types was not significantly different between the groups, and the reproducibility in sputum macrophages and neutrophils was high (intraclass correlation coefficient, r [95% CI]: 0.65 [0.30-0.91] and 0.58 [0.22-0.89], p < 0.05), including with compressed-air nebulizers and isotonic saline. We conclude that in normal subjects, although both nebulizer and saline types can induce sputum with a reproducible cellular profile, ultrasonic nebulizers and hypertonic saline are more effective but less well tolerated.

  4. Diffuse-Interface Capturing Methods for Compressible Two-Phase Flows

    Science.gov (United States)

    Saurel, Richard; Pantano, Carlos

    2018-01-01

    Simulation of compressible flows became a routine activity with the appearance of shock-/contact-capturing methods. These methods can determine all waves, particularly discontinuous ones. However, additional difficulties may appear in two-phase and multimaterial flows due to the abrupt variation of thermodynamic properties across the interfacial region, with discontinuous thermodynamical representations at the interfaces. To overcome this difficulty, researchers have developed augmented systems of governing equations to extend the capturing strategy. These extended systems, reviewed here, are termed diffuse-interface models, because they are designed to compute flow variables correctly in numerically diffused zones surrounding interfaces. In particular, they facilitate coupling the dynamics on both sides of the (diffuse) interfaces and tend to the proper pure fluid-governing equations far from the interfaces. This strategy has become efficient for contact interfaces separating fluids that are governed by different equations of state, in the presence or absence of capillary effects, and with phase change. More sophisticated materials than fluids (e.g., elastic-plastic materials) have been considered as well.

  5. The research of optimal selection method for wavelet packet basis in compressing the vibration signal of a rolling bearing in fans and pumps

    International Nuclear Information System (INIS)

    Hao, W; Jinji, G

    2012-01-01

    Compressing the vibration signal of a rolling bearing is of great significance for the wireless monitoring and remote diagnosis of fans and pumps, which are widely used in the petrochemical industry. In this paper, according to the characteristics of the vibration signal of a rolling bearing, a compression method based on the optimal selection of the wavelet packet basis is proposed. We analyze several main attributes of wavelet packet bases and their effect on the compression of the vibration signal at various compression ratios, and propose a method to precisely select a wavelet packet basis. Through an actual signal, we come to the conclusion that an orthogonal wavelet packet basis with low vanishing moments should be used to compress the vibration signal of a rolling bearing in order to obtain an accurate energy proportion between the feature bands in the spectrum of the reconstructed signal. Among these low-vanishing-moment orthogonal wavelet packet bases, the 'coif' wavelet packet basis obtains the best signal-to-noise ratio at a given compression ratio owing to its symmetry.
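
    A minimal PyWavelets sketch of wavelet-packet compression with a low-vanishing-moment 'coif' basis; the keep-largest-coefficients rule is a placeholder, not the paper's basis-selection procedure:

```python
import numpy as np
import pywt

def compress_signal(x, wavelet="coif1", level=4, keep=0.1):
    """Wavelet-packet transform, keep the largest `keep` fraction of
    coefficients, zero the rest, and reconstruct the signal."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric",
                            maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    coeffs = np.concatenate([n.data for n in nodes])
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep)  # magnitude cutoff
    for n in nodes:
        n.data = np.where(np.abs(n.data) >= thresh, n.data, 0.0)
    return wp.reconstruct(update=False)
```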

  6. High-quality compressive ghost imaging

    Science.gov (United States)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces the undersampling noise and improves the resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a minimization problem. The simulation and experimental results show that our method obtains high ghost imaging quality in terms of PSNR and visual observation.
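
    The projected Landweber iteration at the core of such schemes is short; a sketch with a non-negativity projection standing in for the constraint set (the guided-filter denoising stage is omitted):

```python
import numpy as np

def projected_landweber(A, y, steps=200, tau=None):
    """x_{k+1} = P( x_k + tau * A^T (y - A x_k) ), with P a projection
    onto the constraint set (here: non-negativity via clipping)."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size for convergence
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x + tau * (A.T @ (y - A @ x))       # Landweber gradient step
        x = np.clip(x, 0.0, None)               # projection step
    return x
```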

  7. Streaming Compression of Hexahedral Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB), with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  8. Limiting density ratios in piston-driven compressions

    International Nuclear Information System (INIS)

    Lee, S.

    1985-07-01

    By using global energy and pressure balance applied to a shock model, it is shown that for a piston-driven fast compression the maximum compression ratio does not depend on the absolute magnitude of the piston power, but rather on the shape of the power pulse. Specific cases are considered, and a maximum density compression ratio of 27 is obtained for a square-pulse power compressing a spherical pellet with a specific heat ratio of 5/3. Double pulsing enhances the density compression ratio to 1750 in the case of linearly rising compression pulses. Using this method, further enhancement by multiple pulsing becomes obvious. (author)

  9. Data compression of digital X-ray images from a clinical viewpoint

    International Nuclear Information System (INIS)

    Ando, Yutaka

    1992-01-01

    For a PACS (picture archiving and communication system), large-capacity storage media and a fast data transfer network are necessary; once a PACS is in operation, these technological requirements become a serious problem. We therefore need image data compression to improve the recording efficiency of the media and the transmission ratio. There are two kinds of data compression methods: reversible and irreversible. With reversible compression, the compressed-then-expanded image is exactly equal to the original image, and the compression ratio is about 1/2 to 1/3. With irreversible compression, on the other hand, the compressed-then-expanded image is distorted, but a high compression ratio can be achieved. In the medical field, the discrete cosine transform (DCT) method is popular because of its low distortion and fast performance; compression ratios of 1/10 to 1/20 are achieved in practice. It is important to choose the compression ratio according to the purpose and modality of the image, and it must be selected carefully because the suitable ratio differs depending on whether the image is used for education, clinical diagnosis or reference. (author)

  10. Methods for compressible fluid simulation on GPUs using high-order finite differences

    Science.gov (United States)

    Pekkilä, Johannes; Väisälä, Miikka S.; Käpylä, Maarit J.; Käpylä, Petri J.; Anjum, Omer

    2017-08-01

    We focus on implementing and optimizing a sixth-order finite-difference solver for simulating compressible fluids on a GPU using third-order Runge-Kutta integration. Since graphics processing units perform well in data-parallel tasks, this makes them an attractive platform for fluid simulation. However, high-order stencil computation is memory-intensive with respect to both main memory and the caches of the GPU. We present two approaches for simulating compressible fluids using 55-point and 19-point stencils. We seek to reduce the requirements for memory bandwidth and cache size in our methods by using cache blocking and decomposing a latency-bound kernel into several bandwidth-bound kernels. Our fastest implementation is bandwidth-bound and integrates 343 million grid points per second on a Tesla K40t GPU, achieving a 3.6× speedup over a comparable hydrodynamics solver benchmarked on two Intel Xeon E5-2690v3 processors. Our alternative GPU implementation is latency-bound and achieves the rate of 168 million updates per second.
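
    The sixth-order central first-derivative stencil evaluated by such kernels has standard coefficients; a NumPy reference version for a periodic 1D grid (the GPU kernels, cache blocking and Runge-Kutta integration are beyond this sketch):

```python
import numpy as np

def ddx_6th(f: np.ndarray, dx: float) -> np.ndarray:
    """Sixth-order central difference d/dx on a periodic 1D grid."""
    c1, c2, c3 = 3.0 / 4.0, -3.0 / 20.0, 1.0 / 60.0
    return (c1 * (np.roll(f, -1) - np.roll(f, 1))
            + c2 * (np.roll(f, -2) - np.roll(f, 2))
            + c3 * (np.roll(f, -3) - np.roll(f, 3))) / dx
```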

  11. Lossy transmission line model of hydrofractured well dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Patzek, T.W. [Department of Materials Science and Mineral Engineering, University of California at Berkeley, Berkeley, CA (United States); De, A. [Earth Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2000-01-01

    The real-time detection of hydrofracture growth is crucial to the successful operation of water, CO{sub 2} or steam injection wells in low-permeability reservoirs and to the prevention of subsidence and well failure. In this paper, we describe propagation of very low frequency (1-10 to 100 Hz) Stoneley waves in a fluid-filled wellbore and their interactions with the fundamental wave mode in a vertical hydrofracture. We demonstrate that the Stoneley wave loses energy to the fracture and that the energy transfer from the wellbore to the fracture opening is most efficient in soft rocks. We conclude that placing the wave source and receivers beneath the injection packer provides the most efficient means of hydrofracture monitoring. We then present the lossy transmission line model of wellbore and fracture for the detection and characterization of fracture state and volume. We show that this model captures the wellbore and fracture geometry, the physical properties of the injected fluid and the wellbore-fracture system dynamics. The model is then compared with experimentally measured well responses. The simulated responses are in good agreement with published experimental data from several water injection wells with depths ranging from 1000 ft to 9000 ft. Hence, we conclude that the transmission line model of water injectors adequately captures wellbore and fracture dynamics. Using an extensive data set for the South Belridge Diatomite waterfloods, we demonstrate that even for very shallow wells the fracture size and state can be adequately recognized at the wellhead. Finally, we simulate the effects of hydrofracture extension on the transient response to a pulse signal generated at the wellhead. We show that hydrofracture extensions can indeed be detected by monitoring the wellhead pressure at sufficiently low frequencies.
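
    The transmission line analogy rests on the telegrapher's equations, with per-unit-length resistance R, inductance L, conductance G and capacitance C; the model maps wellbore and fracture properties onto these coefficients:

    $$ \frac{\partial V}{\partial x} = -\Big(R + L\,\partial_t\Big)\,I, \qquad \frac{\partial I}{\partial x} = -\Big(G + C\,\partial_t\Big)\,V $$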

  12. Exploiting of the Compression Methods for Reconstruction of the Antenna Far-Field Using Only Amplitude Near-Field Measurements

    Directory of Open Access Journals (Sweden)

    J. Puskely

    2010-06-01

    Full Text Available The novel approach exploits the principle of conventional two-plane amplitude measurements for the reconstruction of the unknown electric field distribution on the antenna aperture. The method combines a global optimization with a compression method: the global optimization method (GO) is used to minimize the functional, and the compression method is used to reduce the number of unknown variables. The algorithm employs the Real Coded Genetic Algorithm (RCGA) as the global optimization approach. The Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are applied to reduce the number of unknown variables. Pros and cons of both methods for solving the problem are investigated and reported. In order to make the algorithm faster, exploitation of amplitudes from a single scanning plane is also discussed. First, the algorithm is used to obtain an initial estimate; subsequently, the common Fourier iterative algorithm is used to reach the global minimum with sufficient accuracy. The method is verified by measuring a dish antenna.
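
    The role of the compression step can be sketched in a few lines: the optimizer searches only a small block of low-order 2D DCT coefficients, which is expanded to a full aperture field for each fitness evaluation (a simplification of the paper's RCGA/DCT/DWT machinery):

```python
import numpy as np
from scipy.fft import idctn

def field_from_coeffs(coeffs_small, full_shape):
    """Expand a small block of low-order 2D DCT coefficients (the
    optimizer's unknowns) into a full-resolution aperture field."""
    full = np.zeros(full_shape)
    r, c = coeffs_small.shape
    full[:r, :c] = coeffs_small
    return idctn(full, norm="ortho")

# Example: an 8x8 coefficient block parameterizes a 64x64 aperture,
# cutting the number of unknowns the genetic algorithm searches by 64x.
aperture = field_from_coeffs(np.random.default_rng(1).normal(size=(8, 8)),
                             (64, 64))
```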

  13. Indium tin oxide refractometer in the visible and near infrared via lossy mode and surface plasmon resonances with Kretschmann configuration

    International Nuclear Information System (INIS)

    Torres, V.; Beruete, M.; Sánchez, P.; Del Villar, I.

    2016-01-01

    An indium tin oxide (ITO) refractometer based on the generation of lossy mode resonances (LMRs) and surface plasmon resonances (SPRs) is presented. Both LMRs and SPRs are excited, in a single setup, under grazing angle incidence with Kretschmann configuration in an ITO thin-film deposited on a glass slide. The sensing capabilities of the device are demonstrated using several solutions of glycerin and water with refractive indices ranging from 1.33 to 1.47. LMRs are excited in the visible range, from 617 nm to 682 nm under TE polarization and from 533 nm to 637 nm under TM polarization, with a maximum sensitivity of 700 nm/RIU and 1200 nm/RIU, respectively. For the SPRs, a sensing range between 1375 nm and 2494 nm with a maximum sensitivity of 8300 nm/RIU is measured under TM polarization. Experimental results are supported with numerical simulations based on a modification of the plane-wave method for a one-dimensional multilayer waveguide

  14. Indium tin oxide refractometer in the visible and near infrared via lossy mode and surface plasmon resonances with Kretschmann configuration

    Energy Technology Data Exchange (ETDEWEB)

    Torres, V. [Antenna Group–TERALAB, Public University of Navarra, 31006 Pamplona (Spain); Beruete, M. [Antenna Group–TERALAB, Public University of Navarra, 31006 Pamplona (Spain); Institute of Smart Cities, Public University of Navarra, 31006 Pamplona (Spain); Sánchez, P. [Department of Electric and Electronic Engineering, Public University of Navarra, Pamplona 31006 (Spain); Del Villar, I. [Institute of Smart Cities, Public University of Navarra, 31006 Pamplona (Spain); Department of Electric and Electronic Engineering, Public University of Navarra, Pamplona 31006 (Spain)

    2016-01-25

    An indium tin oxide (ITO) refractometer based on the generation of lossy mode resonances (LMRs) and surface plasmon resonances (SPRs) is presented. Both LMRs and SPRs are excited, in a single setup, under grazing angle incidence with Kretschmann configuration in an ITO thin-film deposited on a glass slide. The sensing capabilities of the device are demonstrated using several solutions of glycerin and water with refractive indices ranging from 1.33 to 1.47. LMRs are excited in the visible range, from 617 nm to 682 nm under TE polarization and from 533 nm to 637 nm under TM polarization, with a maximum sensitivity of 700 nm/RIU and 1200 nm/RIU, respectively. For the SPRs, a sensing range between 1375 nm and 2494 nm with a maximum sensitivity of 8300 nm/RIU is measured under TM polarization. Experimental results are supported with numerical simulations based on a modification of the plane-wave method for a one-dimensional multilayer waveguide.

  15. Interpolation decoding method with variable parameters for fractal image compression

    International Nuclear Information System (INIS)

    He Chuanjiang; Li Gaoping; Shen Xiaona

    2007-01-01

    The interpolation fractal decoding method introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13] generates the decoded image progressively by means of an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added in the first steps of iteration in conventional fractal decoding; hence the constant parameter of the interpolation decoding method must be set to a small value in order to achieve good progressive decoding. However, it then takes an extremely large number of iterations to converge. It is thus reasonable for some applications to slow down the iterative process at the first stages of decoding and then to accelerate it afterwards (e.g., at whatever iteration we choose). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal

  16. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Science.gov (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data is very sensitive and cannot tolerate any illegal change; analysis based on illegally altered images could result in wrong medical decisions. Digital watermarking can be used to authenticate images and to detect as well as recover illegal changes made to teleradiology images. However, watermarking medical images with heavy-payload watermarks causes perceptual degradation of the image, which directly affects medical diagnosis. To maintain the perceptual and diagnostic quality of the image during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression of the watermark reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and the image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio; LZW was found to perform better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery.
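
    A minimal LZW compressor of the kind applied to the watermark payload, with the dictionary initialized to single bytes and integer codes as output:

```python
def lzw_compress(data: bytes) -> list[int]:
    """Lempel-Ziv-Welch: grow a phrase dictionary while emitting codes."""
    dictionary = {bytes([i]): i for i in range(256)}
    phrase, out = b"", []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate
        else:
            out.append(dictionary[phrase])
            dictionary[candidate] = len(dictionary)  # assign next free code
            phrase = bytes([byte])
    if phrase:
        out.append(dictionary[phrase])
    return out
```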

  17. Generalized massive optimal data compression

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
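
    In this scheme the n compressed statistics are simply the components of the score evaluated at a fiducial parameter point θ∗:

    $$ \mathbf{t}(\mathbf{d}) \;=\; \nabla_{\boldsymbol\theta}\, \ln \mathcal{L}(\mathbf{d};\,\boldsymbol\theta)\,\big|_{\boldsymbol\theta_*} $$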

  18. A multiscale method for compressible liquid-vapor flow with surface tension*

    Directory of Open Access Journals (Sweden)

    Jaegle Felix

    2013-01-01

    Full Text Available Discontinuous Galerkin methods have become a powerful tool for approximating the solution of compressible flow problems. Their direct use for two-phase flow problems with phase transformation is not straightforward because this type of flows requires a detailed tracking of the phase front. We consider the fronts in this contribution as sharp interfaces and propose a novel multiscale approach. It combines an efficient high-order Discontinuous Galerkin solver for the computation in the bulk phases on the macro-scale with the use of a generalized Riemann solver on the micro-scale. The Riemann solver takes into account the effects of moderate surface tension via the curvature of the sharp interface as well as phase transformation. First numerical experiments in three space dimensions underline the overall performance of the method.

  19. Augmented Lagrangian Method and Compressible Visco-plastic Flows: Applications to Shallow Dense Avalanches

    Science.gov (United States)

    Bresch, D.; Fernández-Nieto, E. D.; Ionescu, I. R.; Vigneaux, P.

    In this paper we propose a well-balanced finite volume/augmented Lagrangian method for compressible visco-plastic models, focusing on a compressible Bingham-type system with applications to dense avalanches. For the sake of completeness we also present a derivation showing that such a system may be obtained for a shallow flow of a rigid-viscoplastic incompressible fluid, namely an incompressible Bingham-type fluid with free surface. When the fluid is relatively shallow and spreads slowly, lubrication-style asymptotic approximations can be used to build reduced models for the spreading dynamics, see for instance [N.J. Balmforth et al., J. Fluid Mech (2002)]. When the motion is a little quicker, shallow water theory for non-Newtonian flows may be applied, for instance assuming a Navier-type boundary condition at the bottom. We start from the variational inequality for an incompressible Bingham fluid and derive a shallow water type system. In the case where the Bingham number and viscosity are set to zero we obtain the classical Shallow Water or Saint-Venant equations obtained for instance in [J.F. Gerbeau, B. Perthame, DCDS (2001)]. For numerical purposes, we focus on the model that is one-dimensional in space: we study associated static solutions with sufficient conditions that relate the slope of the bottom with the Bingham number and domain dimensions. We also propose a well-balanced finite volume/augmented Lagrangian method, combining well-balanced finite volume schemes for spatial discretization with the augmented Lagrangian method to treat the associated optimization problem. Finally, we present various numerical tests.

  20. Studies on improvement of diagnostic ability of computed tomography (CT) in the parenchymatous organs in the upper abdomen, 1. Study on the upper abdominal compression method

    Energy Technology Data Exchange (ETDEWEB)

    Kawata, Ryo [Gifu Univ. (Japan). Faculty of Medicine

    1982-07-01

    1) The upper abdominal compression method was easily applicable for CT examination in practically all patients. It caused no harm and considerably improved CT diagnosis. 2) The materials used for compression were foamed polystyrene, the Mix-Dp and a water bag. When CT examination was performed to diagnose such lesions as a circumscribed tumor, compression with the Mix-Dp was most useful; when it was performed for screening examination of upper abdominal diseases, compression with a water bag was most effective. 3) Improvement in the contour-depicting ability of CT by the compression method was most marked at the body of the pancreas, followed by the head of the pancreas and the posterior surface of the left lobe of the liver. Slight improvement was also seen at the tail of the pancreas and the left adrenal gland. 4) Improvement in the organ-depicting ability of CT by the compression method was estimated by a 4-category classification method. The improvement was most marked at the body and the head of the pancreas. Considerable improvement was also observed at the left lobe of the liver and both adrenal glands. Little improvement was obtained at the spleen. When contrast enhancement was combined with the compression method, improvement was promoted at organs liable to be enhanced, such as the liver and the adrenal glands, while the organ-depicting ability was decreased at the pancreas. 5) By comparing CT images under compression with those without compression, continuous infiltration of gastric cancer into the body and the tail of the pancreas in 2 cases and retroperitoneal infiltration of a pancreatic tumor in 1 case were diagnosed preoperatively.

  1. Reaction kinetics, reaction products and compressive strength of ternary activators activated slag designed by Taguchi method

    NARCIS (Netherlands)

    Yuan, B.; Yu, Q.L.; Brouwers, H.J.H.

    2015-01-01

    This study investigates the reaction kinetics, the reaction products and the compressive strength of slag activated by ternary activators, namely waterglass, sodium hydroxide and sodium carbonate. Nine mixtures are designed by the Taguchi method considering the factors of sodium carbonate content

  2. DNABIT Compress – Genome compression algorithm

    OpenAIRE

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences, based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our ...

  3. Subband/Transform MATLAB Functions For Processing Images

    Science.gov (United States)

    Glover, D.

    1995-01-01

    SUBTRANS software is package of routines implementing image-data-processing functions for use with MATLAB (TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems. For example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.

  4. Gaussian multiscale aggregation applied to segmentation in hand biometrics.

    Science.gov (United States)

    de Santos Sierra, Alberto; Avila, Carmen Sánchez; Casanova, Javier Guerra; del Pozo, Gonzalo Bailador

    2011-01-01

    This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.

  5. Macron Formed Liner Compression as a Practical Method for Enabling Magneto-Inertial Fusion

    Energy Technology Data Exchange (ETDEWEB)

    Slough, John

    2011-12-10

    The entry of fusion as a viable, competitive source of power has been stymied by the challenge of finding an economical way to provide for the confinement and heating of the plasma fuel. The main impediment for current nuclear fusion concepts is the complexity and large mass associated with the confinement systems. To take advantage of the smaller scale, higher density regime of magnetic fusion, an efficient method for achieving the compressional heating required to reach fusion gain conditions must be found. The very compact, high energy density plasmoid commonly referred to as a Field Reversed Configuration (FRC) provides an ideal target for this purpose. To make fusion with the FRC practical, an efficient method for repetitively compressing the FRC to fusion gain conditions is required. A novel approach to be explored in this endeavor is to remotely launch a converging array of small macro-particles (macrons) that merge and form a more massive liner inside the reactor, which then radially compresses and heats the FRC plasmoid to fusion conditions. The closed magnetic field in the target FRC plasmoid suppresses thermal transport to the confining liner, significantly lowering the imploding power needed to compress the target. With the momentum flux being delivered by an assemblage of low mass, but high velocity macrons, many of the difficulties encountered with the liner implosion power technology are eliminated. The undertaking to be described in this proposal is to evaluate the feasibility of achieving fusion conditions from this simple and low-cost approach to fusion. During phase I the design and testing of the key components for the creation of the macron formed liner have been successfully carried out. Detailed numerical calculations of the merging, formation and radial implosion of the Macron Formed Liner (MFL) were also performed. The phase II effort will focus on an experimental demonstration of the macron launcher at full power, and the demonstration

  6. Diagnostic methods and interpretation of the experiments on microtarget compression in the Iskra-4 device

    International Nuclear Information System (INIS)

    Kochemasov, G.G.

    1992-01-01

    Studies on the problem of laser fusion, based mainly on experiments conducted in the Iskra-4 device, are reviewed. Different approaches to the problem of DT-fuel ignition, and methods for diagnosing the characteristics of the laser radiation and of the plasma produced during microtarget heating and compression, are considered

  7. A compressed sensing based method with support refinement for impulse noise cancelation in DSL

    KAUST Repository

    Quadeer, Ahmed Abdul

    2013-06-01

    This paper presents a compressed sensing based method to suppress impulse noise in digital subscriber line (DSL). The proposed algorithm exploits the sparse nature of the impulse noise and utilizes the carriers, already available in all practical DSL systems, for its estimation and cancelation. Specifically, compressed sensing is used for a coarse estimate of the impulse position, a maximum a posteriori probability (MAP) metric based on a priori information for its refinement, followed by least squares (LS) or minimum mean square error (MMSE) estimation for estimating the impulse amplitudes. Simulation results show that the proposed scheme achieves higher rate as compared to other known sparse estimation algorithms in the literature. The paper also demonstrates the superior performance of the proposed scheme compared to the ITU-T G992.3 standard that utilizes RS-coding for impulse noise refinement in DSL signals. © 2013 IEEE.

  8. Analysis and development of adjoint-based h-adaptive direct discontinuous Galerkin method for the compressible Navier-Stokes equations

    Science.gov (United States)

    Cheng, Jian; Yue, Huiqiang; Yu, Shengjiao; Liu, Tiegang

    2018-06-01

    In this paper, an adjoint-based high-order h-adaptive direct discontinuous Galerkin method is developed and analyzed for the two-dimensional steady-state compressible Navier-Stokes equations. Particular emphasis is devoted to the analysis of the adjoint consistency for three different direct discontinuous Galerkin discretizations: the original direct discontinuous Galerkin method (DDG), the direct discontinuous Galerkin method with interface correction (DDG(IC)) and the symmetric direct discontinuous Galerkin method (SDDG). Theoretical analysis shows that the extra interface correction term adopted in the DDG(IC) method and the SDDG method plays a key role in preserving the adjoint consistency. To be specific, for the model problem considered in this work, we prove that the original DDG method is not adjoint consistent, while the DDG(IC) method and the SDDG method can be adjoint consistent with appropriate treatment of boundary conditions and correct modifications towards the underlying output functionals. The performance of those three DDG methods is carefully investigated and evaluated through typical test cases. Based on the theoretical analysis, an adjoint-based h-adaptive DDG(IC) method is further developed and evaluated; numerical experiments show its potential in applications of adjoint-based adaptation for simulating compressible flows.

  9. Antennas in inhomogeneous media

    CERN Document Server

    Galejs, Janis; Fock, V A; Wait, J R

    2013-01-01

    Antennas in Inhomogeneous Media details the methods of analyzing antennas in such inhomogeneous media. The title covers the complex geometrical configurations along with its variational formulations. The coverage of the text includes various conditions the antennas are subjected to, such as antennas in the interface between two media; antennas in compressible isotropic plasma; and linear antennas in a magnetoionic medium. The selection also covers insulated loops in lossy media; slot antennas with a stratified dielectric or isotropic plasma layers; and cavity-backed slot antennas. The book wil

  10. An Applied Method for Predicting the Load-Carrying Capacity in Compression of Thin-Wall Composite Structures with Impact Damage

    Science.gov (United States)

    Mitrofanov, O.; Pavelko, I.; Varickis, S.; Vagele, A.

    2018-03-01

    The necessity for considering both strength criteria and postbuckling effects in calculating the load-carrying capacity in compression of thin-wall composite structures with impact damage is substantiated. An original applied method is developed that ensures solution of these problems with an accuracy sufficient for practical design tasks. The main advantage of the method is its applicability in terms of computing resources and the set of initial data required. The results of application of the method to the problem of compression of fragments of a thin-wall honeycomb panel damaged by impacts of various energies are presented. After a comparison of calculation results with experimental data, a working algorithm for calculating the reduction in the load-carrying capacity of a composite object with impact damage is adopted.

  11. Acceleration methods for multi-physics compressible flow

    Science.gov (United States)

    Peles, Oren; Turkel, Eli

    2018-04-01

    In this work we investigate the Runge-Kutta (RK)/implicit smoother scheme as a convergence accelerator for complex multi-physics flow problems, including turbulent, reactive and also two-phase flows. The flows considered are subsonic, transonic and supersonic flows in complex geometries, and can be either steady or unsteady. All of these problems are considered to be very stiff. We then introduce an acceleration method for the compressible Navier-Stokes equations. We start with the multigrid method for pure subsonic flow, including reactive flows. We then add the Rossow-Swanson-Turkel RK/implicit smoother that enables performing all these complex flow simulations with a reasonable CFL number. We next discuss the RK/implicit smoother for time-dependent problems and also for low Mach numbers. The preconditioner includes an intrinsic low-Mach-number treatment inside the smoother operator. We also develop a modified Roe scheme with a corresponding flux Jacobian matrix. We then give the extension of the method for real gas and reactive flow. Reactive flows are governed by a system of inhomogeneous Navier-Stokes equations with very stiff source terms. The extension of the RK/implicit smoother requires an approximation of the source-term Jacobian. The properties of the Jacobian are very important for the stability of the method. We discuss what the chemical physics theory of chemical kinetics tells us about the mathematical properties of the Jacobian matrix. We focus on the implication of Le Chatelier's principle for the sign of the diagonal entries of the Jacobian. We present the implementation of the method for turbulent flow. We use two RANS turbulence models: the one-equation Spalart-Allmaras model and the two-equation k-ω SST model. The last extension is for two-phase flows with a gas as the main phase and an Eulerian representation of a dispersed particle phase (EDP). We present some examples of such flow computations inside a ballistic evaluation

  12. Discovery of Empirical Components by Information Theory

    Science.gov (United States)

    2016-08-10

    ... simply trying to design the SVD of the measurement matrix. We have developed a generalization of Bregman divergence to unify vector Poisson and ... deals with the lossy compression of random sources. Shannon's famous rate-distortion theorem relates the encoding rate R and the expected

  13. A novel perceptually adaptive image watermarking scheme by ...

    African Journals Online (AJOL)

    Threshold and modification value were selected adaptively for each image block, which improved robustness and transparency. The proposed algorithm was able to withstand a variety of attacks and image processing operations, such as rotation, cropping, noise addition, resizing and lossy compression. The experimental ...

  14. A modified compressible smoothed particle hydrodynamics method and its application on the numerical simulation of low and high velocity impacts

    International Nuclear Information System (INIS)

    Amanifard, N.; Haghighat Namini, V.

    2012-01-01

    In this study a Modified Compressible Smoothed Particle Hydrodynamics method is introduced which is applicable to problems involving shock wave structures and elastic-plastic deformations of solids. The algorithm of the method is based on an approach which discretizes the momentum equation into three parts, solves each part separately and calculates their effects on the velocity field and displacement of particles. The most distinctive feature of the method is that it exactly removes the artificial viscosity from the formulations and shows good agreement with other established numerical methods, without rigorous numerical fractures or tensile instabilities and without any extra modifications. Two types of problems involving elastic-plastic deformations and shock waves are presented here to demonstrate the capability of the method in simulating such problems and its ability to capture shocks. The problems proposed here are low and high velocity impacts between aluminum projectiles and semi-infinite aluminum beams. An elastic-perfectly plastic model is chosen as the constitutive model for the aluminum, and the results of the simulations are compared with other relevant studies of these cases.

  15. Using the Maturity Method in Predicting the Compressive Strength of Vinyl Ester Polymer Concrete at an Early Age

    Directory of Open Access Journals (Sweden)

    Nan Ji Jin

    2017-01-01

    The compressive strength of vinyl ester polymer concrete is predicted using the maturity method. The compressive strength increased rapidly until a curing age of 24 h and thereafter increased slowly until a curing age of 72 h. As the MMA content increased, the compressive strength decreased. Furthermore, as the curing temperature decreased, the compressive strength decreased. For vinyl ester polymer concrete, the datum temperature, ranging from −22.5 to −24.6°C, decreased as the MMA content increased. The maturity index equation for cement concrete cannot be applied to polymer concrete, and the maturity of vinyl ester polymer concrete can only be estimated through control of the time interval Δt. Thus, this study introduced a suitable scaled-down factor (n) for the determination of polymer concrete's maturity, and a factor of 0.3 was found to be the most suitable. Also, among the dose-response models, the DR-HILL model was determined to be applicable for compressive strength prediction of vinyl ester polymer concrete. For the parameters of the prediction model, applying parameters obtained by combining all data from the three different MMA contents was deemed acceptable. The study results could be useful for the quality control of vinyl ester polymer concrete and nondestructive prediction of early-age strength.
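
    The record does not give the modified maturity equation explicitly; a plausible Nurse-Saul-type reading, with the datum temperature and n = 0.3 taken from the abstract but the functional form assumed, is sketched below.

        # Hypothetical Nurse-Saul-type maturity index with a scaled-down
        # time interval; the exact form used in the paper is not given here.
        def maturity_index(temps_c, dt_hours=1.0, datum_c=-23.0, n=0.3):
            """Sum (T - T0) * dt**n over hourly curing temperatures."""
            return sum((t - datum_c) * dt_hours**n for t in temps_c)

        # Example: 24 hourly readings at a constant 20 degC curing temperature.
        print(maturity_index([20.0] * 24))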

  16. High-speed photographic methods for compression dynamics investigation of laser irradiated shell target

    International Nuclear Information System (INIS)

    Basov, N.G.; Kologrivov, A.A.; Krokhin, O.N.; Rupasov, A.A.; Shikanov, A.S.

    1979-01-01

    Three methods are described for high-speed diagnostics of the compression dynamics of shell targets spherically heated by laser on the installation ''Kal'mar''. The first method is based on direct investigation of the space-time evolution of the critical-density region for Nd-laser emission (N_e ≈ 10^21 cm^-3) by means of streak photography of the plasma image in second-harmonic light. The second method involves investigation of the time evolution of the second-harmonic spectral distribution by means of a spectrograph coupled with a streak camera. The use of a special laser pulse with two time-separated intensity maxima for the irradiation of shell targets, and the analysis of the obtained X-ray pin-hole pictures, constitute the basis of the third method. (author)

  17. Investigation of Surface Pre-Treatment Methods for Wafer-Level Cu-Cu Thermo-Compression Bonding

    Directory of Open Access Journals (Sweden)

    Koki Tanaka

    2016-12-01

    To increase the yield of the wafer-level Cu-Cu thermo-compression bonding method, surface pre-treatment methods for Cu are studied that allow the wafers to be exposed to the atmosphere before bonding. To inhibit re-oxidation under atmospheric conditions, the reduced pure Cu surface is treated by H2/Ar plasma, NH3 plasma or thiol solution, and is thereby covered by Cu hydride, Cu nitride or a self-assembled monolayer (SAM), respectively. A pair of the treated wafers is then bonded by the thermo-compression bonding method and evaluated by the tensile test. Results show that the bond strengths of the wafers treated by NH3 plasma and SAM are not sufficient, due to the remaining surface protection layers (Cu nitride and SAMs) resulting from the pre-treatment. In contrast, the H2/Ar plasma-treated wafer showed the same strength as one treated with formic acid vapor, even when exposed to the atmosphere for 30 min. In thermal desorption spectroscopy (TDS) measurements of the H2/Ar plasma-treated Cu sample, the total amount of detected H2 was 3.1 times that of the citric acid-treated one. The TDS results indicate that the modified Cu surface is terminated by chemisorbed hydrogen atoms, which leads to high bonding strength.

  18. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel compression algorithm that allows transparent incorporation of various estimates for the probability distribution. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.

  19. The Effects of Design Strength, Fly Ash Content and Curing Method on Compressive Strength of High Volume Fly Ash Concrete: A Design of Experimental

    Directory of Open Access Journals (Sweden)

    Solikin Mochamad

    2017-01-01

    High volume fly ash concrete has become one of the alternatives for producing green concrete, as it uses waste material and significantly reduces the utilization of Portland cement in concrete production. Although it uses less cement, its compressive strength is comparable to that of ordinary Portland cement (hereafter OPC) concrete and its durability increases significantly. This paper reports an investigation of the effect of design strength, fly ash content and curing method on the compressive strength of high volume fly ash concrete. The experiment and data analysis were prepared using Minitab, a statistical software package for design of experiments. The specimens were concrete cylinders with a diameter of 15 cm and a height of 30 cm, tested for compressive strength at 56 days. The results demonstrate that high volume fly ash concrete can achieve a compressive strength which meets the OPC design strength, especially for high strength concrete. In addition, the best mix proportion for achieving the design strength is the combination of high strength concrete and 50% fly ash content. Moreover, the use of the spraying method for curing concrete on site is still recommended, as it does not significantly reduce the compressive strength.

  20. Efficient solution of the non-linear Reynolds equation for compressible fluid using the finite element method

    DEFF Research Database (Denmark)

    Larsen, Jon Steffen; Santos, Ilmar

    2015-01-01

    An efficient finite element scheme for solving the non-linear Reynolds equation for compressible fluid coupled to compliant structures is presented. The method is general and fast and can be used in the analysis of airfoil bearings with simplified or complex foil structure models. To illustrate...

  1. Parallelization of one image compression method. Wavelet, Transform, Vector Quantization and Huffman Coding

    International Nuclear Information System (INIS)

    Moravie, Philippe

    1997-01-01

    Today, in the domain of digitized satellite images, the need for high resolution is increasing considerably. To transmit or store such images (more than 6000 by 6000 pixels), their data volume must be reduced, which calls for real-time image compression techniques. The large amount of computation required by image compression algorithms prohibits the use of common sequential processors, in favor of parallel computers. The study presented here deals with the parallelization of a very efficient image compression scheme based on three techniques: Wavelet Transform (WT), Vector Quantization (VQ) and Entropic Coding (EC). First, we studied and implemented the parallelism of each algorithm, in order to determine the architectural characteristics needed for real-time image compression. Then, we defined eight parallel architectures: 3 for the Mallat algorithm (WT), 3 for Tree-Structured Vector Quantization (VQ) and 2 for Huffman Coding (EC). As our system has to be multi-purpose, we chose 3 global architectures among all of the 3x3x2 available combinations. Because, for technological reasons, real time is not always reached (for all combinations of compression parameters), we also defined and evaluated two algorithmic optimizations: fixed-point precision and merging of entropic coding into vector quantization. As a result, we defined a new multi-purpose multi-SMIMD parallel machine able to compress digitized satellite images in real time. The question of the best-suited architecture for real-time image compression was answered by presenting 3 parallel machines, among which one is multi-purpose, embedded and might be used for other on-board applications. (author) [fr

  2. Method for Multiple Targets Tracking in Cognitive Radar Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Yang Jun

    2016-02-01

    A multiple-target cognitive radar tracking method based on Compressed Sensing (CS) is proposed. In this method, the theory of CS is introduced into the cognitive radar tracking process in a multiple-target scenario. The echo signal is expressed sparsely; the designs of the sparsifying matrix and the measurement matrix follow from this sparse representation, and subsequently the reconstruction of the measurement signal under the down-sampling condition is realized. On the receiving end, considering that the traditional particle filter suffers from degeneracy and requires a large number of particles, a particle swarm optimization particle filter is used to track the targets. On the transmitting end, the Posterior Cramér-Rao Bound (PCRB) on the tracking accuracy is derived, and the radar waveform parameters are then cognitively designed using the PCRB. Simulation results show that the proposed method not only reduces the data quantity, but also provides better tracking performance compared with the traditional method.

  3. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical image data has grown rapidly, yet commonly used compression methods do not give satisfactory results. Methods: In this paper, building on established experiments and conclusions, the lifting-step approach is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account and the contrast sensitivity function (CSF) is introduced as the main research issue in the human visual system (HVS), and the main design points of the HVS model are presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS model, including the CSF characteristics, to the decorrelating transform and quantization stages, and proposes a new HVS-based medical image compression model. Results: The experiments are performed on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality, and performs better than SPIHT in terms of compression ratio and coding/decoding time.
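
    The record does not say which CSF model is used; one widely used choice for wavelet-domain HVS weighting, given here purely as an illustrative assumption, is the Mannos-Sakrison curve, with f the radial spatial frequency in cycles/degree:

        A(f) = 2.6\,(0.0192 + 0.114 f)\,\exp\!\left[ -(0.114 f)^{1.1} \right].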

  4. HVS-based medical image compression

    International Nuclear Information System (INIS)

    Kai Xie; Jie Yang; Min Zhuyue; Liang Lixiao

    2005-01-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical image data has grown rapidly, yet commonly used compression methods do not give satisfactory results. Methods: In this paper, building on established experiments and conclusions, the lifting-step approach is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account and the contrast sensitivity function (CSF) is introduced as the main research issue in the human visual system (HVS), and the main design points of the HVS model are presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS model, including the CSF characteristics, to the decorrelating transform and quantization stages, and proposes a new HVS-based medical image compression model. Results: The experiments are performed on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality, and performs better than SPIHT in terms of compression ratio and coding/decoding time

  5. Compression of Probabilistic XML Documents

    Science.gov (United States)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMS, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained with a combination of the PXML-specific technique with a rather simple generic DAG-compression technique.
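
    As a rough illustration of the generic DAG-compression idea mentioned above (a sketch only; the paper's PXML-specific push-down technique is not reproduced), identical subtrees can be shared by interning each node on its structure:

        # Share identical XML subtrees by interning nodes in a table keyed on
        # (tag, child ids); repeated subtrees are stored once, yielding a DAG.
        def intern_tree(tag, children, table):
            key = (tag, tuple(children))
            if key not in table:
                table[key] = len(table)   # assign a fresh node id
            return table[key]

        table = {}
        a1 = intern_tree("a", [], table)
        a2 = intern_tree("a", [], table)           # deduplicated: same id as a1
        root = intern_tree("r", [a1, a2], table)
        print(len(table))                          # 2 distinct nodes instead of 3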

  6. Tailoring properties of lossy-mode resonance optical fiber sensors with atomic layer deposition technique

    Science.gov (United States)

    Kosiel, Kamil; Koba, Marcin; Masiewicz, Marcin; Śmietana, Mateusz

    2018-06-01

    The paper shows the application of the atomic layer deposition (ALD) technique as a tool for tailoring the sensing properties of lossy-mode-resonance (LMR)-based optical fiber sensors. Hafnium dioxide (HfO2), zirconium dioxide (ZrO2) and tantalum oxide (TaxOy), high-refractive-index dielectrics that are particularly convenient for LMR-sensor fabrication, were deposited by low-temperature (100 °C) ALD, ensuring safe conditions for thermally vulnerable fibers. The applicability of HfO2 and ZrO2 overlays, deposited with the atomic-level thickness accuracy inherent to ALD, for the fabrication of LMR sensors with controlled sensing properties is presented. Additionally, for the first time to the best of our knowledge, a double-layer overlay composed of two different materials - silicon nitride (SixNy) and TaxOy - is presented for LMR fiber sensors. The thin films of this overlay were deposited by two different techniques: PECVD (the SixNy) and ALD (the TaxOy). This approach ensures fast overlay fabrication and, at the same time, the ability to tune the resonant wavelength, yielding devices with satisfactory sensing properties.

  7. Data Collection Method for Mobile Control Sink Node in Wireless Sensor Network Based on Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Ling Yongfa

    2016-01-01

    The paper proposes a data collection method for a mobile control sink node in a wireless sensor network based on compressive sensing. This method, with the sink following a regular track, selects the optimal data collection points in the monitoring area via the disc method, calculates the shortest path by using a quantum genetic algorithm, and hence determines the data collection route. Simulation results show that this method achieves higher network throughput and better energy efficiency, and is capable of collecting a large amount of data with balanced energy consumption in the network.

  8. Three-dimensional numerical simulation for plastic injection-compression molding

    Science.gov (United States)

    Zhang, Yun; Yu, Wenjie; Liang, Junjie; Lang, Jianlin; Li, Dequn

    2018-03-01

    Compared with conventional injection molding, injection-compression molding can mold optical parts with higher precision and lower residual flow stress. However, the melt flow process in a closed cavity becomes more complex because of the moving cavity boundary during compression and the nonlinear problems caused by the non-Newtonian polymer melt. In this study, a 3D simulation method was developed for injection-compression molding. In this method, an arbitrary Lagrangian-Eulerian formulation was introduced to model the moving-boundary flow problem in the compression stage. The non-Newtonian characteristics and compressibility of the polymer melt were considered. The melt flow and pressure distribution in the cavity were investigated by using the proposed simulation method and compared with those of injection molding. Results reveal that the fountain flow effect becomes significant when the cavity thickness increases during compression. The back flow also plays an important role in the flow pattern and redistribution of cavity pressure. The discrepancy in pressures at different points along the flow path evolves in a complicated manner, rather than decreasing monotonically as in injection molding.

  9. Optimization of the segmented method for optical compression and multiplexing system

    Science.gov (United States)

    Al Falou, Ayman

    2002-05-01

    Because of the constantly increasing demand for image exchange, and despite the ever-increasing bandwidth of networks, compression and multiplexing of images are becoming inseparable from their generation and display. For high-resolution real-time motion pictures, performing compression electronically requires complex and time-consuming processing units. By contrast, owing to its inherently two-dimensional character, coherent optics is well suited to such processes, which are basically two-dimensional data handling in the Fourier domain. Additionally, the main limiting factor, the maximum frame rate, is vanishing because of recent improvements in spatial light modulator technology. The purpose of this communication is to benefit from recent optical correlation algorithms. The segmented filtering used to store multiple references in a given space-bandwidth-product optical filter can be applied to networks to compress and multiplex images in a given bandwidth channel.

  10. Spectral Element Method for the Simulation of Unsteady Compressible Flows

    Science.gov (United States)

    Diosady, Laslo Tibor; Murman, Scott M.

    2013-01-01

    This work uses a discontinuous-Galerkin spectral-element method (DGSEM) to solve the compressible Navier-Stokes equations [1-3]. The inviscid flux is computed using the approximate Riemann solver of Roe [4]. The viscous fluxes are computed using the second form of Bassi and Rebay (BR2) [5] in a manner consistent with the spectral-element approximation. The method of lines with the classical 4th-order explicit Runge-Kutta scheme is used for time integration. Results for polynomial orders up to p = 15 (16th order) are presented. The code is parallelized using the Message Passing Interface (MPI). The computations presented in this work are performed using the Sandy Bridge nodes of the NASA Pleiades supercomputer at NASA Ames Research Center. Each Sandy Bridge node consists of 2 eight-core Intel Xeon E5-2670 processors with a clock speed of 2.6 GHz and 2 GB per-core memory. On a Sandy Bridge node the Tau Benchmark [6] runs in a time of 7.6 s.
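
    For reference, the classical 4th-order Runge-Kutta scheme mentioned above takes the following generic form (a minimal sketch of the textbook scheme, not of the DGSEM code itself):

        # One classical explicit RK4 step for du/dt = f(t, u).
        def rk4_step(f, t, u, dt):
            k1 = f(t, u)
            k2 = f(t + dt / 2, u + dt / 2 * k1)
            k3 = f(t + dt / 2, u + dt / 2 * k2)
            k4 = f(t + dt, u + dt * k3)
            return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        # Example: integrate du/dt = -u from u(0) = 1 to t = 1.
        u, t, dt = 1.0, 0.0, 0.1
        for _ in range(10):
            u = rk4_step(lambda t, u: -u, t, u, dt)
            t += dt
        print(u)  # ~0.3679 = exp(-1)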

  11. An efficient finite differences method for the computation of compressible, subsonic, unsteady flows past airfoils and panels

    Science.gov (United States)

    Colera, Manuel; Pérez-Saborid, Miguel

    2017-09-01

    A finite-difference scheme is proposed in this work to compute in the time domain the compressible, subsonic, unsteady flow past an aerodynamic airfoil using linearized potential theory. It improves and extends the original method proposed in this journal by Hariharan, Ping and Scott [1] by considering: (i) a non-uniform mesh, (ii) an implicit time integration algorithm, (iii) a vectorized implementation and (iv) the coupled airfoil dynamics and fluid dynamic loads. First, we have formulated the method for cases in which the airfoil motion is given. The scheme has been tested on well-known problems in unsteady aerodynamics - such as the response to a sudden change of the angle of attack and to a harmonic motion of the airfoil - and has proved to be more accurate and efficient than other finite-difference and vortex-lattice methods found in the literature. Secondly, we have coupled our method to the equations governing the airfoil dynamics in order to numerically solve problems where the airfoil motion is unknown a priori, as happens, for example, in the cases of the flutter and the divergence of a typical section of a wing or of a flexible panel. Apparently, this is the first self-consistent and easy-to-implement numerical analysis in the time domain of the compressible, linearized coupled dynamics of the (generally flexible) airfoil-fluid system carried out in the literature. The results for the particular case of a rigid airfoil show excellent agreement with those reported by other authors, whereas those obtained for the case of a cantilevered flexible airfoil in compressible flow seem to be original or, at least, not well known.

  12. Standard test method for compressive (crushing) strength of fired whiteware materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2006-01-01

    1.1 This test method covers two test procedures (A and B) for the determination of the compressive strength of fired whiteware materials. 1.2 Procedure A is generally applicable to whiteware products of low- to moderately high-strength levels (up to 150 000 psi or 1030 MPa). 1.3 Procedure B is specifically devised for testing of high-strength ceramics (over 100 000 psi or 690 MPa). 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  13. DELIMINATE--a fast and efficient method for loss-less compression of genomic sequences: sequence analysis.

    Science.gov (United States)

    Mohammed, Monzoorul Haque; Dutta, Anirban; Bose, Tungadri; Chadaram, Sudha; Mande, Sharmila S

    2012-10-01

    An unprecedented quantity of genome sequence data is currently being generated using next-generation sequencing platforms. This has necessitated the development of novel bioinformatics approaches and algorithms that not only facilitate a meaningful analysis of these data but also aid in efficient compression, storage, retrieval and transmission of huge volumes of the generated data. We present a novel compression algorithm (DELIMINATE) that can rapidly compress genomic sequence data in a loss-less fashion. Validation results indicate relatively higher compression efficiency of DELIMINATE when compared with popular general purpose compression algorithms, namely, gzip, bzip2 and lzma. Linux, Windows and Mac implementations (both 32 and 64-bit) of DELIMINATE are freely available for download at: http://metagenomics.atc.tcs.com/compression/DELIMINATE. sharmila@atc.tcs.com Supplementary data are available at Bioinformatics online.
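
    The general-purpose baselines named above all ship with the Python standard library, so a toy ratio comparison can be reproduced as follows (illustrative only; this says nothing about DELIMINATE itself, and real genomes compress far less than this periodic string):

        import bz2, gzip, lzma

        data = ("ACGT" * 250_000).encode()   # toy genomic-like input, 1 MB
        for name, compress in [("gzip", gzip.compress),
                               ("bzip2", bz2.compress),
                               ("lzma", lzma.compress)]:
            out = compress(data)
            print(f"{name}: {len(data) / len(out):.1f}x")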

  14. Comparison of changes in tidal volume associated with expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation

    OpenAIRE

    Morino, Akira; Shida, Masahiro; Tanaka, Masashi; Sato, Kimihiro; Seko, Toshiaki; Ito, Shunsuke; Ogawa, Shunichi; Takahashi, Naoaki

    2015-01-01

    [Purpose] This study was designed to compare and clarify the relationship between expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation, with a focus on tidal volume. [Subjects and Methods] The subjects were 18 patients on prolonged mechanical ventilation, who had undergone tracheostomy. Each patient received expiratory rib cage compression and expiratory abdominal compression; the order of implementation was randomized. Subjects ...

  15. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminace correction and optimized prediction

    Science.gov (United States)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing natural, real scenes as we see them in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, because more information must be displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the correlation between the two images. This is done by designing an efficient transform that reduces the redundancy in the stereo image pair. The approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
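
    To make the lifting idea concrete, a minimal Haar-like predict/update pair is sketched below; in the paper the plain prediction is replaced by disparity compensation plus luminance correction, which is not shown here:

        # One forward lifting step: split into even/odd samples, predict the
        # odd samples from the even ones, then update the even samples.
        def lifting_forward(x):
            even, odd = x[0::2], x[1::2]
            detail = [o - e for e, o in zip(even, odd)]         # predict
            approx = [e + d / 2 for e, d in zip(even, detail)]  # update
            return approx, detail

        approx, detail = lifting_forward([2, 3, 4, 4, 8, 10])
        print(approx, detail)   # coarse signal and residuals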

  16. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning

    International Nuclear Information System (INIS)

    Guthier, C; Aschenbrenner, K P; Buergy, D; Ehmann, M; Wenz, F; Hesser, J W

    2015-01-01

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced. (paper)

  17. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning

    Science.gov (United States)

    Guthier, C.; Aschenbrenner, K. P.; Buergy, D.; Ehmann, M.; Wenz, F.; Hesser, J. W.

    2015-03-01

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.
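
    The record does not detail the solver; as a generic illustration of the matching-pursuit family it builds on, a minimal variant is sketched below (the dose model and clinical constraints of the paper are not represented):

        import numpy as np

        # Greedy matching pursuit: approximate y as a sparse combination of
        # columns ("atoms") of the dictionary D.
        def matching_pursuit(D, y, n_iter=10):
            r = y.astype(float).copy()
            coef = np.zeros(D.shape[1])
            for _ in range(n_iter):
                k = np.argmax(np.abs(D.T @ r))           # best-correlated atom
                a = (D[:, k] @ r) / (D[:, k] @ D[:, k])  # projection coefficient
                coef[k] += a
                r -= a * D[:, k]
            return coef

        rng = np.random.default_rng(0)
        D = rng.standard_normal((20, 50))
        y = 3 * D[:, 5] - 2 * D[:, 17]
        c = matching_pursuit(D, y, n_iter=30)
        print(c[5], c[17])   # should be close to 3 and -2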

  18. System using data compression and hashing adapted for use for multimedia encryption

    Science.gov (United States)

    Coffland, Douglas R [Livermore, CA

    2011-07-12

    A system and method is disclosed for multimedia encryption. Within the system of the present invention, a data compression module receives and compresses a media signal into a compressed data stream. A data acquisition module receives and selects a set of data from the compressed data stream. And, a hashing module receives and hashes the set of data into a keyword. The method of the present invention includes the steps of compressing a media signal into a compressed data stream; selecting a set of data from the compressed data stream; and hashing the set of data into a keyword.
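
    A minimal sketch of the compress/select/hash pipeline described above is given below; the specific algorithms and the byte-selection rule are illustrative assumptions, not those of the patent:

        import hashlib, zlib

        def media_keyword(media_bytes: bytes) -> str:
            stream = zlib.compress(media_bytes)          # data compression module
            subset = stream[::16]                        # data acquisition: sample bytes
            return hashlib.sha256(subset).hexdigest()    # hashing module -> keyword

        print(media_keyword(b"example media payload" * 1000))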

  19. Coabsorbent and thermal recovery compression heat pumping technologies

    CERN Document Server

    Staicovici, Mihail-Dan

    2014-01-01

    This book introduces two of the most exciting heat pumping technologies, the coabsorbent and the thermal recovery (mechanical vapor) compression, characterized by a high potential in primary energy savings and environmental protection. New cycles with potential applications of nontruncated, truncated, hybrid truncated, and multi-effect coabsorbent types are introduced in this work.   Thermal-to-work recovery compression (TWRC) is the first of two particular methods explored here, including how superheat is converted into work, which diminishes the compressor work input. In the second method, thermal-to-thermal recovery compression (TTRC), the superheat is converted into useful cooling and/or heating, and added to the cycle output effect via the coabsorbent technology. These and other methods of discharge gas superheat recovery are analyzed for single-, two-, three-, and multi-stage compression cooling and heating, ammonia and ammonia-water cycles, and the effectiveness results are given.  The author presen...

  20. Gaussian Multiscale Aggregation Applied to Segmentation in Hand Biometrics

    Directory of Open Access Journals (Sweden)

    Gonzalo Bailador del Pozo

    2011-11-01

    This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in the literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.

  1. An Embedded Ghost-Fluid Method for Compressible Flow in Complex Geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali

    2016-06-03

    We present an embedded ghost-fluid method for numerical solutions of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. The PDE multidimensional extrapolation approach of Aslam [1] is used to reconstruct the solution in the ghost-fluid regions and impose boundary conditions at the fluid-solid interface. The CNS equations are numerically solved by the second order multidimensional upwind method of Colella [2] and Saltzman [3]. Block-structured adaptive mesh refinement implemented under the Chombo framework is utilized to reduce the computational cost while keeping high-resolution mesh around the embedded boundary and regions of high gradient solutions. Numerical examples with different Reynolds numbers for low and high Mach number flow will be presented. We compare our simulation results with other reported experimental and computational results. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well. © 2016 Trans Tech Publications.

  2. An Embedded Ghost-Fluid Method for Compressible Flow in Complex Geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali; Samtaney, Ravi

    2016-01-01

    We present an embedded ghost-fluid method for numerical solutions of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. The PDE multidimensional extrapolation approach of Aslam [1] is used to reconstruct the solution in the ghost-fluid regions and impose boundary conditions at the fluid-solid interface. The CNS equations are numerically solved by the second order multidimensional upwind method of Colella [2] and Saltzman [3]. Block-structured adaptive mesh refinement implemented under the Chombo framework is utilized to reduce the computational cost while keeping high-resolution mesh around the embedded boundary and regions of high gradient solutions. Numerical examples with different Reynolds numbers for low and high Mach number flow will be presented. We compare our simulation results with other reported experimental and computational results. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well. © 2016 Trans Tech Publications.

  3. Minimal invasive stabilization of osteoporotic vertebral compression fractures. Methods and preinterventional diagnostics

    International Nuclear Information System (INIS)

    Grohs, J.G.; Krepler, P.

    2004-01-01

    Minimally invasive stabilization represents a new alternative for the treatment of osteoporotic compression fractures. Vertebroplasty and balloon kyphoplasty are two methods to enhance the strength of osteoporotic vertebral bodies by means of cement application. Vertebroplasty is the older and technically easier method. Balloon kyphoplasty is the newer and more expensive method, which not only improves pain but also restores the sagittal profile of the spine. By balloon kyphoplasty, the height of 101 fractured vertebral bodies could be increased up to 90% and the wedge angle decreased from 12 to 7 degrees. Pain was reduced from 7.2 to 2.5 points. The Oswestry disability index decreased from 60 to 26 points. These effects persisted over a period of two years. Cement leakage occurred in only 2% of vertebral bodies. Fractures of adjacent vertebral bodies were found in 11%. Good preinterventional diagnostics and intraoperative imaging are necessary to make balloon kyphoplasty a successful procedure. (orig.) [de

  4. Bystander fatigue and CPR quality by older bystanders: a randomized crossover trial comparing continuous chest compressions and 30:2 compressions to ventilations.

    Science.gov (United States)

    Liu, Shawn; Vaillancourt, Christian; Kasaboski, Ann; Taljaard, Monica

    2016-11-01

    This study sought to measure bystander fatigue and cardiopulmonary resuscitation (CPR) quality after five minutes of CPR using the continuous chest compression (CCC) versus the 30:2 chest compression to ventilation method in older lay persons, the population most likely to perform CPR on cardiac arrest victims. This randomized crossover trial took place at three tertiary care hospitals and a seniors' center. Participants were aged ≥55 years without significant physical limitations (frailty score ≤3/7). They completed two 5-minute CPR sessions (using 30:2 and CCC) on manikins; sessions were separated by a rest period. We used concealed block randomization to determine CPR method order. Metronome feedback maintained a compression rate of 100/minute. We measured heart rate (HR), mean arterial pressure (MAP), and the Borg Exertion Scale. CPR quality measures included the total number of compressions and the number of adequate compressions (depth ≥5 cm). Sixty-three participants were enrolled: mean age 70.8 years, 66.7% female, 60.3% with past CPR training. Bystander fatigue was similar between CPR methods: mean difference in HR -0.59 (95% CI -3.51-2.33), MAP 1.64 (95% CI -0.23-3.50), and Borg 0.46 (95% CI 0.07-0.84). Compared to 30:2, participants using CCC performed more chest compressions (480.0 v. 376.3, mean difference 107.7; p < 0.001). CPR quality decreased significantly faster when performing CCC compared to 30:2. However, performing CCC produced more adequate compressions overall, with a similar level of fatigue, compared to the 30:2 method.

  5. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Science.gov (United States)

    2010-07-01

    30 CFR 75.1730 - Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  6. Development of modifications to the material point method for the simulation of thin membranes, compressible fluids, and their interactions

    Energy Technology Data Exchange (ETDEWEB)

    York, A.R. II [Sandia National Labs., Albuquerque, NM (United States). Engineering and Process Dept.

    1997-07-01

    The material point method (MPM) is an evolution of the particle-in-cell method in which Lagrangian particles, or material points, are used to discretize the volume of a material. The particles carry properties such as mass, velocity, stress, and strain, and move through a Eulerian or spatial mesh. The momentum equation is solved on the Eulerian mesh. Modifications to the material point method are developed that allow the simulation of thin membranes, compressible fluids, and their dynamic interactions. A single layer of material points through the thickness is used to represent a membrane. The constitutive equation for the membrane is applied in the local coordinate system of each material point. Validation problems are presented and numerical convergence is demonstrated. Fluid simulation is achieved by implementing a constitutive equation for a compressible, viscous, Newtonian fluid and by solution of the energy equation. The fluid formulation is validated by simulating a traveling shock wave in a compressible fluid. Interactions of the fluid and membrane are handled naturally with the method. The fluid and membrane communicate through the Eulerian grid, on which forces are calculated due to the fluid and membrane stress states. Validation problems include simulating a projectile impacting an inflated airbag. In some impact simulations with the MPM, bodies may tend to stick together when separating. Several algorithms are proposed and tested that allow bodies to separate from each other after impact. In addition, several methods are investigated to determine the local coordinate system of a membrane material point without relying upon connectivity data.
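
    The particle-to-grid transfer at the heart of the method can be illustrated with a toy one-dimensional sketch (linear hat functions, particles assumed interior; the report's membrane and fluid extensions are not represented):

        import numpy as np

        # Scatter particle mass and momentum to grid nodes with linear weights,
        # then recover nodal velocities where mass is present.
        def p2g(xp, mp, vp, nx, dx):
            mass, mom = np.zeros(nx), np.zeros(nx)
            for x, m, v in zip(xp, mp, vp):
                i = int(x / dx)          # left grid node of the particle's cell
                w = x / dx - i           # fractional position within the cell
                for node, wt in ((i, 1 - w), (i + 1, w)):
                    mass[node] += wt * m
                    mom[node] += wt * m * v
            vel = np.divide(mom, mass, out=np.zeros(nx), where=mass > 0)
            return mass, vel

        mass, vel = p2g(xp=[0.4, 1.3], mp=[1.0, 2.0], vp=[0.5, -0.2], nx=4, dx=1.0)
        print(mass, vel)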

  7. Exploring compression techniques for ROOT IO

    Science.gov (United States)

    Zhang, Z.; Bockelman, B.

    2017-10-01

    ROOT provides a flexible format used throughout the HEP community. The number of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high "compression level" in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read). At the scale of the LHC experiments, poor design choices can result in terabytes of wasted space or wasted CPU time. We explore and attempt to quantify some of these tradeoffs. Specifically, we explore: the use of alternative compression algorithms to optimize for read performance; an alternative method of compressing individual events to allow efficient random access; and a new approach to whole-file compression. Quantitative results are given, as well as guidance on how to make compression decisions for different use cases.
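
    The compression-level tradeoff described above is easy to reproduce with any DEFLATE binding. A minimal sketch in Python, with zlib standing in for ROOT's DEFLATE settings and a synthetic payload:

```python
import time
import zlib

# Measure size vs. CPU time for different DEFLATE levels. The payload is a
# synthetic, highly repetitive stand-in for event data.
payload = b"event momentum:3.14 charge:-1 eta:0.9\n" * 50000

for level in (1, 6, 9):
    t0 = time.perf_counter()
    blob = zlib.compress(payload, level)
    t_compress = time.perf_counter() - t0

    t0 = time.perf_counter()
    zlib.decompress(blob)
    t_decompress = time.perf_counter() - t0

    print(f"level {level}: {len(blob):7d} bytes, "
          f"compress {t_compress * 1e3:6.1f} ms, "
          f"decompress {t_decompress * 1e3:6.1f} ms")
```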

  8. Heterogeneous Compression of Large Collections of Evolutionary Trees.

    Science.gov (United States)

    Matthews, Suzanne J

    2015-01-01

    Compressing heterogeneous collections of trees is an open problem in computational phylogenetics. In a heterogeneous tree collection, each tree can contain a unique set of taxa. An ideal compression method would allow for the efficient archival of large tree collections and enable scientists to identify common evolutionary relationships over disparate analyses. In this paper, we extend TreeZip to compress heterogeneous collections of trees. TreeZip is the most efficient algorithm for compressing homogeneous tree collections. To the best of our knowledge, no other domain-based compression algorithm exists for large heterogeneous tree collections or enables their rapid analysis. Our experimental results indicate that TreeZip averages 89.03 percent (72.69 percent) space savings on unweighted (weighted) collections of trees when the level of heterogeneity in a collection is moderate. The organization of the TRZ file allows for efficient computations over heterogeneous data. For example, consensus trees can be computed in mere seconds. Lastly, combining the TreeZip compressed (TRZ) file with general-purpose compression yields average space savings of 97.34 percent (81.43 percent) on unweighted (weighted) collections of trees. Our results lead us to believe that TreeZip will prove invaluable in the efficient archival of tree collections, and enables scientists to develop novel methods for relating heterogeneous collections of trees.

  9. Mammography parameters: compression, dose, and discomfort

    International Nuclear Information System (INIS)

    Blanco, S.; Di Risio, C.; Andisco, D.; Rojas, R.R.; Rojas, R.M.

    2017-01-01

    Objective: To confirm the importance of compression in mammography and relate it to the discomfort expressed by the patients. Materials and methods: Two samples of 402 and 268 mammograms were obtained from two diagnostic centres that use the same mammographic equipment but different compression techniques. The patients' ages ranged from 21 to 50 years. (authors) [es

  10. Linearly decoupled energy-stable numerical methods for multi-component two-phase compressible flow

    KAUST Repository

    Kou, Jisheng

    2017-12-06

    In this paper, for the first time we propose two linear, decoupled, energy-stable numerical schemes for multi-component two-phase compressible flow with a realistic equation of state (e.g. Peng-Robinson equation of state). The methods are constructed based on the scalar auxiliary variable (SAV) approaches for Helmholtz free energy and the intermediate velocities that are designed to decouple the tight relationship between velocity and molar densities. The intermediate velocities are also involved in the discrete momentum equation to ensure a consistency relationship with the mass balance equations. Moreover, we propose a component-wise SAV approach for a multi-component fluid, which requires solving a sequence of linear, separate mass balance equations. We prove that the methods have the unconditional energy-dissipation feature. Numerical results are presented to verify the effectiveness of the proposed methods.
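
    For readers unfamiliar with the SAV device the abstract builds on, here is a sketch of the generic scalar-auxiliary-variable reformulation for a gradient flow; symbols are generic, and the paper's multi-component, compressible extension is not reproduced.

```latex
% Sketch: for a free energy E[\phi] bounded below, pick C_0 > 0 and set
%   r(t) = \sqrt{E[\phi] + C_0}.
% The gradient flow \phi_t = -\mathcal{G}\,\delta E/\delta\phi is rewritten as
\phi_t = -\mathcal{G}\,\frac{r}{\sqrt{E[\phi] + C_0}}\,
          \frac{\delta E}{\delta \phi},
\qquad
r_t = \frac{1}{2\sqrt{E[\phi] + C_0}}
      \int \frac{\delta E}{\delta \phi}\,\phi_t \,\mathrm{d}x .
% Discretizing \phi and r together gives linear schemes satisfying a discrete
% energy law, which is the sense in which such methods are energy stable.
```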

  11. Real-time and encryption efficiency improvements of simultaneous fusion, compression and encryption method based on chaotic generators

    Science.gov (United States)

    Jridi, Maher; Alfalou, Ayman

    2018-03-01

    In this paper, enhancement of an existing optical simultaneous fusion, compression and encryption (SFCE) scheme in terms of real-time requirements, bandwidth occupation and encryption robustness is proposed. We have used an approximate form of the DCT to decrease the computational resources. Then, a novel chaos-based encryption algorithm is introduced in order to achieve the confusion and diffusion effects. In the confusion phase, the Henon map is used for row and column permutations, where the initial condition is related to the original image. Furthermore, the Skew Tent map is employed to generate another random matrix in order to carry out pixel scrambling. Finally, an adaptation of a classical diffusion process scheme is employed to strengthen security of the cryptosystem against statistical, differential, and chosen plaintext attacks. Analyses of key space, histogram, adjacent pixel correlation, sensitivity, and encryption speed of the encryption scheme are provided, and favorably compared to those of the existing crypto-compression system. The proposed method has been found to be digital/optical implementation-friendly, which facilitates the integration of the crypto-compression system on a very broad range of scenarios.
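
    A sketch of how a Henon-map keystream can drive the row and column permutations of the confusion phase; the map parameters, the seeds, and the paper's image-dependent key derivation are assumptions here, not the authors' exact scheme.

```python
import numpy as np

def henon_sequence(n, x0=0.1, y0=0.3, a=1.4, b=0.3):
    """Iterate the Henon map and return the x-coordinates as a keystream."""
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        xs[i] = x
    return xs

img = np.arange(16, dtype=np.uint8).reshape(4, 4)     # toy "image"
rows = np.argsort(henon_sequence(img.shape[0]))       # chaotic row permutation
cols = np.argsort(henon_sequence(img.shape[1], x0=0.2))

scrambled = img[rows][:, cols]
restored = scrambled[np.argsort(rows)][:, np.argsort(cols)]
assert np.array_equal(restored, img)                  # permutation is invertible
print(scrambled)
```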

  12. Fast Detection of Compressively Sensed IR Targets Using Stochastically Trained Least Squares and Compressed Quadratic Correlation Filters

    KAUST Repository

    Millikan, Brian; Dutta, Aritra; Sun, Qiyu; Foroosh, Hassan

    2017-01-01

    Target detection of potential threats at night can be deployed on a costly infrared focal plane array with high resolution. Due to the compressibility of infrared image patches, the high resolution requirement could be reduced with target detection capability preserved. For this reason, a compressive midwave infrared imager (MWIR) with a low-resolution focal plane array has been developed. As the most probable coefficient indices of the support set of the infrared image patches could be learned from the training data, we develop stochastically trained least squares (STLS) for MWIR image reconstruction. Quadratic correlation filters (QCF) have been shown to be effective for target detection and there are several methods for designing a filter. Using the same measurement matrix as in STLS, we construct a compressed quadratic correlation filter (CQCF) employing filter designs for compressed infrared target detection. We apply CQCF to the U.S. Army Night Vision and Electronic Sensors Directorate dataset. Numerical simulations show that the recognition performance of our algorithm matches that of the standard full reconstruction methods, but at a fraction of the execution time.

  13. Fast Detection of Compressively Sensed IR Targets Using Stochastically Trained Least Squares and Compressed Quadratic Correlation Filters

    KAUST Repository

    Millikan, Brian

    2017-05-02

    Target detection of potential threats at night can be deployed on a costly infrared focal plane array with high resolution. Due to the compressibility of infrared image patches, the high resolution requirement could be reduced with target detection capability preserved. For this reason, a compressive midwave infrared imager (MWIR) with a low-resolution focal plane array has been developed. As the most probable coefficient indices of the support set of the infrared image patches could be learned from the training data, we develop stochastically trained least squares (STLS) for MWIR image reconstruction. Quadratic correlation filters (QCF) have been shown to be effective for target detection and there are several methods for designing a filter. Using the same measurement matrix as in STLS, we construct a compressed quadratic correlation filter (CQCF) employing filter designs for compressed infrared target detection. We apply CQCF to the U.S. Army Night Vision and Electronic Sensors Directorate dataset. Numerical simulations show that the recognition performance of our algorithm matches that of the standard full reconstruction methods, but at a fraction of the execution time.

  14. Applicability of higher-order TVD method to low mach number compressible flows

    International Nuclear Information System (INIS)

    Akamatsu, Mikio

    1995-01-01

    Steep gradients of fluid density are an influential factor in the spurious oscillation of numerical solutions of low Mach number (M<<1) compressible flows. The total variation diminishing (TVD) scheme is a promising remedy to overcome this problem and obtain accurate solutions. TVD schemes for high-speed flows are, however, not compatible with the methods commonly used for low Mach number flows based on pressure-based formulations. In the present study a higher-order TVD scheme is constructed on a modified form of each individual scalar equation of primitive variables. It is thus clarified that the concept of TVD is applicable to low Mach number flows within the framework of the existing numerical method. Results for test problems of the moving interface of two-component gases with density ratio ≥ 4 demonstrate the accurate and robust (wiggle-free) behaviour of the scheme. (author)
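
    The TVD ingredient the abstract refers to can be illustrated with the classical minmod slope limiter; this is a textbook sketch, not the paper's pressure-based formulation.

```python
import numpy as np

def minmod(a, b):
    """Elementwise minmod: the smaller slope when signs agree, else zero."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)),
                    0.0)

def limited_slopes(u, dx):
    """TVD-limited cell slopes for a 1D field on a uniform grid."""
    fwd = np.diff(u, append=u[-1]) / dx   # forward differences
    bwd = np.diff(u, prepend=u[0]) / dx   # backward differences
    return minmod(fwd, bwd)

# Sharp two-gas interface with density ratio 4, as in the test problems above.
u = np.array([1.0, 1.0, 1.0, 4.0, 4.0, 4.0])
print(limited_slopes(u, dx=1.0))  # slopes vanish at the jump: no new wiggles
```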

  15. An efficient compression scheme for bitmap indices

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed; the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than general-purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are appropriate not only for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices and B-tree indices. In addition, we also verified that the average query response time
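
    The word-aligned idea behind WAH can be sketched in a few lines: 31-bit groups become literal words, while all-zero or all-one groups fold into run (fill) words. Real WAH packs runs and tails into 32-bit machine words; this toy version keeps tuples for readability.

```python
def wah_encode(bits: str):
    """Toy word-aligned encoding of a bit string into literal/fill words."""
    words, i = [], 0
    while i < len(bits):
        group = bits[i:i + 31].ljust(31, "0")   # real WAH treats tails specially
        if group in ("0" * 31, "1" * 31):
            # fill word: merge with the previous run of the same bit value
            if words and words[-1][0] == "fill" and words[-1][1] == group[0]:
                words[-1] = ("fill", group[0], words[-1][2] + 1)
            else:
                words.append(("fill", group[0], 1))
        else:
            words.append(("literal", group))
        i += 31
    return words

bitmap = "0" * (31 * 4) + "0101" + "0" * 27 + "1" * (31 * 2)
for word in wah_encode(bitmap):
    print(word)   # ('fill','0',4), one literal word, ('fill','1',2)
```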

  16. Investigation of the influence of different surface regularization methods for cylindrical concrete specimens in axial compression tests

    Directory of Open Access Journals (Sweden)

    R. MEDEIROS

    Full Text Available ABSTRACT This study was conducted with the aim of evaluating the influence of different methods of end surface preparation on compressive strength test specimens. Four methods were compared: mechanical wear by grinding with a diamond wheel, as established by NBR 5738; mechanical wear with a diamond saw, as established by NM 77; an unbonded system using neoprene pads in metal retainer rings, as established by C1231; and a bonded capping method with sulfur mortar, as established by NBR 5738 and NM 77. Four concrete mixes with different strength levels were prepared, two at group 1 and two at group 2 strength levels as established by NBR 8953. Group 1 consists of classes C20 to C50, in 5 MPa steps, also known as normal strength concrete. Group 2 comprises classes C55 and C60 to C100, in 10 MPa steps, also known as high strength concrete. Compression tests were carried out at 7 and 28 days for the 4 surface preparation methods. The results indicate that the method established by NBR 5738 is the most effective across the 4 strength levels considered, since it presents the lowest dispersion of test values, as measured by the coefficient of variation, and, in almost all cases, the highest mean rupture strength. The method described by NBR 5738 achieved the expected strength level in all tests.

  17. Prevention of deep vein thrombosis in potential neurosurgical patients. A randomized trial comparing graduated compression stockings alone or graduated compression stockings plus intermittent pneumatic compression with control

    International Nuclear Information System (INIS)

    Turpie, A.G.; Hirsh, J.; Gent, M.; Julian, D.; Johnson, J.

    1989-01-01

    In a randomized trial of neurosurgical patients, groups wearing graduated compression stockings alone (group 1) or graduated compression stockings plus intermittent pneumatic compression (IPC) (group 2) were compared with an untreated control group in the prevention of deep vein thrombosis (DVT). In both active treatment groups, the graduated compression stockings were continued for 14 days or until hospital discharge, if earlier. In group 2, IPC was continued for seven days. All patients underwent DVT surveillance with iodine 125-labeled fibrinogen leg scanning and impedance plethysmography. Venography was carried out if either test became abnormal. Deep vein thrombosis occurred in seven (8.8%) of 80 patients in group 1, in seven (9.0%) of 78 patients in group 2, and in 16 (19.8%) of 81 patients in the control group. The observed differences among these rates are statistically significant. The results of this study indicate that graduated compression stockings alone or in combination with IPC are effective methods of preventing DVT in neurosurgical patients

  18. Comparison of changes in tidal volume associated with expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation.

    Science.gov (United States)

    Morino, Akira; Shida, Masahiro; Tanaka, Masashi; Sato, Kimihiro; Seko, Toshiaki; Ito, Shunsuke; Ogawa, Shunichi; Takahashi, Naoaki

    2015-07-01

    [Purpose] This study was designed to compare and clarify the relationship between expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation, with a focus on tidal volume. [Subjects and Methods] The subjects were 18 patients on prolonged mechanical ventilation, who had undergone tracheostomy. Each patient received expiratory rib cage compression and expiratory abdominal compression; the order of implementation was randomized. Subjects were positioned in a 30° lateral recumbent position, and a 2-kgf compression was applied. For expiratory rib cage compression, the rib cage was compressed unilaterally; for expiratory abdominal compression, the area directly above the navel was compressed. Tidal volume values were the actual measured values divided by body weight. [Results] Tidal volume values were as follows: at rest, 7.2 ± 1.7 mL/kg; during expiratory rib cage compression, 8.3 ± 2.1 mL/kg; during expiratory abdominal compression, 9.1 ± 2.2 mL/kg. There was a significant difference between the tidal volume during expiratory abdominal compression and that at rest. The tidal volume in expiratory rib cage compression was strongly correlated with that in expiratory abdominal compression. [Conclusion] These results indicate that expiratory abdominal compression may be an effective alternative to the manual breathing assist procedure.

  19. Advances in compressible turbulent mixing

    International Nuclear Information System (INIS)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately

  20. Advances in compressible turbulent mixing

    Energy Technology Data Exchange (ETDEWEB)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E. [eds.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  1. Rotary compression process for producing toothed hollow shafts

    Directory of Open Access Journals (Sweden)

    J. Tomczak

    2014-10-01

    Full Text Available The paper presents the results of numerical analyses of the rotary compression process for hollow stepped shafts with herringbone teeth. The numerical simulations were performed by the finite element method (FEM), using the commercial software package DEFORM-3D. The results of numerical modelling aimed at determining the effect of billet wall thickness on product shape and on the rotary compression process are presented. The distributions of strains, temperatures, and the damage criterion, as well as the force parameters of the process determined in the simulations, are also given. The numerical results confirm the possibility of producing hollow toothed shafts from tube billets by rotary compression methods.

  2. A hybrid data compression approach for online backup service

    Science.gov (United States)

    Wang, Hua; Zhou, Ke; Qin, MingKang

    2009-08-01

    With the popularity of SaaS (Software as a Service), backup services have become a hot topic in storage applications. Given the large number of backup users, reducing the massive data load is a key problem for system designers, and data compression provides a good solution. Traditional data compression applications adopt a single method, which has limitations in some respects: data stream compression can only realize intra-file compression, while de-duplication is used to eliminate inter-file redundant data, so neither alone meets the compression efficiency needed by backup service software. This paper proposes a novel hybrid compression approach with two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to realize intra-file de-duplication. Several compression algorithms were adopted to measure the compression ratio and CPU time, and the adaptability of each algorithm to particular situations is also analyzed. The performance analysis shows that great improvement is achieved through the hybrid compression policy.
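
    A minimal sketch of the two-level policy: fixed-size chunking with SHA-256 fingerprints for global de-duplication across users, and zlib for stream compression of unique chunks. All three concrete choices are assumptions for illustration, not necessarily the paper's.

```python
import hashlib
import zlib

CHUNK = 4096
store = {}   # fingerprint -> compressed chunk, shared by all backup users

def backup(data: bytes):
    """Level 1: global de-dup by fingerprint. Level 2: compress unique chunks."""
    recipe = []
    for off in range(0, len(data), CHUNK):
        chunk = data[off:off + CHUNK]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:
            store[fp] = zlib.compress(chunk)
        recipe.append(fp)
    return recipe

def restore(recipe):
    return b"".join(zlib.decompress(store[fp]) for fp in recipe)

doc = b"quarterly report line\n" * 1000
r1 = backup(doc)                 # user A
r2 = backup(doc + b"edited v2")  # user B shares almost every chunk with A
assert restore(r1) == doc
print(len(store), "unique chunks stored for", len(r1) + len(r2), "references")
```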

  3. Optimisation algorithms for ECG data compression.

    Science.gov (United States)

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.

  4. Integer Set Compression and Statistical Modeling

    DEFF Research Database (Denmark)

    Larsson, N. Jesper

    2014-01-01

    Compression of integer sets and sequences has been extensively studied for settings where elements follow a uniform probability distribution. In addition, methods exist that exploit clustering of elements in order to achieve higher compression performance. In this work, we address the case where enumeration of elements may be arbitrary or random, but where statistics is kept in order to estimate the probabilities of elements. We present a recursive subset-size encoding method that is able to benefit from statistics, and explore the effects of permuting the enumeration order based on element probabilities.

  5. Lossless Geometry Compression Through Changing 3D Coordinates into 1D

    Directory of Open Access Journals (Sweden)

    Yongkui Liu

    2013-08-01

    Full Text Available A method of lossless geometry compression of the vertex coordinates of grid models is presented. First, the 3D coordinates are pre-processed into a specific form. These 3D coordinates are then reduced to 1D data by representing the three coordinates of each vertex with a single position number, a large integer. To minimize the integers, they are sorted and the differences between adjacent vertices are stored in a vertex table. In addition to the geometry compression of coordinates, an improved method for storing the compressed topological data in a facet table is proposed to make the method more complete and efficient. The experimental results show that the proposed method has a better compression rate than the latest method of lossless geometry compression, the Isenburg-Lindstrom-Snoeyink method. The theoretical analysis and the experimental results also show that the decompression time of the new method, which matters most in practice, is short. Though the new method is explained for triangular grids, it can also be used with other forms of grid model.
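
    A sketch of the linearization-plus-delta idea: map each vertex to a single large integer, sort, and store differences between neighbours. The paper's pre-processing keeps the transform lossless; the simple fixed-point quantization used here is only for illustration.

```python
BITS = 21                    # per-axis precision; three axes fit in 63 bits
SCALE = (1 << BITS) - 1

def position_number(v, lo, hi):
    """Pack an (x, y, z) vertex into one large integer position number."""
    n = 0
    for c, l, h in zip(v, lo, hi):
        q = round((c - l) / (h - l) * SCALE)   # fixed-point axis value
        n = (n << BITS) | q
    return n

verts = [(0.10, 0.20, 0.30), (0.10, 0.20, 0.31), (0.90, 0.50, 0.10)]
lo, hi = (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)

nums = sorted(position_number(v, lo, hi) for v in verts)
deltas = [nums[0]] + [b - a for a, b in zip(nums, nums[1:])]
print(deltas)   # adjacent vertices give small differences that store compactly
```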

  6. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically in designing bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of the total cost of data generation and analysis. In particular, the DNA sequencing rate is increasing significantly faster than disk storage capacity and may eventually exceed it. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Video steganography based on bit-plane decomposition of wavelet-transformed video

    Science.gov (United States)

    Noda, Hideki; Furuta, Tomofumi; Niimi, Michiharu; Kawaguchi, Eiji

    2004-06-01

    This paper presents a steganography method using lossy compressed video which provides a natural way to send a large amount of secret data. The proposed method is based on wavelet compression for video data and bit-plane complexity segmentation (BPCS) steganography. BPCS steganography makes use of bit-plane decomposition and the characteristics of the human vision system, where noise-like regions in bit-planes of a dummy image are replaced with secret data without deteriorating image quality. In wavelet-based video compression methods such as 3-D set partitioning in hierarchical trees (SPIHT) algorithm and Motion-JPEG2000, wavelet coefficients in discrete wavelet transformed video are quantized into a bit-plane structure and therefore BPCS steganography can be applied in the wavelet domain. 3-D SPIHT-BPCS steganography and Motion-JPEG2000-BPCS steganography are presented and tested, which are the integration of 3-D SPIHT video coding and BPCS steganography, and that of Motion-JPEG2000 and BPCS, respectively. Experimental results show that 3-D SPIHT-BPCS is superior to Motion-JPEG2000-BPCS with regard to embedding performance. In 3-D SPIHT-BPCS steganography, embedding rates of around 28% of the compressed video size are achieved for twelve bit representation of wavelet coefficients with no noticeable degradation in video quality.
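
    Bit-plane decomposition, the operation BPCS steganography builds on, reduces to shifts and masks; the complexity segmentation and embedding steps are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)   # toy cover image

# Plane 0 is the least significant bit; plane 7 the most significant.
planes = [(img >> k) & 1 for k in range(8)]

# Reassembling the planes recovers the image exactly.
rebuilt = sum(p.astype(np.uint8) << k for k, p in enumerate(planes))
assert np.array_equal(rebuilt, img)
print(planes[0])   # low-order planes look noise-like and can carry payload
```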

  8. Usefulness of injecting local anesthetic before compression in stereotactic vacuum-assisted breast biopsy

    International Nuclear Information System (INIS)

    Matsuura, Akiko; Urashima, Masaki; Nishihara, Reisuke

    2009-01-01

    Stereotactic vacuum-assisted breast biopsy is a useful biopsy method. However, some patients feel unbearable breast pain due to compression, which in turn prevents the breast from being compressed sufficiently. Sufficient compression is important for fixing the breast in this method, so pain during the procedure is problematic both for patient stress and for breast fixation. We performed the biopsy in an original manner, injecting local anesthetic before compression in order to relieve compression pain. This differs from the standard method only in the order of the steps, and no special technique or device is needed. It allowed even firmer breast compression: all of the most recent 30 cases were compressed at levels greater than 120 N. The approach is thus useful not only for relieving pain but also for fixing the breast. (author)

  9. High-speed and high-ratio referential genome compression.

    Science.gov (United States)

    Liu, Yuansheng; Peng, Hui; Wong, Limsoon; Li, Jinyan

    2017-11-01

    The rapidly increasing number of genomes generated by high-throughput sequencing platforms and assembly algorithms is accompanied by problems in data storage, compression and communication. Traditional compression algorithms are unable to meet the demand of high compression ratio due to the intrinsic challenging features of DNA sequences such as small alphabet size, frequent repeats and palindromes. Reference-based lossless compression, by which only the differences between two similar genomes are stored, is a promising approach with high compression ratio. We present a high-performance referential genome compression algorithm named HiRGC. It is based on a 2-bit encoding scheme and an advanced greedy-matching search on a hash table. We compare the performance of HiRGC with four state-of-the-art compression methods on a benchmark dataset of eight human genomes. HiRGC compresses the roughly 21 gigabytes of each set of the seven target genomes into 96-260 megabytes, achieving compression ratios of 82 to 217 times. This performance is at least 1.9 times better than the best competing algorithm on its best case. Our compression speed is also at least 2.9 times faster. HiRGC is stable and robust in dealing with different reference genomes; in contrast, the competing methods' performance varies widely across reference genomes. More experiments on 100 human genomes from the 1000 Genomes Project and on genomes of several other species again demonstrate that HiRGC's performance is consistently excellent. The C++ and Java source code of our algorithm is freely available for academic and non-commercial use. It can be downloaded from https://github.com/yuansliu/HiRGC. jinyan.li@uts.edu.au. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
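
    The 2-bit base encoding HiRGC builds on is easy to sketch: four bases per byte. The greedy hash-table matching against the reference genome, the paper's actual contribution, is not reproduced here.

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq: str) -> bytes:
    """Pack a DNA string into 2 bits per base, 'A'-padded to a full byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for ch in seq[i:i + 4].ljust(4, "A"):
            b = (b << 2) | CODE[ch]
        out.append(b)
    return bytes(out)

def unpack(data: bytes, n: int) -> str:
    """Recover the first n bases from packed bytes."""
    seq = []
    for b in data:
        seq.append("".join(BASE[(b >> k) & 3] for k in (6, 4, 2, 0)))
    return "".join(seq)[:n]

s = "GATTACA"
assert unpack(pack(s), len(s)) == s
print(pack(s).hex())   # 7 bases stored in 2 bytes
```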

  10. Efficient two-dimensional compressive sensing in MIMO radar

    Science.gov (United States)

    Shahbazi, Nafiseh; Abbasfar, Aliazam; Jabbarian-Jahromi, Mohammad

    2017-12-01

    Compressive sensing (CS) has been a way to lower the sampling rate, leading to data reduction for processing in multiple-input multiple-output (MIMO) radar systems. In this paper, we further reduce the computational complexity of a pulse-Doppler collocated MIMO radar by introducing two-dimensional (2D) compressive sensing. To do so, we first introduce a new 2D formulation for the compressed received signals and then propose a new measurement matrix design for our 2D compressive sensing model, based on minimizing the coherence of the sensing matrix using a gradient descent algorithm. The simulation results show that our proposed 2D measurement matrix design using gradient descent (2D-MMDGD) has much lower computational complexity than one-dimensional (1D) methods while performing better than conventional methods such as a Gaussian random measurement matrix.
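
    The quantity such designs minimize is the mutual coherence of the sensing matrix. A sketch of evaluating it follows; the gradient-descent matrix design itself is the paper's contribution and is not reproduced.

```python
import numpy as np

def coherence(A):
    """Largest absolute inner product between distinct unit-norm columns."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(1)
A = rng.standard_normal((32, 128))   # 32 measurements of a 128-dim signal
print(f"coherence of a Gaussian random matrix: {coherence(A):.3f}")
```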

  11. High intensity pulse self-compression in short hollow core capillaries

    OpenAIRE

    Butcher, Thomas J.; Anderson, Patrick N.; Horak, Peter; Frey, Jeremy G.; Brocklesby, William S.

    2011-01-01

    The drive for shorter pulses for use in techniques such as high harmonic generation and laser wakefield acceleration requires continual improvement in post-laser pulse compression techniques. The two most commonly used methods of pulse compression for high intensity pulses are hollow capillary compression via self-phase modulation (SPM) [1] and the more recently developed filamentation [2]. Both of these methods can require propagation distances of 1-3 m to achieve spectral broadening and compression.

  12. Experimental Study on the Compressive Strength of Big Mobility Concrete with Nondestructive Testing Method

    Directory of Open Access Journals (Sweden)

    Huai-Shuai Shang

    2012-01-01

    Full Text Available An experimental study of C20, C25, C30, C40, and C50 big mobility concrete cubes from the laboratory and from construction sites was completed. Nondestructive testing (NDT) was carried out using impact rebound hammer (IRH) techniques to establish a correlation between compressive strength and rebound number. A local strength curve was established by the regression method and its superiority demonstrated. The rebound method presented is simple, quick, and reliable and covers a wide range of concrete strengths. It can be easily applied to concrete specimens as well as existing concrete structures. The final results were compared with previous ones from the literature and also with actual results obtained from samples extracted from existing structures.

  13. Sizing of Compression Coil Springs Gas Regulators Using Modern Methods CAD and CAE

    Directory of Open Access Journals (Sweden)

    Adelin Ionel Tuţă

    2010-10-01

    Full Text Available This paper presents a method for sizing the compression coil springs used in gas regulators, using CAD (Computer Aided Design) and CAE (Computer Aided Engineering) techniques. The aim of sizing is to optimize the functioning of the regulators under dynamic industrial and household conditions. A gas regulator is a device that adjusts automatically and continuously to maintain the output gas pressure within pre-set limits at varying flow and input pressure. The performance of pressure regulators, as automatic systems, depends on their behaviour under dynamic operation. Optimizing the time constant of the pneumatic actuators that drive gas regulators leads to better functioning under dynamic conditions.

  14. Compression of Short Text on Embedded Systems

    DEFF Research Database (Denmark)

    Rein, S.; Gühmann, C.; Fitzek, Frank

    2006-01-01

    The paper details a scheme for lossless compression of a short data series larger than 50 bytes. The method uses arithmetic coding and context modelling with a low-complexity data model. A data model that takes 32 kBytes of RAM already cuts the data size in half. The compression scheme just takes...

  15. Iterative dictionary construction for compression of large DNA data sets.

    Science.gov (United States)

    Kuruppu, Shanika; Beresford-Smith, Bryan; Conway, Thomas; Zobel, Justin

    2012-01-01

    Genomic repositories increasingly include individual as well as reference sequences, which tend to share long identical and near-identical strings of nucleotides. However, the sequential processing used by most compression algorithms, and the volumes of data involved, mean that these long-range repetitions are not detected. An order-insensitive, disk-based dictionary construction method can detect this repeated content and use it to compress collections of sequences. We explore a dictionary construction method that improves repeat identification in large DNA data sets. Our adaptation, COMRAD, of an existing disk-based method identifies exact repeated content in collections of sequences with similarities within and across the set of input sequences. COMRAD compresses the data over multiple passes, which is an expensive process, but allows COMRAD to compress large data sets within reasonable time and space. COMRAD allows for random access to individual sequences and subsequences without decompressing the whole data set. COMRAD has no competitor in terms of the size of data sets that it can compress (extending to many hundreds of gigabytes) and, even for smaller data sets, the results are competitive compared to alternatives; as an example, 39 S. cerevisiae genomes compressed to 0.25 bits per base.

  16. Density ratios in compressions driven by radiation pressure

    International Nuclear Information System (INIS)

    Lee, S.

    1988-01-01

    It has been suggested that in the cannonball scheme of laser compression the pellet may be considered to be compressed by the 'brute force' of the radiation pressure. For such a radiation-driven compression, an energy balance method is applied to give an equation fixing the radius compression ratio K, which is a key parameter for such intense compressions. A shock model is used to yield specific results. For a square-pulse driving power compressing a spherical pellet with a specific heat ratio of 5/3, a density compression ratio Γ of 27 is computed. Double (stepped) pulsing with linearly rising power enhances Γ to 1750. The value of Γ does not depend on the absolute magnitude of the piston power, as long as this is large enough. Further enhancement of compression by multiple (stepped) pulsing becomes obvious. The enhanced compression increases the energy gain factor G for a 100 μm DT pellet driven by radiation power of 10^16 W from 6, for a square pulse with 0.5 MJ absorbed energy, to 90, for a double (stepped) linearly rising pulse with absorbed energy of 0.4 MJ, assuming perfect coupling efficiency. (author)

  17. A spectral element-FCT method for the compressible Euler equations

    International Nuclear Information System (INIS)

    Giannakouros, J.; Karniadakis, G.E.

    1994-01-01

    A new algorithm based on spectral element discretizations and flux-corrected transport concepts is developed for the solution of the Euler equations of inviscid compressible fluid flow. A conservative formulation is proposed based on one- and two-dimensional cell-averaging and reconstruction procedures, which employ a staggered mesh of Gauss-Chebyshev and Gauss-Lobatto-Chebyshev collocation points. Particular emphasis is placed on the construction of robust boundary and interfacial conditions in one and two dimensions. It is demonstrated through shock-tube problems and two-dimensional simulations that the proposed algorithm leads to stable, non-oscillatory solutions of high accuracy. Of particular importance is the fact that dispersion errors are minimal, as shown through numerical experiments. From the operational point of view, casting the method in a spectral element formulation provides flexibility in the discretization, since a variable number of macro-elements or collocation points per element can be employed to accommodate both accuracy and geometric requirements

  18. Simulation of 2-D Compressible Flows on a Moving Curvilinear Mesh with an Implicit-Explicit Runge-Kutta Method

    KAUST Repository

    AbuAlSaud, Moataz

    2012-07-01

    The purpose of this thesis is to solve the unsteady two-dimensional compressible Navier-Stokes equations on a moving mesh using an implicit-explicit (IMEX) Runge-Kutta scheme. The moving mesh is incorporated into the equations using the Arbitrary Lagrangian Eulerian (ALE) formulation. The inviscid part of the equations is solved explicitly using a second-order Godunov method, whereas the viscous part is calculated implicitly. We simulate subsonic compressible flow over a static NACA-0012 airfoil at different angles of attack. Finally, the moving mesh is examined by harmonically oscillating the airfoil between angles of attack of 0 and 20 degrees. The numerical solution is observed to match the experimental and numerical results in the literature to within 20%.

  19. smallWig: parallel compression of RNA-seq WIG files.

    Science.gov (United States)

    Wang, Zhiying; Weissman, Tsachy; Milenkovic, Olgica

    2016-01-15

    We developed a new lossless compression method for WIG data, named smallWig, offering the best known compression rates for RNA-seq data and featuring random access functionalities that enable visualization, summary statistics analysis and fast queries from the compressed files. Our approach results in order of magnitude improvements compared with bigWig and ensures compression rates only a fraction of those produced by cWig. The key features of the smallWig algorithm are statistical data analysis and a combination of source coding methods that ensure high flexibility and make the algorithm suitable for different applications. Furthermore, for general-purpose file compression, the compression rate of smallWig approaches the empirical entropy of the tested WIG data. For compression with random query features, smallWig uses a simple block-based compression scheme that introduces only a minor overhead in the compression rate. For archival or storage space-sensitive applications, the method relies on context mixing techniques that lead to further improvements of the compression rate. Implementations of smallWig can be executed in parallel on different sets of chromosomes using multiple processors, thereby enabling desirable scaling for future transcriptome Big Data platforms. The development of next-generation sequencing technologies has led to a dramatic decrease in the cost of DNA/RNA sequencing and expression profiling. RNA-seq has emerged as an important and inexpensive technology that provides information about whole transcriptomes of various species and organisms, as well as different organs and cellular communities. The vast volume of data generated by RNA-seq experiments has significantly increased data storage costs and communication bandwidth requirements. Current compression tools for RNA-seq data such as bigWig and cWig either use general-purpose compressors (gzip) or suboptimal compression schemes that leave significant room for improvement. To substantiate

  20. Force balancing in mammographic compression

    International Nuclear Information System (INIS)

    Branderhorst, W.; Groot, J. E. de; Lier, M. G. J. T. B. van; Grimbergen, C. A.; Neeter, L. M. F. H.; Heeten, G. J. den; Neeleman, C.

    2016-01-01

    Purpose: In mammography, the height of the image receptor is adjusted to the patient before compressing the breast. An inadequate height setting can result in an imbalance between the forces applied by the image receptor and the paddle, causing the clamped breast to be pushed up or down relative to the body during compression. This leads to unnecessary stretching of the skin and other tissues around the breast, which can make the imaging procedure more painful for the patient. The goal of this study was to implement a method to measure and minimize the force imbalance, and to assess its feasibility as an objective and reproducible method of setting the image receptor height. Methods: A trial was conducted consisting of 13 craniocaudal mammographic compressions on a silicone breast phantom, each with the image receptor positioned at a different height. The image receptor height was varied over a range of 12 cm. In each compression, the force exerted by the compression paddle was increased up to 140 N in steps of 10 N. In addition to the paddle force, the authors measured the force exerted by the image receptor and the reaction force exerted on the patient body by the ground. The trial was repeated 8 times, with the phantom remounted at a slightly different orientation and position between the trials. Results: For a given paddle force, the obtained results showed that there is always exactly one image receptor height that leads to a balance of the forces on the breast. For the breast phantom, deviating from this specific height increased the force imbalance by 9.4 ± 1.9 N/cm (6.7%) for 140 N paddle force, and by 7.1 ± 1.6 N/cm (17.8%) for 40 N paddle force. The results also show that in situations where the force exerted by the image receptor is not measured, the craniocaudal force imbalance can still be determined by positioning the patient on a weighing scale and observing the changes in displayed weight during the procedure. Conclusions: In mammographic breast

  1. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    Science.gov (United States)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a proposed hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
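
    A minimal sketch of one hybrid DWT-DCT stage, assuming the PyWavelets and SciPy packages are available: a single DWT level separates a coarse approximation band, and the DCT further decorrelates it. The paper's zero-padding variant and entropy-coding back end are not reproduced.

```python
import numpy as np
import pywt                         # PyWavelets, assumed available
from scipy.fft import dctn, idctn

img = np.arange(64, dtype=float).reshape(8, 8)   # toy "satellite image"

LL, (LH, HL, HH) = pywt.dwt2(img, "haar")        # one DWT level
C = dctn(LL, norm="ortho")                       # DCT on the approximation band
C[np.abs(C) < 1.0] = 0.0                         # crude threshold (illustrative)

LL_rec = idctn(C, norm="ortho")                  # inverse path
rec = pywt.idwt2((LL_rec, (LH, HL, HH)), "haar")
print(f"max reconstruction error: {np.max(np.abs(rec - img)):.4f}")
```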

  2. Bit-wise arithmetic coding for data compression

    Science.gov (United States)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.

  3. Theoretical Analysis of Moving Reference Planes Associated with Unit Cells of Nonreciprocal Lossy Periodic Transmission-Line Structures

    Directory of Open Access Journals (Sweden)

    S. Lamultree

    2017-04-01

    Full Text Available This paper presents a theoretical analysis of moving reference planes associated with unit cells of nonreciprocal lossy periodic transmission-line structures (NRLSPTLSs by the equivalent bi-characteristic-impedance transmission line (BCITL model. Applying the BCITL theory, only the equivalent BCITL parameters (characteristic impedances for waves propagating in forward and reverse directions and associated complex propagation constants are of interest. An infinite NRLSPTLS is considered first by shifting a reference position of unit cells along TLs of interest. Then, a semi-infinite terminated NRLSPTLS is investigated in terms of associated load reflection coefficients. It is found that the equivalent BCITL characteristic impedances of the original and shifted unit cells are mathematically related by the bilinear transformation. In addition, the associated load reflection coefficients of both unit cells are mathematically related by the bilinear transformation. However, the equivalent BCITL complex propagation constants remain unchanged. Numerical results are provided to show the validity of the proposed theoretical analysis.

  4. Hardware compression using common portions of data

    Science.gov (United States)

    Chang, Jichuan; Viswanathan, Krishnamurthy

    2015-03-24

    Methods and devices are provided for data compression. Data compression can include receiving a plurality of data chunks, sampling at least some of the plurality of data chunks, extracting a common portion from a number of the plurality of data chunks based on the sampling, and storing a remainder of the plurality of data chunks in memory.

  5. Compressive buckling of black phosphorene nanotubes: an atomistic study

    Science.gov (United States)

    Nguyen, Van-Trang; Le, Minh-Quy

    2018-04-01

    We investigate, through the molecular dynamics finite element method with a Stillinger-Weber potential, the uniaxial compression of armchair and zigzag black phosphorene nanotubes. We focus especially on the effects of the tube’s diameter at a fixed length-diameter ratio, the effects of the tube’s length for a pair of armchair and zigzag tubes of equal diameter, and the effects of the tube’s diameter at fixed length. Young’s modulus, critical compressive stress, and critical compressive strain are studied and discussed for these three case studies. Compressive buckling was clearly observed in the armchair nanotubes, while local bond breaking near the boundary occurred in the zigzag ones under compression.

  6. An efficient algorithm for MR image reconstruction and compression

    International Nuclear Information System (INIS)

    Wang, Hang; Rosenfeld, D.; Braun, M.; Yan, Hong

    1992-01-01

    In magnetic resonance imaging (MRI), the original data are sampled in the spatial frequency domain. The sampled data thus constitute a set of discrete Fourier transform (DFT) coefficients, and the image is usually reconstructed by taking the inverse DFT. The image data may then be efficiently compressed using the discrete cosine transform (DCT). A method of using the DCT to treat the sampled data is presented, combining two procedures: image reconstruction and data compression. This method may be particularly useful in medical picture archiving and communication systems, where both image reconstruction and compression are important issues. 11 refs., 3 figs
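
    The pipeline the abstract describes, an inverse DFT of the sampled k-space data followed by a DCT for compression, can be sketched with SciPy on synthetic data.

```python
import numpy as np
from scipy.fft import fft2, ifft2, dctn

img = np.outer(np.hanning(64), np.hanning(64))   # synthetic stand-in MR slice

kspace = fft2(img)                 # what the scanner effectively acquires
recon = np.real(ifft2(kspace))     # reconstruction: inverse DFT

coeffs = dctn(recon, norm="ortho") # compression stage: DCT of the image
kept = np.abs(coeffs) >= 1e-3      # keep only significant coefficients
print(f"significant DCT coefficients: {kept.sum()} of {coeffs.size}")
```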

  7. A novel high-frequency encoding algorithm for image compression

    Science.gov (United States)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
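
    Steps (1), (2), and (4) of the pipeline in miniature: 8x8 block DCT, a minimized AC array, and delta coding of the DC components. Keeping the first third of row-major AC coefficients is a stand-in for the paper's minimization; the look-up table and arithmetic coder (steps 3 and 5) are omitted.

```python
import numpy as np
from scipy.fft import dctn

B = 8
img = (np.arange(64 * 64).reshape(64, 64) % 251).astype(float)   # toy image

dc_terms, minimized_ac = [], []
for r in range(0, img.shape[0], B):
    for c in range(0, img.shape[1], B):
        block = dctn(img[r:r + B, c:c + B], norm="ortho")   # step (1)
        flat = block.ravel()
        dc_terms.append(flat[0])
        ac = flat[1:]
        minimized_ac.append(ac[: len(ac) // 3])             # step (2): keep 1/3

dc_deltas = np.diff(dc_terms, prepend=0.0)                  # step (4)
print(len(dc_terms), "blocks,", len(minimized_ac[0]), "AC coefficients each")
```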

  8. Application aware approach to compression and transmission of H.264 encoded video for automated and centralized transportation surveillance.

    Science.gov (United States)

    2012-10-01

    In this report we present a transportation video coding and wireless transmission system specifically tailored to automated vehicle tracking applications. By taking into account the video characteristics and the lossy nature of the wireless channel ...

  9. Compressibility, turbulence and high speed flow

    CERN Document Server

    Gatski, Thomas B

    2009-01-01

    This book introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range through a unique complementary treatment of both the theoretical foundations and the measurement and analysis tools currently used. For the computation of turbulent compressible flows, current methods of averaging and filtering are presented so that the reader is exposed to a consistent development of applicable equation sets for both the mean or resolved fields as well as the transport equations for the turbulent stress field. For the measurement of turbulent compressible flows, current techniques ranging from hot-wire anemometry to PIV are evaluated and their limitations assessed. Dynamic features of free shear flows, including jets, mixing layers and wakes, and of wall-bounded flows, including shock-turbulence and shock boundary-layer interactions, obtained from computations, experiments and simulations, are characterized and discussed. Key features: * Describes prediction methodologies in...

  10. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Zhouzhou Liu

    2015-01-01

    Full Text Available To address the low compression ratios and high communication energy consumption of wireless-network microseismic monitoring, this paper proposes a subsection (segmented) compression algorithm based on the characteristics of microseismic signals and on compressed sensing (CS) theory applied during transmission. The algorithm segments the collected data according to the number of nonzero elements; reducing the number of combinations of nonzero elements within each segment improves the accuracy of signal reconstruction, while the properties of compressive sensing yield a high compression ratio for the signal. Experimental results show that, with the quantum chaos immune clone reconstruction (Q-CSDR) algorithm used for reconstruction, for signals with sparsity above 40 and a compression ratio above 0.4, the mean square error is less than 0.01, prolonging the network lifetime by a factor of 2.

  11. Compression and fast retrieval of SNP data.

    Science.gov (United States)

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-11-01

    The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
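
    Idea (i) above, storing each SNP of a linkage-disequilibrium block as its differences from a reference SNP, can be sketched directly; the 0/1/2 genotype coding and the toy block are illustrative assumptions.

```python
import numpy as np

# One LD block: rows are SNPs across six subjects, genotypes coded 0/1/2.
block = np.array([[0, 1, 1, 2, 0, 0],    # reference SNP (first row)
                  [0, 1, 1, 2, 0, 1],    # near-identical neighbours
                  [0, 1, 2, 2, 0, 0]], dtype=np.int8)

ref = block[0]
encoded = [(np.flatnonzero(snp != ref),          # positions that differ...
            snp[snp != ref])                     # ...and their genotype values
           for snp in block[1:]]

for pos, val in encoded:
    print(list(zip(pos.tolist(), val.tolist())))  # one or two entries per SNP
```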

  12. An analysis of the efficacy of bag-valve-mask ventilation and chest compression during different compression-ventilation ratios in manikin-simulated paediatric resuscitation.

    Science.gov (United States)

    Kinney, S B; Tibballs, J

    2000-01-01

    The ideal chest compression and ventilation ratio for children during performance of cardiopulmonary resuscitation (CPR) has not been determined. The efficacy of chest compression and ventilation during compression-ventilation ratios of 5:1, 10:2 and 15:2 was examined. Eighteen nurses, working in pairs, were instructed to provide chest compression and bag-valve-mask ventilation for 1 min with each ratio in random order on a child-sized manikin. The subjects had been previously taught paediatric CPR within the last 3 or 5 months. The efficacy of ventilation was assessed by measurement of the expired tidal volume and the number of breaths provided. The rate of chest compression was guided by a metronome set at 100/min. The efficacy of chest compressions was assessed by measurement of the rate and depth of compression. There was no significant difference in the mean tidal volume or the percentage of effective chest compressions delivered for each compression-ventilation ratio. The number of breaths delivered was greatest with the ratio of 5:1. The percentage of effective chest compressions was equal with all three methods, but the number of effective chest compressions was greatest with a ratio of 5:1. This study supports the use of a compression-ventilation ratio of 5:1 during two-rescuer paediatric cardiopulmonary resuscitation.

  13. Compressive Transient Imaging

    KAUST Repository

    Sun, Qilin

    2017-04-01

    High resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Currently, transient imaging methods suffer from low resolution or time-consuming mechanical scanning. We propose a new method based on TCSPC and compressive sensing to achieve high resolution transient imaging with a capture process of several seconds. A picosecond laser sends a series of equally spaced pulses while the synchronized SPAD camera's detection gate window has a precise phase delay at each cycle. After capturing enough points, we are able to assemble a whole signal. By inserting a DMD device into the system, we can modulate all the frames of data using binary random patterns, to later reconstruct a super resolution transient/3D image. Because the low fill factor of the SPAD sensor makes the compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We propose a new CS reconstruction algorithm that simultaneously denoises measurements suffering from Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time. Furthermore, it is not easy to reconstruct a high resolution image with only a single sensor, whereas an array requires only small patches to be reconstructed from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how the integration over the layers influences the image quality, and our algorithm works well when the measurements suffer from non-trivial Poisson noise. It is a breakthrough in the areas of both transient imaging and compressive sensing.

  14. Compressive Sampling of EEG Signals with Finite Rate of Innovation

    Directory of Open Access Journals (Sweden)

    Poh Kok-Kiong

    2010-01-01

    Full Text Available Analyses of electroencephalographic signals and subsequent diagnoses can only be done effectively on long term recordings that preserve the signals' morphologies. Currently, electroencephalographic signals are obtained at the Nyquist rate or higher, thus introducing redundancies. Existing compression methods remove these redundancies, thereby achieving compression. We propose an alternative compression scheme based on a sampling theory developed for signals with a finite rate of innovation (FRI which compresses electroencephalographic signals during acquisition. We model the signals as FRI signals and then sample them at their rate of innovation. The signals are thus effectively represented by a small set of Fourier coefficients corresponding to the signals' rate of innovation. Using the FRI theory, the original signals can be reconstructed using this set of coefficients. Seventy-two hours of electroencephalographic recording are tested, and results based on metrics used in the compression literature and on morphological similarities of electroencephalographic signals are presented. The proposed method achieves results comparable to those of wavelet compression methods, achieving low reconstruction errors while preserving the morphologies of the signals. More importantly, it introduces a new framework to acquire electroencephalographic signals at their rate of innovation, thus entailing a less costly low-rate sampling device that does not waste precious computational resources.
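
    The core of FRI sampling can be demonstrated numerically: a stream of K Diracs is fully determined by 2K+1 Fourier coefficients, from which an annihilating filter recovers the innovation locations and amplitudes. The sketch below shows that textbook principle on synthetic data; it is not the authors' EEG model, and the signal parameters are arbitrary.

    ```python
    import numpy as np

    # Synthesize a stream of K Diracs on [0, 1): the canonical FRI signal model.
    K = 3
    rng = np.random.default_rng(1)
    t_true = np.sort(rng.uniform(0, 1, K))          # unknown locations
    a_true = rng.uniform(1, 2, K)                   # unknown amplitudes

    M = 2 * K + 1                                   # 2K+1 Fourier coefficients suffice
    m = np.arange(M)
    X = (a_true * np.exp(-2j * np.pi * np.outer(m, t_true))).sum(axis=1)

    # Annihilating filter h (length K+1): sum_j h[j] X[i-j] = 0 for all valid i.
    A = np.array([[X[i - j] for j in range(K + 1)] for i in range(K, M)])
    _, _, Vh = np.linalg.svd(A)
    h = Vh[-1].conj()                               # null vector = filter taps

    # The filter roots encode the locations: u_k = exp(-2j*pi*t_k).
    u = np.roots(h)
    t_hat = np.sort(np.mod(-np.angle(u) / (2 * np.pi), 1))

    # Amplitudes follow from a Vandermonde least-squares fit.
    V = np.exp(-2j * np.pi * np.outer(m, t_hat))
    a_hat, *_ = np.linalg.lstsq(V, X, rcond=None)
    print(np.allclose(t_hat, t_true, atol=1e-8), np.round(a_hat.real, 3))
    ```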

  15. Spatial compression algorithm for the analysis of very large multivariate images

    Science.gov (United States)

    Keenan, Michael R [Albuquerque, NM

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
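
    The core operation, retaining only the significant wavelet coefficients of an image, can be sketched in a few lines with the PyWavelets package. This illustrates coefficient thresholding only; the patent's block algorithm and the downstream multivariate analysis are out of scope, and the wavelet, decomposition level and keep-fraction are arbitrary choices.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def wavelet_compress(img, wavelet="db2", level=3, keep=0.05):
        """Keep only the largest `keep` fraction of wavelet coefficients."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1 - keep)      # magnitude cutoff
        arr_c = np.where(np.abs(arr) >= thresh, arr, 0)  # zero the small coefficients
        coeffs_c = pywt.array_to_coeffs(arr_c, slices, output_format="wavedec2")
        return pywt.waverec2(coeffs_c, wavelet)

    img = np.random.rand(128, 128)    # stand-in for one channel of a multivariate image
    rec = wavelet_compress(img, keep=0.05)
    print("RMS error:", np.sqrt(np.mean((img - rec) ** 2)))
    ```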

  16. Compressed Air Production Using Vehicle Suspension

    Directory of Open Access Journals (Sweden)

    Ninad Arun Malpure

    2015-08-01

    Full Text Available Compressed air is generally produced using various types of air compressors, which consume a great deal of electrical energy and are noisy. In this paper an innovative idea is put forth for the production of compressed air using the movement of a vehicle's suspension, which is normally wasted. The conversion of the suspension's force energy into compressed air is carried out by a mechanism consisting of the vehicle suspension system, a hydraulic cylinder, a non-return valve, an air compressor and an air receiver. Air is collected in the cylinder and this energy is stored in the tank simply by driving the vehicle. The method is non-conventional, as no fuel input is required, and it is minimally polluting.

  17. Packet Header Compression for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Pekka KOSKELA

    2016-01-01

    Full Text Available Due to the extensive growth of the Internet of Things (IoT), the number of wireless devices connected to the Internet is forecast to grow to 26 billion units installed by 2020. This will challenge both the energy efficiency of wireless battery-powered devices and the bandwidth of wireless networks. One solution to both challenges could be to utilize packet header compression. This paper reviews different packet compression, and especially packet header compression, methods and studies the performance of Robust Header Compression (ROHC) in low speed radio networks such as XBEE, and in high speed radio networks such as LTE and WLAN. In all networks, the compression and decompression processing causes extra delay and power consumption, but in low speed networks energy can still be saved due to the shorter transmission time.
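
    The bandwidth saving comes from per-flow context state: static fields never need to be resent, and changing fields compress to small increments. The toy sketch below shows that context idea only; real ROHC additionally has compression states, window-based least-significant-bit encoding of counters, and CRC protection, none of which are modeled here.

    ```python
    def compress_header(header, context):
        """Send only the fields that changed relative to the per-flow context.

        `header` and `context` are dicts of field name -> value. Static fields
        (addresses, ports) compress away entirely; counters shrink to updates.
        """
        diff = {k: v for k, v in header.items() if context.get(k) != v}
        context.update(header)              # compressor and decompressor stay in sync
        return diff

    def decompress_header(diff, context):
        context.update(diff)
        return dict(context)

    ctx_tx, ctx_rx = {}, {}
    flow = [
        {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 5000, "dport": 80, "seq": 1},
        {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 5000, "dport": 80, "seq": 2},
    ]
    for pkt in flow:
        wire = compress_header(pkt, ctx_tx)   # 1st packet: full header; 2nd: {'seq': 2}
        assert decompress_header(wire, ctx_rx) == pkt
    ```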

  18. Superconductivity under uniaxial compression in β-(BDA-TTP) salts

    International Nuclear Information System (INIS)

    Suzuki, T.; Onari, S.; Ito, H.; Tanaka, Y.

    2009-01-01

    In order to clarify the mechanism of superconductivity in organic β-(BDA-TTP) salts, we study superconductivity under uniaxial compression with a non-dimerized two-band Hubbard model. We have calculated the uniaxial compression dependence of Tc by solving the Eliashberg equation using the fluctuation exchange (FLEX) approximation. The transfer integrals under uniaxial compression were estimated by the extended Hückel method. We have found that the non-monotonic behavior of Tc observed experimentally under uniaxial compression is understood by taking spin frustration and spin fluctuation into account.

  19. Superconductivity under uniaxial compression in β-(BDA-TTP) salts

    Science.gov (United States)

    Suzuki, T.; Onari, S.; Ito, H.; Tanaka, Y.

    2009-10-01

    In order to clarify the mechanism of superconductivity in organic β-(BDA-TTP) salts, we study superconductivity under uniaxial compression with a non-dimerized two-band Hubbard model. We have calculated the uniaxial compression dependence of Tc by solving the Eliashberg equation using the fluctuation exchange (FLEX) approximation. The transfer integrals under uniaxial compression were estimated by the extended Hückel method. We have found that the non-monotonic behavior of Tc observed experimentally under uniaxial compression is understood by taking spin frustration and spin fluctuation into account.

  20. Superconductivity under uniaxial compression in beta-(BDA-TTP) salts

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, T., E-mail: suzuki@rover.nuap.nagoya-u.ac.j [Department of Applied Physics and JST, TRIP, Nagoya University, Chikusa, Nagoya 464-8603 (Japan); Onari, S.; Ito, H.; Tanaka, Y. [Department of Applied Physics and JST, TRIP, Nagoya University, Chikusa, Nagoya 464-8603 (Japan)

    2009-10-15

    In order to clarify the mechanism of superconductivity in organic beta-(BDA-TTP) salts, we study superconductivity under uniaxial compression with a non-dimerized two-band Hubbard model. We have calculated the uniaxial compression dependence of T{sub c} by solving the Eliashberg equation using the fluctuation exchange (FLEX) approximation. The transfer integrals under uniaxial compression were estimated by the extended Huckel method. We have found that the non-monotonic behavior of T{sub c} observed experimentally under uniaxial compression is understood by taking spin frustration and spin fluctuation into account.

  1. The influence of kind of coating additive on the compressive strength of RCA-based concrete prepared by triple-mixing method

    Science.gov (United States)

    Urban, K.; Sicakova, A.

    2017-10-01

    The paper deals with the use of alternative powder additives (fly ash and a fine fraction of recycled concrete) to improve recycled concrete aggregate directly in the concrete mixing process. A specific mixing process (the triple mixing method) is applied, as it is favourable for this goal. Results of compressive strength after 2 and 28 days of hardening are given. Generally, using powder additives for coating the coarse recycled concrete aggregate in the first stage of triple mixing resulted in a decrease of compressive strength compared with cement. There is no substantial difference between samples based on recycled concrete aggregate and those based on natural aggregate as long as cement is used for coating. When using either fly ash or recycled concrete powder, the kind of aggregate causes more significant differences in compressive strength, with the values for samples based on recycled concrete aggregate being worse.

  2. Excavation and drying of compressed peat; Tiivistetyn turpeen nosto ja kuivaus

    Energy Technology Data Exchange (ETDEWEB)

    Erkkilae, A.; Frilander, P.; Hillebrand, K.; Nurmi, H.

    1996-12-31

    The target of this three-year (1993-1995) project was to improve peat production efficiency by developing an energy-economical excavation method for compressed peat, with which it is possible to obtain the best possible degree of compression and load from the DS-production point of view. It is possible to improve the degree of utilization of solar radiation in drying from 30 % to 40 %. The main research areas were the drying of compressed peat and peat compression. The third sub-task for 1995 was demonstration of the main parts of the method at laboratory scale. Experimental compressed peat (Compeat) drying models were made for Carex peat H7, Carex peat H5 and Carex-Sphagnum peat H7. Under the best circumstances, Compeat dried without turning in a 34 % shorter time than a milled layer made of the same peat turned twice, the initial moisture content being 4 kg H2O/kg DS. In the tests carried out in 1995 with Carex peat, compression did not have a corresponding intensifying effect on drying. Compression of Carex-Sphagnum peat H7 increased the drying speed by about 10 % compared with the drying time of an uncompressed milled layer. In the sprinkling test about 30-50 % of the sprinkled water was absorbed into the compressed peat layer, while about 70 % of the rain is absorbed into a corresponding uncompressed milled layer. Use of vibration decreased the energy consumption of the steel-surfaced nozzles by up to about 20 %, but the effect depends on the rotation speed of the macerator and the vibration power. In the new Compeat method (a production method for compressed peat) developed in this research, the peat is loosened from the field surface by milling a 3-5 cm thick layer of peat at a moisture content of 75-80 %

  3. Free-beam soliton self-compression in air

    Science.gov (United States)

    Voronin, A. A.; Mitrofanov, A. V.; Sidorov-Biryukov, D. A.; Fedotov, A. B.; Pugžlys, A.; Panchenko, V. Ya; Shumakova, V.; Ališauskas, S.; Baltuška, A.; Zheltikov, A. M.

    2018-02-01

    We identify a physical scenario whereby soliton transients generated in freely propagating laser beams within the regions of anomalous dispersion in air can be compressed as a part of their free-beam spatiotemporal evolution to yield few-cycle mid- and long-wavelength-infrared field waveforms, whose peak power is substantially higher than the peak power of the input pulses. We show that this free-beam soliton self-compression scenario does not require ionization or laser-induced filamentation, enabling high-throughput self-compression of mid- and long-wavelength-infrared laser pulses within a broad range of peak powers from tens of gigawatts up to the terawatt level. We also demonstrate that this method of pulse compression can be extended to long-range propagation, providing self-compression of high-peak-power laser pulses in atmospheric air within propagation ranges as long as hundreds of meters, suggesting new ways towards longer-range standoff detection and remote sensing.

  4. Expansion and compression shock wave calculation in pipes with the C.V.M. numerical method

    International Nuclear Information System (INIS)

    Raymond, P.; Caumette, P.; Le Coq, G.; Libmann, M.

    1983-03-01

    The Control Variables Method (C.V.M.) for fluid transient computations has been used to compute the propagation of expansion and compression shock waves. In this paper, analytical solutions for shock wave and rarefaction wave propagation are first detailed. Then, after a brief description of the C.V.M. technique and its stability and monotonicity properties, we present results for the standard shock tube problem and the reflection of a shock wave; finally, a comparison between calculations and experimental results obtained on the ELF facility is given

  5. Combined Sparsifying Transforms for Compressive Image Fusion

    Directory of Open Access Journals (Sweden)

    ZHAO, L.

    2013-11-01

    Full Text Available In this paper, we present a new compressive image fusion method based on combined sparsifying transforms. First, the framework of compressive image fusion is introduced briefly. Then, combined sparsifying transforms are presented to enhance the sparsity of images. Finally, a reconstruction algorithm based on the nonlinear conjugate gradient is presented to get the fused image. The simulations demonstrate that by using the combined sparsifying transforms better results can be achieved in terms of both the subjective visual effect and the objective evaluation indexes than using only a single sparsifying transform for compressive image fusion.

  6. JPEG2000 vs. full frame wavelet packet compression for smart card medical records.

    Science.gov (United States)

    Leehan, Joaquín Azpirox; Lerallut, Jean-Francois

    2006-01-01

    This paper describes a comparison among different compression methods to be used in the context of electronic health records in the newer version of "smart cards". The JPEG2000 standard is compared to a full-frame wavelet packet compression method at high (33:1 and 50:1) compression rates. Results show that the full-frame method outperforms the JPEG2K standard qualitatively and quantitatively.

  7. Linearly and nonlinearly optimized weighted essentially non-oscillatory methods for compressible turbulence

    Science.gov (United States)

    Taylor, Ellen Meredith

    Weighted essentially non-oscillatory (WENO) methods have been developed to simultaneously provide robust shock-capturing in compressible fluid flow and avoid excessive damping of fine-scale flow features such as turbulence. This is accomplished by constructing multiple candidate numerical stencils that adaptively combine so as to provide high order of accuracy and high bandwidth-resolving efficiency in continuous flow regions while averting instability-provoking interpolation across discontinuities. Under certain conditions in compressible turbulence, however, numerical dissipation remains unacceptably high even after optimization of the linear optimal stencil combination that dominates in smooth regions. The remaining nonlinear error arises from two primary sources: (i) the smoothness measurement that governs the application of adaptation away from the optimal stencil and (ii) the numerical properties of individual candidate stencils that govern numerical accuracy when adaptation engages. In this work, both of these sources are investigated, and corrective modifications to the WENO methodology are proposed and evaluated. Excessive nonlinear error due to the first source is alleviated through two separately considered procedures appended to the standard smoothness measurement technique that are designated the "relative smoothness limiter" and the "relative total variation limiter." In theory, appropriate values of their associated parameters should be insensitive to flow configuration, thereby sidestepping the prospect of costly parameter tuning; and this expectation of broad effectiveness is assessed in direct numerical simulations (DNS) of one-dimensional inviscid test problems, three-dimensional compressible isotropic turbulence of varying Reynolds and turbulent Mach numbers, and shock/isotropic-turbulence interaction (SITI). In the process, tools for efficiently comparing WENO adaptation behavior in smooth versus shock-containing regions are developed. The
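
    For reference, the sketch below shows the standard fifth-order WENO-JS building blocks that this work modifies: three candidate stencils, the Jiang-Shu smoothness indicators, and nonlinear weights that fall back to the linear optimal combination (0.1, 0.6, 0.3) in smooth regions. The relative smoothness and relative total variation limiters proposed in the thesis are not reproduced here.

    ```python
    import numpy as np

    def weno5_reconstruct(f, eps=1e-6):
        """Fifth-order WENO-JS reconstruction of f at the i+1/2 interfaces.

        f holds cell averages; returns left-biased interface values for
        indices 2 .. len(f)-3 (the stencil needs two cells on each side).
        """
        fm2, fm1, f0, fp1, fp2 = f[:-4], f[1:-3], f[2:-2], f[3:-1], f[4:]

        # Candidate third-order reconstructions on the three sub-stencils.
        p0 = (2*fm2 - 7*fm1 + 11*f0) / 6
        p1 = (-fm1 + 5*f0 + 2*fp1) / 6
        p2 = (2*f0 + 5*fp1 - fp2) / 6

        # Jiang-Shu smoothness indicators.
        b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
        b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
        b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2

        # Nonlinear weights: approach the linear weights (0.1, 0.6, 0.3) in smooth
        # regions and de-emphasize stencils that cross a discontinuity.
        a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
        return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)

    x = np.linspace(0, 1, 101)
    f = np.where(x < 0.5, 1.0, 0.0)       # step profile
    print(weno5_reconstruct(f)[45:52])    # essentially non-oscillatory near the jump
    ```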

  8. Low-Complexity Spatial-Temporal Filtering Method via Compressive Sensing for Interference Mitigation in a GNSS Receiver

    Directory of Open Access Journals (Sweden)

    Chung-Liang Chang

    2014-01-01

    Full Text Available A compressive sensing based array processing method is proposed to lower the complexity and computational load of the array system and to maintain robust anti-jam performance in a global navigation satellite system (GNSS) receiver. First, the spatial and temporal compression matrices are multiplied with the array signal, which results in a smaller array system. Second, a two-dimensional (2D) minimum variance distortionless response (MVDR) beamformer is employed in the proposed system to mitigate narrowband and wideband interference simultaneously. An iterative process is performed to find the optimal spatial and temporal gain vectors by the MVDR approach, which enhances the steering gain toward the direction of arrival (DOA) of interest while setting nulls at the DOAs of interference. Finally, a simulated navigation signal is generated offline by a graphical user interface tool and employed in the proposed algorithm. The theoretical analysis results of the proposed algorithm are verified by simulation.
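
    The MVDR step admits a compact closed form: given the array covariance R and the steering vector a of the desired direction, the weights w = R^-1 a / (a^H R^-1 a) pass the desired signal with unit gain while minimizing output power from other directions. Below is a minimal spatial-only sketch for a uniform linear array; the paper's spatial-temporal compression and 2D iteration are not reproduced, and the geometry and signal amplitudes are illustrative.

    ```python
    import numpy as np

    def steering_vector(theta_deg, n_elems, spacing=0.5):
        """Plane-wave steering vector for a uniform linear array (half-wavelength)."""
        k = np.arange(n_elems)
        return np.exp(2j * np.pi * spacing * k * np.sin(np.radians(theta_deg)))

    def mvdr_weights(R, a):
        """MVDR weights: w = R^-1 a / (a^H R^-1 a)."""
        Ri_a = np.linalg.solve(R, a)
        return Ri_a / (a.conj() @ Ri_a)

    rng = np.random.default_rng(0)
    n, snapshots = 8, 2000
    a_sig = steering_vector(0, n)              # desired satellite at broadside
    a_jam = steering_vector(40, n)             # jammer 40 degrees off
    # Snapshots: weak signal + strong interference + complex noise.
    X = (0.1 * a_sig[:, None] * rng.normal(size=snapshots)
         + 10 * a_jam[:, None] * rng.normal(size=snapshots)
         + 0.5 * (rng.normal(size=(n, snapshots)) + 1j*rng.normal(size=(n, snapshots))))
    R = X @ X.conj().T / snapshots             # sample covariance
    w = mvdr_weights(R, a_sig)
    print("gain toward signal:", abs(w.conj() @ a_sig))   # = 1 (distortionless)
    print("gain toward jammer:", abs(w.conj() @ a_jam))   # ~ 0 (null)
    ```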

  9. Dual compression is not an uncommon type of iliac vein compression syndrome.

    Science.gov (United States)

    Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu

    2017-09-01

    Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We describe an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used for evaluating the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the maximum compression point, pressure gradient across the stenosis, or percentage of compression level. On CT and venography, sole compression commonly presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented as one of two types: a lengthy stenosis along the upper side of the LCIV, or a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression appeared significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can thus present with dual compression. This type of compression has typical manifestations on late venography and CT.

  10. Estimates of post-acceleration longitudinal bunch compression

    International Nuclear Information System (INIS)

    Judd, D.L.

    1977-01-01

    A simple analytic method is developed, based on physical approximations, for treating transient implosive longitudinal compression of bunches of heavy ions in an accelerator system for ignition of inertial-confinement fusion pellet targets. Parametric dependences of attainable compressions and of beam path lengths and times during compression are indicated for ramped pulsed-gap lines, rf systems in storage and accumulator rings, and composite systems, including sections of free drift. It appears that for high-confidence pellets in a plant producing 1000 MW of electric power the needed pulse lengths cannot be obtained with rings alone unless an unreasonably large number of them are used, independent of choice of rf harmonic number. In contrast, pulsed-gap lines alone can meet this need. The effects of an initial inward compressive drift and of longitudinal emittance are included

  11. Real-time lossless compression of depth streams

    KAUST Repository

    Schneider, Jens

    2017-08-17

    Various examples are provided for lossless compression of data streams. In one example, a Z-lossless (ZLS) compression method includes generating compacted depth information by condensing information of a depth image and a compressed binary representation of the depth image using histogram compaction and decorrelating the compacted depth information to produce bitplane slicing of residuals by spatial prediction. In another example, an apparatus includes imaging circuitry that can capture one or more depth images and processing circuitry that can generate compacted depth information by condensing information of a captured depth image and a compressed binary representation of the captured depth image using histogram compaction; decorrelate the compacted depth information to produce bitplane slicing of residuals by spatial prediction; and generate an output stream based upon the bitplane slicing.

  12. Real-time lossless compression of depth streams

    KAUST Repository

    Schneider, Jens

    2017-01-01

    Various examples are provided for lossless compression of data streams. In one example, a Z-lossless (ZLS) compression method includes generating compacted depth information by condensing information of a depth image and a compressed binary representation of the depth image using histogram compaction and decorrelating the compacted depth information to produce bitplane slicing of residuals by spatial prediction. In another example, an apparatus includes imaging circuitry that can capture one or more depth images and processing circuitry that can generate compacted depth information by condensing information of a captured depth image and a compressed binary representation of the captured depth image using histogram compaction; decorrelate the compacted depth information to produce bitplane slicing of residuals by spatial prediction; and generate an output stream based upon the bitplane slicing.
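
    Two of the named ingredients, histogram compaction and bitplane slicing, can be illustrated compactly: depth images typically use only a small subset of the 16-bit code space, so remapping the occurring values to contiguous codes shrinks the number of bitplanes to encode. The sketch below slices the compacted codes directly; the patent's spatial prediction and residual coding stages are omitted, and the toy depth values are illustrative.

    ```python
    import numpy as np

    def histogram_compaction(depth):
        """Map the depth values that actually occur to contiguous integer codes."""
        values = np.unique(depth)                  # sorted occurring values
        codes = np.searchsorted(values, depth)     # dense re-indexing
        return codes.astype(np.uint16), values     # codes plus lookup table

    def bitplane_slices(codes):
        """Split the code image into binary bitplanes (LSB first)."""
        n_planes = max(int(codes.max()).bit_length(), 1)
        return [((codes >> b) & 1).astype(np.uint8) for b in range(n_planes)]

    depth = np.array([[500, 500, 1200],
                      [500, 1200, 8000]], dtype=np.uint16)
    codes, lut = histogram_compaction(depth)       # 3 distinct values -> 2 bitplanes
    planes = bitplane_slices(codes)
    assert (lut[codes] == depth).all()             # lossless round trip
    print(len(planes), "bitplanes instead of", 16)
    ```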

  13. Digital storage and analysis of color Doppler echocardiograms

    Science.gov (United States)

    Chandra, S.; Thomas, J. D.

    1997-01-01

    Color Doppler flow mapping has played an important role in clinical echocardiography. Most of the clinical work, however, has been primarily qualitative. Although qualitative information is very valuable, there is considerable quantitative information stored within the velocity map that has not been extensively exploited so far. Recently, many researchers have shown interest in using the encoded velocities to address clinical problems such as quantification of valvular regurgitation, calculation of cardiac output, and characterization of ventricular filling. In this article, we review some basic physics and engineering aspects of color Doppler echocardiography, as well as the drawbacks of trying to retrieve velocities from videotape data. Digital storage, which plays a critical role in performing quantitative analysis, is discussed in some detail with special attention to velocity encoding in DICOM 3.0 (the medical image storage standard) and the use of digital compression. Lossy compression can considerably reduce file size with minimal loss of information (mostly redundant); this is critical for digital storage because of the enormous amount of data generated (a 10 minute study could require 18 gigabytes of storage capacity). Lossy JPEG compression and its impact on quantitative analysis have been studied, showing that images compressed at 27:1 using the JPEG algorithm compare favorably with directly digitized video images, the current gold standard. Some potential applications of these velocities in analyzing proximal convergence zones and mitral inflow, and some areas of future development, are also discussed in the article.

  14. Measurement of compressed breast thickness by optical stereoscopic photogrammetry.

    Science.gov (United States)

    Tyson, Albert H; Mawdsley, Gordon E; Yaffe, Martin J

    2009-02-01

    The determination of volumetric breast density (VBD) from mammograms requires accurate knowledge of the thickness of the compressed breast. In attempting to accurately determine VBD from images obtained on conventional mammography systems, the authors found that the thickness reported by a number of mammography systems in the field varied by as much as 15 mm when compressing the same breast or phantom. In order to evaluate the behavior of mammographic compression systems and to be able to predict the thickness at different locations in the breast on patients, they have developed a method for measuring the local thickness of the breast at all points of contact with the compression paddle using optical stereoscopic photogrammetry. On both flat (solid) and compressible phantoms, the measurements were accurate to better than 1 mm with a precision of 0.2 mm. In a pilot study, this method was used to measure thickness on 108 volunteers who were undergoing mammography examination. This measurement tool will allow us to characterize paddle surface deformations, deflections and calibration offsets for mammographic units.

  15. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
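
    The rsync-like idea can be sketched on a single node: blocks of the current state are checksummed against the digests of a previously saved template checkpoint, and only the blocks that changed are compressed and written. The parallel broadcast of the template across nodes is not modeled, and the block size and hash choice are illustrative assumptions.

    ```python
    import hashlib
    import zlib

    BLOCK = 4096

    def block_digests(data):
        """Checksum of each fixed-size block, as held in the template checkpoint."""
        return [hashlib.sha1(data[i:i + BLOCK]).digest()
                for i in range(0, len(data), BLOCK)]

    def delta_checkpoint(state, template_digests):
        """Keep only blocks whose checksum differs from the template's."""
        delta = {}
        for i in range(0, len(state), BLOCK):
            idx = i // BLOCK
            block = state[i:i + BLOCK]
            if idx >= len(template_digests) or \
               hashlib.sha1(block).digest() != template_digests[idx]:
                delta[idx] = zlib.compress(block)   # lossless-compress changed blocks
        return delta

    template = bytes(BLOCK * 8)                     # previously saved checkpoint
    state = bytearray(template)
    state[5000] = 1                                 # one byte changed since then
    delta = delta_checkpoint(bytes(state), block_digests(template))
    print(len(delta), "of 8 blocks need to be written")   # -> 1
    ```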

  16. Large Eddy Simulation for Compressible Flows

    CERN Document Server

    Garnier, E; Sagaut, P

    2009-01-01

    Large Eddy Simulation (LES) of compressible flows is still a widely unexplored area of research. The authors, whose books are considered the most relevant monographs in this field, provide the reader with a comprehensive state-of-the-art presentation of the available LES theory and application. This book is a sequel to "Large Eddy Simulation for Incompressible Flows", as most of the research on LES for compressible flows is based on variable density extensions of models, methods and paradigms that were developed within the incompressible flow framework. The book addresses both the fundamentals and the practical industrial applications of LES in order to point out gaps in the theoretical framework as well as to bridge the gap between LES research and the growing need to use it in engineering modeling. After introducing the fundamentals on compressible turbulence and the LES governing equations, the mathematical framework for the filtering paradigm of LES for compressible flow equations is established. Instead ...

  17. SIMULATION OF CONSOLIDATION OF A TWO-PHASE SOIL LAYER UNDER COMPRESSION

    Directory of Open Access Journals (Sweden)

    G. E. Agakhanov

    2016-01-01

    Full Text Available Aim. The article is devoted to solving the problem of the consolidation of a two-phase soil layer under a uniformly distributed compressive load. Methods. Using a computational model of a continuous isotropic body with linear hereditary creep, assuming time-invariant material properties and a Poisson's ratio constant in time, and taking into account the different stiffness of the soil skeleton under loading and unloading, the problem of consolidation of a two-phase soil layer under a uniformly distributed compressive load is solved. Special cases of the stress-strain state are considered. Results. Analysis of the obtained solution shows that when the Poisson's ratio of the medium is constant in time, creep does not influence the stresses and only affects the deformations or displacements (settlement), which agrees with previously established results. When the Poisson's ratio is constant, the stress-strain state of the medium can also be determined by the method of elastic analogy, solving the corresponding instantaneous elastic problem. The equation for pore pressure is solved by the Fourier method. Based on the analytical solution obtained, a flowchart and a program were implemented in the Matlab package using the built-in programming language of the Matlab system. Conclusion. For two drainage conditions, the pore pressure function, the lateral pressure function and the degree of consolidation of the layer were calculated with and without creep, and their distribution surfaces and variation plots were constructed.

  18. Medical image compression and its application to TDIS-FILE equipment

    International Nuclear Information System (INIS)

    Tsubura, Shin-ichi; Nishihara, Eitaro; Iwai, Shunsuke

    1990-01-01

    In order to compress medical images for filing and communication, we have developed a compression algorithm which compresses images with remarkable quality using a high-pass filtering method. Hardware for this compression algorithm was also developed and applied to TDIS (total digital imaging system)-FILE equipment. In the future, hardware based on this algorithm will be developed for various types of diagnostic equipment and PACS. This technique has the following characteristics: (1) significant reduction of artifacts; (2) acceptable quality for clinical evaluation at 15:1 to 20:1 compression ratio; and (3) high-speed processing and compact hardware. (author)

  19. Dynamic mode decomposition for compressive system identification

    Science.gov (United States)

    Bai, Zhe; Kaiser, Eurika; Proctor, Joshua L.; Kutz, J. Nathan; Brunton, Steven L.

    2017-11-01

    Dynamic mode decomposition has emerged as a leading technique to identify spatiotemporal coherent structures from high-dimensional data. In this work, we integrate and unify two recent innovations that extend DMD to systems with actuation and systems with heavily subsampled measurements. When combined, these methods yield a novel framework for compressive system identification, where it is possible to identify a low-order model from limited input-output data and reconstruct the associated full-state dynamic modes with compressed sensing, providing interpretability of the state of the reduced-order model. When full-state data is available, it is possible to dramatically accelerate downstream computations by first compressing the data. We demonstrate this unified framework on simulated data of fluid flow past a pitching airfoil, investigating the effects of sensor noise, different types of measurements (e.g., point sensors, Gaussian random projections, etc.), compression ratios, and different choices of actuation (e.g., localized, broadband, etc.). This example provides a challenging and realistic test-case for the proposed method, and results indicate that the dominant coherent structures and dynamics are well characterized even with heavily subsampled data.
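
    As background, the exact-DMD core that the compressive framework builds on fits a best-fit linear operator between snapshot matrices and extracts its eigenvalues and modes. A minimal sketch on synthetic data follows; the actuation extension and the compressed-sensing mode reconstruction described in the abstract are not included, and the toy data is arbitrary.

    ```python
    import numpy as np

    def dmd(X, Xp, r):
        """Exact DMD: fit Xp ~ A X in the least-squares sense, rank-r truncated."""
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
        Atilde = U.conj().T @ Xp @ Vh.conj().T / s      # r x r projected operator
        eigvals, W = np.linalg.eig(Atilde)
        modes = Xp @ Vh.conj().T @ (W / s[:, None])     # exact DMD modes
        return eigvals, modes

    # Toy data: a decaying traveling wave, which is rank 2 in space.
    n, m, dt = 64, 50, 0.1
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    t = np.arange(m) * dt
    data = np.exp(-0.1 * t)[None, :] * np.sin(x[:, None] + 2.0 * t[None, :])
    lam, modes = dmd(data[:, :-1], data[:, 1:], r=2)
    print(np.log(lam) / dt)   # recovers the continuous-time eigenvalues -0.1 +/- 2i
    ```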

  20. Shock compression of synthetic opal

    International Nuclear Information System (INIS)

    Inoue, A; Okuno, M; Okudera, H; Mashimo, T; Omurzak, E; Katayama, S; Koyano, M

    2010-01-01

    Structural change of synthetic opal under shock-wave compression up to 38.1 GPa has been investigated using SEM, the X-ray diffraction method (XRD), and infrared (IR) and Raman spectroscopies. The obtained information indicates that dehydration and polymerization of surface silanol due to high shock and residual temperatures are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses its opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in the as-prepared synthetic opal may be relaxed to larger rings, such as 6-membered rings, by the high residual temperature. Therefore, the residual temperature may be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence, which may originate from a layered structure produced by shock compression. Finally, the sample fuses due to the very high residual temperature at 38.1 GPa and its structure approaches that of fused SiO2 glass. However, internal silanol groups still remain even at 38.1 GPa.

  1. Shock compression of synthetic opal

    Science.gov (United States)

    Inoue, A.; Okuno, M.; Okudera, H.; Mashimo, T.; Omurzak, E.; Katayama, S.; Koyano, M.

    2010-03-01

    Structural change of synthetic opal under shock-wave compression up to 38.1 GPa has been investigated using SEM, the X-ray diffraction method (XRD), and infrared (IR) and Raman spectroscopies. The obtained information indicates that dehydration and polymerization of surface silanol due to high shock and residual temperatures are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses its opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in the as-prepared synthetic opal may be relaxed to larger rings, such as 6-membered rings, by the high residual temperature. Therefore, the residual temperature may be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence, which may originate from a layered structure produced by shock compression. Finally, the sample fuses due to the very high residual temperature at 38.1 GPa and its structure approaches that of fused SiO2 glass. However, internal silanol groups still remain even at 38.1 GPa.

  2. Shock compression of synthetic opal

    Energy Technology Data Exchange (ETDEWEB)

    Inoue, A; Okuno, M; Okudera, H [Department of Earth Sciences, Kanazawa University Kanazawa, Ishikawa, 920-1192 (Japan); Mashimo, T; Omurzak, E [Shock Wave and Condensed Matter Research Center, Kumamoto University, Kumamoto, 860-8555 (Japan); Katayama, S; Koyano, M, E-mail: okuno@kenroku.kanazawa-u.ac.j [JAIST, Nomi, Ishikawa, 923-1297 (Japan)

    2010-03-01

    Structural change of synthetic opal under shock-wave compression up to 38.1 GPa has been investigated using SEM, the X-ray diffraction method (XRD), and infrared (IR) and Raman spectroscopies. The obtained information indicates that dehydration and polymerization of surface silanol due to high shock and residual temperatures are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses its opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO{sub 4} tetrahedra in the as-prepared synthetic opal may be relaxed to larger rings, such as 6-membered rings, by the high residual temperature. Therefore, the residual temperature may be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence, which may originate from a layered structure produced by shock compression. Finally, the sample fuses due to the very high residual temperature at 38.1 GPa and its structure approaches that of fused SiO{sub 2} glass. However, internal silanol groups still remain even at 38.1 GPa.

  3. A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali

    2017-02-25

    We present an embedded ghost-fluid method for numerical solution of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. A PDE-based multidimensional extrapolation approach is used to reconstruct the solution in the ghost-fluid regions and impose boundary conditions on the fluid-solid interface, coupled with a multi-dimensional algebraic interpolation for freshly cleared cells. The CNS equations are numerically solved by a second order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping a high resolution mesh around the embedded boundary and in regions of high solution gradients. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low Mach number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing solution accuracy against implementation difficulty, are briefly discussed as well.

  4. A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry

    KAUST Repository

    Almarouf, Mohamad Abdulilah Alhusain Alali; Samtaney, Ravi

    2017-01-01

    We present an embedded ghost-fluid method for numerical solution of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. A PDE-based multidimensional extrapolation approach is used to reconstruct the solution in the ghost-fluid regions and impose boundary conditions on the fluid-solid interface, coupled with a multi-dimensional algebraic interpolation for freshly cleared cells. The CNS equations are numerically solved by a second order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping a high resolution mesh around the embedded boundary and in regions of high solution gradients. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low Mach number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing solution accuracy against implementation difficulty, are briefly discussed as well.

  5. Composition-Structure-Property Relations of Compressed Borosilicate Glasses

    Science.gov (United States)

    Svenson, Mouritz N.; Bechgaard, Tobias K.; Fuglsang, Søren D.; Pedersen, Rune H.; Tjell, Anders Ø.; Østergaard, Martin B.; Youngman, Randall E.; Mauro, John C.; Rzoska, Sylwester J.; Bockowski, Michal; Smedskjaer, Morten M.

    2014-08-01

    Hot isostatic compression is an interesting method for modifying the structure and properties of bulk inorganic glasses. However, the structural and topological origins of the pressure-induced changes in macroscopic properties are not yet well understood. In this study, we report on the pressure and composition dependences of density and micromechanical properties (hardness, crack resistance, and brittleness) of five soda-lime borosilicate glasses with constant modifier content, covering the extremes from Na-Ca borate to Na-Ca silicate end members. Compression experiments are performed at pressures ≤1.0 GPa at the glass transition temperature in order to allow processing of large samples with relevance for industrial applications. In line with previous reports, we find an increasing fraction of tetrahedral boron, density, and hardness but a decreasing crack resistance and brittleness upon isostatic compression. Interestingly, a strong linear correlation between plastic (irreversible) compressibility and initial trigonal boron content is demonstrated, as the trigonal boron units are the ones most disposed for structural and topological rearrangements upon network compaction. A linear correlation is also found between plastic compressibility and the relative change in hardness with pressure, which could indicate that the overall network densification is responsible for the increase in hardness. Finally, we find that the micromechanical properties exhibit significantly different composition dependences before and after pressurization. The findings have important implications for tailoring microscopic and macroscopic structures of glassy materials and thus their properties through the hot isostatic compression method.

  6. Large breast compressions: Observations and evaluation of simulations

    Energy Technology Data Exchange (ETDEWEB)

    Tanner, Christine; White, Mark; Guarino, Salvatore; Hall-Craggs, Margaret A.; Douek, Michael; Hawkes, David J. [Centre of Medical Image Computing, UCL, London WC1E 6BT, United Kingdom and Computer Vision Laboratory, ETH Zuerich, 8092 Zuerich (Switzerland); Centre of Medical Image Computing, UCL, London WC1E 6BT (United Kingdom); Department of Surgery, UCL, London W1P 7LD (United Kingdom); Department of Imaging, UCL Hospital, London NW1 2BU (United Kingdom); Department of Surgery, UCL, London W1P 7LD (United Kingdom); Centre of Medical Image Computing, UCL, London WC1E 6BT (United Kingdom)

    2011-02-15

    Purpose: Several methods have been proposed to simulate large breast compressions such as those occurring during x-ray mammography. However, the evaluation of these methods against real data is rare. The aim of this study is to learn more about the deformation behavior of breasts and to assess a simulation method. Methods: Magnetic resonance (MR) images of 11 breasts before and after applying a relatively large in vivo compression in the medial direction were acquired. Nonrigid registration was employed to study the deformation behavior. Optimal material properties for finite element modeling were determined and their prediction performance was assessed. The realism of simulated compressions was evaluated by comparing the breast shapes on simulated and real mammograms. Results: Following image registration, 19 breast compressions from 8 women were studied. An anisotropic deformation behavior, with a reduced elongation in the anterior-posterior direction and an increased stretch in the inferior-superior direction was observed. Using finite element simulations, the performance of isotropic and transverse isotropic material models to predict the displacement of internal landmarks was compared. Isotropic materials reduced the mean displacement error of the landmarks from 23.3 to 4.7 mm, on average, after optimizing material properties with respect to breast surface alignment and image similarity. Statistically significantly smaller errors were achieved with transverse isotropic materials (4.1 mm, P=0.0045). Homogeneous material models performed substantially worse (transverse isotropic: 5.5 mm; isotropic: 6.7 mm). Of the parameters varied, the amount of anisotropy had the greatest influence on the results. Optimal material properties varied less when grouped by patient rather than by compression magnitude (mean: 0.72 vs 1.44). Employing these optimal materials for simulating mammograms from ten MR breast images of a different cohort resulted in more realistic breast

  7. Large breast compressions: Observations and evaluation of simulations

    International Nuclear Information System (INIS)

    Tanner, Christine; White, Mark; Guarino, Salvatore; Hall-Craggs, Margaret A.; Douek, Michael; Hawkes, David J.

    2011-01-01

    Purpose: Several methods have been proposed to simulate large breast compressions such as those occurring during x-ray mammography. However, the evaluation of these methods against real data is rare. The aim of this study is to learn more about the deformation behavior of breasts and to assess a simulation method. Methods: Magnetic resonance (MR) images of 11 breasts before and after applying a relatively large in vivo compression in the medial direction were acquired. Nonrigid registration was employed to study the deformation behavior. Optimal material properties for finite element modeling were determined and their prediction performance was assessed. The realism of simulated compressions was evaluated by comparing the breast shapes on simulated and real mammograms. Results: Following image registration, 19 breast compressions from 8 women were studied. An anisotropic deformation behavior, with a reduced elongation in the anterior-posterior direction and an increased stretch in the inferior-superior direction was observed. Using finite element simulations, the performance of isotropic and transverse isotropic material models to predict the displacement of internal landmarks was compared. Isotropic materials reduced the mean displacement error of the landmarks from 23.3 to 4.7 mm, on average, after optimizing material properties with respect to breast surface alignment and image similarity. Statistically significantly smaller errors were achieved with transverse isotropic materials (4.1 mm, P=0.0045). Homogeneous material models performed substantially worse (transverse isotropic: 5.5 mm; isotropic: 6.7 mm). Of the parameters varied, the amount of anisotropy had the greatest influence on the results. Optimal material properties varied less when grouped by patient rather than by compression magnitude (mean: 0.72 vs 1.44). Employing these optimal materials for simulating mammograms from ten MR breast images of a different cohort resulted in more realistic breast

  8. Photonic compressive sensing enabled data efficient time stretch optical coherence tomography

    Science.gov (United States)

    Mididoddi, Chaitanya K.; Wang, Chao

    2018-03-01

    Photonic time stretch (PTS) has enabled real-time spectral domain optical coherence tomography (OCT). However, this method generates a torrent of massive data at GHz stream rates, which must be captured at the Nyquist rate. If the OCT interferogram signal is sparse in the Fourier domain, which is always true for samples with a limited number of layers, it can be captured at a lower (sub-Nyquist) acquisition rate following the compressive sensing method. In this work we report a data-compressed PTS-OCT system based on photonic compressive sensing, achieving 66% compression with a low acquisition rate of 50 MHz and a measurement speed of 1.51 MHz per depth profile. A new method is also proposed to improve the system with all-optical random pattern generation, which completely avoids the electronic bottleneck of traditional pseudorandom binary sequence (PRBS) generators.

  9. Lattice Boltzmann methods for thermal flows: Continuum limit and applications to compressible Rayleigh Taylor systems

    NARCIS (Netherlands)

    Scagliarini, Andrea; Biferale, L.; Sbragaglia, M.; Sugiyama, K.; Toschi, F.

    2010-01-01

    We compute the continuum thermohydrodynamical limit of a new formulation of lattice kinetic equations for thermal compressible flows, recently proposed by Sbragaglia et al. [J. Fluid Mech. 628, 299 (2009)]. We show that the hydrodynamical manifold is given by the correct compressible

  10. An improved ghost-cell immersed boundary method for compressible flow simulations

    KAUST Repository

    Chi, Cheng; Lee, Bok Jik; Im, Hong G.

    2016-01-01

    This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to the commonly used approaches, in the present work ghost cells are mirrored through the boundary

  11. Lagrangian statistics in compressible isotropic homogeneous turbulence

    Science.gov (United States)

    Yang, Yantao; Wang, Jianchun; Shi, Yipeng; Chen, Shiyi

    2011-11-01

    In this work we conducted direct numerical simulation (DNS) of forced compressible isotropic homogeneous turbulence and investigated the flow statistics from the Lagrangian point of view, i.e., statistics computed along the trajectories of passive tracers. The numerical method combined the Eulerian field solver developed by Wang et al. (2010, J. Comp. Phys., 229, 5257-5279) with a Lagrangian module for tracking the tracers and recording the data. Lagrangian probability density functions (p.d.f.'s) were then calculated for both kinetic and thermodynamic quantities. In order to isolate the shearing part of the flow from the compressing part, we employed the Helmholtz decomposition to decompose the flow field (mainly the velocity field) into solenoidal and compressive parts. The solenoidal part was compared with the incompressible case, while the compressibility effect shows up in the compressive part. Lagrangian structure functions and cross-correlations between various quantities will also be discussed. This work was supported in part by China's Turbulence Program under Grant No. 2009CB724101.
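
    The Helmholtz split used here is straightforward to perform spectrally for a periodic field: the compressive (curl-free) part is the projection of the Fourier-transformed velocity onto the wavevector, and the solenoidal part is the divergence-free remainder. A minimal 2D sketch, with an arbitrary synthetic velocity field, is given below.

    ```python
    import numpy as np

    def helmholtz_fft(u, v):
        """Split a periodic 2D velocity field into solenoidal and compressive parts."""
        n = u.shape[0]
        k = np.fft.fftfreq(n) * n
        kx, ky = np.meshgrid(k, k, indexing="ij")
        k2 = np.where(kx**2 + ky**2 == 0, 1.0, kx**2 + ky**2)  # avoid 0/0 at k = 0
        uh, vh = np.fft.fft2(u), np.fft.fft2(v)
        proj = (kx * uh + ky * vh) / k2        # (k . u_hat) / |k|^2
        uc = np.real(np.fft.ifft2(kx * proj))  # compressive (curl-free) part
        vc = np.real(np.fft.ifft2(ky * proj))
        return u - uc, v - vc, uc, vc

    n = 64
    xs = np.linspace(0, 2 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    u = np.cos(X) * np.sin(Y) + np.sin(X)      # mix of rotation and dilatation
    v = -np.sin(X) * np.cos(Y)
    us, vs, uc, vc = helmholtz_fft(u, v)

    # The solenoidal part should be divergence-free (checked spectrally).
    k = np.fft.fftfreq(n) * n
    kx, ky = np.meshgrid(k, k, indexing="ij")
    div_s = np.fft.ifft2(1j * (kx * np.fft.fft2(us) + ky * np.fft.fft2(vs))).real
    print("max |div solenoidal|:", np.abs(div_s).max())   # ~ 1e-13
    ```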

  12. Activated carbon from thermo-compressed wood and other lignocellulosic precursors

    Directory of Open Access Journals (Sweden)

    Capart, R.

    2007-05-01

    Full Text Available The effects of thermo-compression on physical properties such as bulk density, mass yield and surface area, and on the adsorption capacity of activated carbon, were studied. Activated carbon samples were prepared from thermo-compressed and virgin fir wood by two methods: physical activation with CO2 and chemical activation with KOH. A preliminary thermo-compression step appears to be an easy way to give a soft wood a bulk density almost three times larger than its initial value. Thermo-compression increased the yield regardless of the mode of activation. Physical activation caused structural alteration, which enhanced the enlargement of micropores and even their degradation, leading to the formation of mesopores. Chemical activation conferred on the activated carbon a heterogeneous and exclusively microporous nature. Moreover, when coupled with chemical activation, thermo-compression resulted in a satisfactory yield (23%), a high surface area (>1700 m2.g-1), and a good adsorption capacity for two model pollutants in aqueous solution: methylene blue and phenol. Activated carbon prepared from thermo-compressed wood exhibited a higher adsorption capacity for both pollutants than did a commercial activated carbon.

  13. Transient Response of Thin Wire above a Layered Half-Space Using TDIE/FDTD Hybrid Method

    Directory of Open Access Journals (Sweden)

    Bing Wei

    2012-01-01

    Full Text Available The TDIE/FDTD hybrid method is applied to calculate the transient response of a thin wire above a lossy layered half-space. The time-domain reflection of the layered half-space is computed by a one-dimensional modified FDTD method. Then, the transient response of the thin wire induced by the two excitation sources (the incident wave and the reflected wave) is calculated by the TDIE method. Finally, numerical results are given to illustrate the feasibility and high efficiency of the presented scheme.

  14. 3D Polygon Mesh Compression with Multi Layer Feed Forward Neural Networks

    Directory of Open Access Journals (Sweden)

    Emmanouil Piperakis

    2003-06-01

    Full Text Available In this paper, an experiment is conducted which demonstrates that multi layer feed forward neural networks are capable of compressing 3D polygon meshes. Our compression method not only preserves the initial accuracy of the represented object but also enhances it. The neural network employed encodes the vertex coordinates, the connectivity and the normal information in one compact form, converting the discrete surface polygon representation into an analytic, solid one. Furthermore, the 3D object in its compressed neural form can be used directly for rendering, without decompression. The neural compressed representation is amenable to 3D transformations without the need for any anti-aliasing techniques; transformations do not disrupt the accuracy of the geometry. Our method does not suffer from any scaling problem and was tested with objects of 300 to 10^7 polygons, such as the David of Michelangelo, achieving in all cases on the order of O(b^3) fewer bits for the representation than other commonly known compression methods. The simplicity of our algorithm and the established mathematical background of neural networks, combined with their aptness for hardware implementation, can establish this method as a good solution for polygon compression and, if further investigated, a novel approach to 3D collision, animation and morphing.

  15. Image compression using moving average histogram and RBF network

    International Nuclear Information System (INIS)

    Khowaja, S.; Ismaili, I.A.

    2015-01-01

    Modernization and globalization have made multimedia technology one of the fastest growing fields in recent times, but the optimal use of bandwidth and storage remains a topic that attracts the research community. Considering that images have the lion's share of multimedia communication, efficient image compression techniques have become a basic need for the optimal use of bandwidth and space. This paper proposes a novel method for image compression based on the fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique reduces colour intensity levels using the moving average histogram technique, followed by correction of the intensity levels using RBF networks at the reconstruction phase. Whereas existing methods have been tested on low-resolution images, the proposed method has been tested on 35 images of varying resolution to give a clear assessment of the technique, and compared with existing algorithms in terms of CR (compression ratio), MSE (mean square error), PSNR (peak signal-to-noise ratio) and computational complexity. The outcome shows that the proposed methodology offers a better trade-off between compression ratio, PSNR (which determines image quality) and computational complexity. (author)
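
    One plausible reading of the intensity-reduction stage is sketched below: the image histogram is smoothed with a moving average and the strongest levels are kept as a reduced palette, to which all pixels are mapped. The RBF correction network applied at reconstruction is not reproduced, and the window and palette sizes are illustrative guesses.

    ```python
    import numpy as np

    def moving_average_histogram_levels(img, window=16, n_levels=16):
        """Pick representative intensity levels from a smoothed histogram."""
        hist = np.bincount(img.ravel(), minlength=256).astype(float)
        kernel = np.ones(window) / window
        smooth = np.convolve(hist, kernel, mode="same")   # moving-average smoothing
        # Keep the n_levels intensities with the highest smoothed counts.
        return np.sort(np.argsort(smooth)[-n_levels:])

    def quantize(img, levels):
        """Map every pixel to the nearest representative level (8 -> 4 bits here)."""
        idx = np.abs(img[..., None].astype(int) - levels[None, None, :]).argmin(-1)
        return levels[idx].astype(np.uint8)

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    levels = moving_average_histogram_levels(img)
    q = quantize(img, levels)                  # 16 levels -> 4 bits per pixel
    print("mean abs error:", np.abs(img.astype(int) - q).mean())
    ```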

  16. Simulation of moving boundaries interacting with compressible reacting flows using a second-order adaptive Cartesian cut-cell method

    Science.gov (United States)

    Muralidharan, Balaji; Menon, Suresh

    2018-03-01

    A high-order adaptive Cartesian cut-cell method, developed in the past by the authors [1] for simulation of compressible viscous flow over static embedded boundaries, is now extended to reacting flow simulations over moving interfaces. The main difficulty in simulating moving boundary problems with immersed boundary techniques is the loss of conservation of mass, momentum and energy during the transition of numerical grid cells from solid to fluid and vice versa. Gas-phase reactions near solid boundaries can introduce large source terms into the governing equations which, if not properly treated for moving boundaries, can result in inaccurate numerical predictions. The small-cell clustering algorithm proposed in our previous work is now extended to handle moving boundaries while enforcing strict conservation; the clustering algorithm also preserves the smoothness of the solution near moving surfaces. A second-order Runge-Kutta scheme in which the boundaries are allowed to change during the sub-time steps is employed. This scheme improves the time accuracy of the calculations when the body motion is driven by hydrodynamic forces. Simple one-dimensional reacting and non-reacting studies of a moving piston are first performed to demonstrate the accuracy of the proposed method. Results are then reported for flow past moving cylinders at subsonic and supersonic velocities in a viscous compressible flow and are compared with theoretical and previously available experimental data. The ability of the scheme to handle deforming boundaries and the interaction of hydrodynamic forces with rigid body motion is demonstrated using different test cases. Finally, the method is applied to investigate the detonation initiation and stabilization mechanisms on a cylinder and a sphere, when they are launched into a detonable mixture. The effect of the filling pressure on the detonation stabilization mechanisms over a hyper-velocity sphere launched into a hydrogen
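
    The conservation difficulty that cell clustering addresses can be illustrated with a one-line volume-weighted merge: when a cut cell's volume drops below a threshold, its conserved state is folded into a neighbour so that mass, momentum and energy are preserved exactly. This mirrors the idea only; it is not the authors' actual algorithm.

```python
# Illustrative volume-weighted merge of a small cut cell into a neighbour.
def merge_small_cell(U_small, V_small, U_big, V_big):
    """Conservative merge of per-volume conserved states (any field count)."""
    V = V_small + V_big
    U = [(us * V_small + ub * V_big) / V for us, ub in zip(U_small, U_big)]
    return U, V

# rho, rho*u, rho*E in a tiny cut cell merged into its neighbour:
U, V = merge_small_cell([1.2, 0.6, 2.5], 0.05, [1.0, 0.4, 2.2], 1.0)
print(U, V)   # totals of U_i * V_i are unchanged by the merge
```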

  17. An exact and consistent adjoint method for high-fidelity discretization of the compressible flow equations

    Science.gov (United States)

    Subramanian, Ramanathan Vishnampet Ganapathi

    Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvement. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs. Such methods have enabled sensitivity analysis and active control of turbulence at engineering flow conditions by providing gradient information at computational cost comparable to that of simulating the flow. They accelerate convergence of numerical design optimization algorithms, though this is predicated on the availability of an accurate gradient of the discretized flow equations. This is challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. We analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space-time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge-Kutta-like scheme.
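
    The flavour of a discrete adjoint, differentiating the discretized equations themselves rather than the continuous ones, is easy to show on a toy problem. The sketch below applies it to forward Euler on a scalar decay equation and checks the gradient against a finite difference; it is illustrative only and unrelated to the dissertation's Runge-Kutta scheme.

```python
# Toy discrete adjoint: du/dt = -a*u by forward Euler, cost J = 0.5*u_N^2,
# gradient dJ/da obtained by stepping the adjoint backwards through the
# discrete update u_{n+1} = u_n - dt*a*u_n. All names are illustrative.
import numpy as np

def forward(a, u0=1.0, dt=0.01, N=200):
    u = np.empty(N + 1); u[0] = u0
    for n in range(N):
        u[n + 1] = u[n] - dt * a * u[n]        # discrete state equation
    return u

def adjoint_grad(a, u, dt=0.01):
    N = len(u) - 1
    lam = u[N]                                 # dJ/du_N for J = 0.5*u_N^2
    g = 0.0
    for n in range(N - 1, -1, -1):
        g += lam * (-dt * u[n])                # d u_{n+1} / d a
        lam = lam * (1.0 - dt * a)             # d u_{n+1} / d u_n
    return g

a = 2.0
u = forward(a)
g_adj = adjoint_grad(a, u)
eps = 1e-6
g_fd = (0.5 * forward(a + eps)[-1] ** 2
        - 0.5 * forward(a - eps)[-1] ** 2) / (2 * eps)
print(g_adj, g_fd)   # the two gradients should agree closely
```

    Because the adjoint is derived from the discrete update itself, the gradient is exact for the discretization up to round-off, which is the property the dissertation pursues for high-order compressible flow solvers.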

  18. The impact of chest compression rates on quality of chest compressions - a manikin study.

    Science.gov (United States)

    Field, Richard A; Soar, Jasmeet; Davies, Robin P; Akhtar, Naheed; Perkins, Gavin D

    2012-03-01

    Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Twenty healthcare professionals performed 2 min of continuous compressions on an instrumented manikin at rates of 80, 100, 120, 140 and 160 min(-1) in random order. An electronic metronome was used to guide compression rate. Compression data were analysed by repeated measures ANOVA and are presented as mean (SD); non-parametric data were analysed by the Friedman test. At faster compression rates there were significant improvements in the number of compressions delivered (160(2) at 80 min(-1) vs. 312(13) compressions at 160 min(-1), P<0.001) and in compression duty-cycle (43(6)% at 80 min(-1) vs. 50(7)% at 160 min(-1), P<0.001). This came at the cost of a significant reduction in compression depth (39.5(10) mm at 80 min(-1) vs. 34.5(11) mm at 160 min(-1), P<0.001) and earlier decay in compression quality (median decay point 120 s at 80 min(-1) vs. 40 s at 160 min(-1), P<0.001). Additionally, not all participants achieved the target rate (100% at 80 min(-1) vs. 70% at 160 min(-1)). Rates above 120 min(-1) had the greatest impact on reducing chest compression quality. For Guidelines 2005 trained rescuers, a chest compression rate of 100-120 min(-1) for 2 min is feasible whilst maintaining adequate chest compression quality in terms of depth, duty-cycle, leaning, and decay in compression performance. Further studies are needed to assess the impact of the Guidelines 2010 recommendation for deeper and faster chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  19. 46 CFR 188.10-21 - Compressed gas.

    Science.gov (United States)

    2010-10-01

    ... PROVISIONS Definition of Terms Used in This Subchapter § 188.10-21 Compressed gas. This term includes any... by the Reid method covered by the American Society for Testing Materials Method of Test for Vapor...

  20. A stable penalty method for the compressible Navier-Stokes equations: II: One-dimensional domain decomposition schemes

    DEFF Research Database (Denmark)

    Hesthaven, Jan

    1997-01-01

    This paper presents asymptotically stable schemes for patching of nonoverlapping subdomains when approximating the compressible Navier-Stokes equations given in conservation form. The scheme is a natural extension of a previously proposed scheme for enforcing open boundary conditions, and as a result the patching of subdomains is local in space. The scheme is studied in detail for Burgers's equation and developed for the compressible Navier-Stokes equations in general curvilinear coordinates. The versatility of the proposed scheme for the compressible Navier-Stokes equations is illustrated.
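
    The weak (penalty) enforcement of boundary data can be illustrated on Burgers's equation, which the paper uses as its model problem. The finite-difference sketch below adds a penalty term -tau*(u - g) at the boundary nodes instead of overwriting their values; the discretization and the penalty strength are our assumptions, not the paper's spectral scheme.

```python
# Penalty enforcement of boundary conditions for viscous Burgers's equation.
import numpy as np

nx, nu, dt = 101, 0.05, 2e-4
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = np.sin(np.pi * x)            # initial condition
tau = 0.5 / dt                   # penalty strength (assumption)

def rhs(u):
    dudx = np.gradient(u, dx)
    d2udx2 = np.gradient(dudx, dx)
    r = -u * dudx + nu * d2udx2
    r[0] -= tau * (u[0] - 0.0)   # weakly enforce u(0) = 0
    r[-1] -= tau * (u[-1] - 0.0) # weakly enforce u(1) = 0
    return r

for _ in range(2000):
    u = u + dt * rhs(u)          # forward Euler time step

print("boundary values:", u[0], u[-1])  # driven toward 0 by the penalty
```

    Because the boundary data enter only through local penalty terms in the right-hand side, the same mechanism patches adjacent subdomains: each interface node penalizes the mismatch with its neighbour's value, keeping the coupling local in space.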