ChIPWig: a random access-enabling lossless and lossy compression method for ChIP-seq data.
Ravanmehr, Vida; Kim, Minji; Wang, Zhiying; Milenkovic, Olgica
2018-03-15
Chromatin immunoprecipitation sequencing (ChIP-seq) experiments are inexpensive and time-efficient, and result in massive datasets that introduce significant storage and maintenance challenges. To address the resulting Big Data problems, we propose a lossless and lossy compression framework specifically designed for ChIP-seq Wig data, termed ChIPWig. ChIPWig enables random access and summary statistics lookups, and is based on the asymptotic theory of optimal point density design for nonuniform quantizers. We tested the ChIPWig compressor on 10 ChIP-seq datasets generated by the ENCODE consortium. On average, lossless ChIPWig reduced the file sizes to merely 6% of the original and offered a 6-fold improvement in compression rate compared to bigWig. The lossy mode further reduced file sizes 2-fold compared to the lossless mode, with little or no effect on peak calling and motif discovery using specialized NarrowPeaks methods. Compression and decompression speeds are of the order of 0.2 sec/MB on general-purpose computers. The source code and binaries, implemented in C++, are freely available for download at https://github.com/vidarmehr/ChIPWig-v2. milenkov@illinois.edu. Supplementary data are available at Bioinformatics online.
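ChIPWig's lossy mode rests on nonuniform scalar quantization, in which small coverage values receive finer bins than large ones. As a rough illustration of that idea only (this is not the ChIPWig point-density design; the logarithmic companding function and step size below are invented for the example), such a quantizer might look like:

```python
import math

def quantize_log(value, step=0.5):
    """Map a nonnegative coverage value to a bin index via logarithmic
    companding: small values get fine bins, large values coarse ones."""
    return round(math.log1p(value) / step)

def dequantize_log(index, step=0.5):
    """Reconstruct a representative value for a bin index."""
    return math.expm1(index * step)

# Round-trip: absolute error grows with magnitude, as intended for
# heavy-tailed coverage data.
for v in (0.0, 3.0, 25.0, 400.0):
    q = quantize_log(v)
    print(v, q, round(dequantize_log(q), 2))
```

The small integer bin indices that result are far more compressible by a downstream entropy coder than the raw values.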
Task-oriented lossy compression of magnetic resonance images
Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques
1996-04-01
A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.
Selectively Lossy, Lossless, and/or Error Robust Data Compression Method
National Oceanic and Atmospheric Administration, Department of Commerce — Lossless compression techniques provide efficient compression of hyperspectral satellite data. The present invention combines the advantages of a clustering with...
Lossy compression for Animated Web Visualisation
Prudden, R.; Tomlinson, J.; Robinson, N.; Arribas, A.
2017-12-01
This talk will discuss a technique for lossy data compression specialised for web animation. We set ourselves the challenge of visualising a full forecast weather field as an animated 3D web page. This data is richly spatiotemporal, yet it is routinely communicated to the public as a 2D map, and scientists are largely limited to visualising data via static 2D maps or 1D scatter plots. We wanted to present Met Office weather forecasts in a way that represents all the generated data. Our approach was to repurpose the technology used to stream high-definition videos. This enabled us to achieve high rates of compression while remaining compatible with both web browsers and GPU processing. Since lossy compression necessarily involves discarding information, evaluating the results is an important and difficult problem; it is essentially a problem of forecast verification. The difficulty lies in deciding what it means for two weather fields to be "similar", as simple definitions such as mean squared error often lead to undesirable results. In the second part of the talk, I will briefly discuss some ideas for alternative measures of similarity.
Directory of Open Access Journals (Sweden)
Xiangwei Li
2014-12-01
Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images that takes the distinctive features of CSI into account. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining low computational complexity.
Lossy compression of quality scores in genomic data.
Cánovas, Rodrigo; Moffat, Alistair; Turpin, Andrew
2014-08-01
Next-generation sequencing technologies are revolutionizing medicine. Data from sequencing technologies are typically represented as a string of bases, an associated sequence of per-base quality scores, and other metadata, and in aggregate can require a large amount of space. The quality scores indicate how accurate the bases are with respect to the sequencing process, that is, how confident the sequencer is of having called them correctly, and they are the largest component in datasets in which they are retained. Previous research has examined how to store sequences of bases effectively; here we add to that knowledge by examining methods for compressing quality scores. The quality values originate in a continuous domain, so if a fidelity criterion is introduced, there is flexibility in how these values are represented, allowing lossy compression of the quality score data. We present existing compression options for quality score data, and then introduce two new lossy techniques. Experiments measuring the trade-off between compression ratio and information loss are reported, including quantification of the effect of lossy representations on a downstream application that carries out single nucleotide polymorphism and insertion/deletion detection. The new methods are demonstrably superior to other techniques when assessed against the spectrum of possible trade-offs between storage required and fidelity of representation. An implementation of the methods described here is available at https://github.com/rcanovas/libCSAM. rcanovas@student.unimelb.edu.au Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
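One common family of lossy schemes for quality scores, of the kind such studies compare against, simply bins the Phred values and stores a single representative per bin. The bin boundaries and representatives below are hypothetical, chosen only to illustrate the mechanism (this is not one of the paper's two new techniques):

```python
# Hypothetical coarse binning of Phred quality scores: each score is
# replaced by its bin's representative, shrinking the symbol alphabet
# so a downstream entropy coder compresses the stream far better.
BINS = [(0, 1, 0), (2, 9, 6), (10, 19, 15), (20, 24, 22),
        (25, 29, 27), (30, 34, 33), (35, 39, 37), (40, 93, 40)]

def bin_quality(scores):
    """Map each Phred score to the representative of its bin."""
    out = []
    for s in scores:
        for lo, hi, rep in BINS:
            if lo <= s <= hi:
                out.append(rep)
                break
    return out

qual = [38, 40, 12, 27, 5]  # toy per-base scores
print(bin_quality(qual))
```

The fidelity criterion here is the bin width: wider bins mean better compression but a larger worst-case error in each stored score.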
Spectral Distortion in Lossy Compression of Hyperspectral Data
Directory of Open Access Journals (Sweden)
Bruno Aiazzi
2012-01-01
Distortion allocation varying with wavelength in lossy compression of hyperspectral imagery is investigated, with the aim of minimizing the spectral distortion between original and decompressed data. The absolute angular error, or spectral angle mapper (SAM), is used to quantify spectral distortion, while radiometric distortions are measured by maximum absolute deviation (MAD) for near-lossless methods (for example, differential pulse code modulation, DPCM) or mean-squared error (MSE) for lossy methods (for example, spectral decorrelation followed by JPEG 2000). Two strategies of interband distortion allocation are compared: given a target average bit rate, distortion may either be held constant with wavelength, or be allocated proportionally to the noise level of each band, according to the virtually lossless protocol. Comparisons with the uncompressed originals show that the average SAM of radiance spectra is minimized by constant distortion allocation to radiance data. However, variable distortion allocation according to the virtually lossless protocol yields significantly lower SAM for reflectance spectra obtained from compressed radiance data, when compared with constant distortion allocation at the same compression ratio.
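The SAM metric used above is simply the angle between an original and a decompressed spectrum treated as vectors. A sketch of its computation (the spectra are toy values, not data from the paper):

```python
import math

def spectral_angle(x, y):
    """Spectral angle mapper (SAM): angle in radians between two spectra
    viewed as vectors; 0 means identical spectral shape."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    # Clamp to guard against rounding slightly outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (nx * ny))))

original = [0.12, 0.34, 0.50, 0.41]      # toy radiance spectrum
decompressed = [0.13, 0.33, 0.51, 0.40]  # after hypothetical lossy coding
print(spectral_angle(original, decompressed))
```

Because SAM depends only on direction and not magnitude, it isolates spectral-shape distortion from purely radiometric (scaling) error, which is why the paper pairs it with MAD or MSE.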
StirMark Benchmark: audio watermarking attacks based on lossy compression
Steinebach, Martin; Lang, Andreas; Dittmann, Jana
2002-04-01
StirMark Benchmark is a well-known evaluation tool for watermarking robustness, and additional attacks are added to it continuously. To enable application-based evaluation, in our paper we address attacks against audio watermarks based on lossy audio compression algorithms, to be included in the test environment. We discuss the effect of different lossy compression algorithms like MPEG-2 audio Layer 3, Ogg or VQF on a selection of audio test data. Our focus is on changes in the basic characteristics of the audio data, such as spectrum or average power, and on the removal of embedded watermarks. Furthermore, we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms, or (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psychoacoustic model has to be applied. Our model can be used for audio watermarking evaluation in numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.
Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing
Directory of Open Access Journals (Sweden)
Liantao Wu
2015-08-01
Reliable data transmission over a lossy communication link is expensive due to the overhead of error protection. For signals that have inherent sparse structure, compressive sensing (CS) can be applied to facilitate efficient sparse signal transmission over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal is reconstructed from the lossy transmission results using a CS-based reconstruction method at the receiving end. The impact of packet length on transmission efficiency under different channel conditions is discussed, and interleaving is incorporated to mitigate the effect of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, with very favorable results in terms of both the accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission via lossy links.
Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data
Energy Technology Data Exchange (ETDEWEB)
Di, Sheng; Cappello, Franck
2018-01-01
Since today's scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of a shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime to maximize the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor always keeps the compression errors within the user-specified error bounds. Most importantly, our optimization improves the compression factor effectively, by up to 49% for hard-to-compress data sets, with similar compression/decompression time cost.
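The XOR-leading-zero idea mentioned above can be seen in a few lines of code: two nearby doubles share a long common bit prefix, so their XOR starts with many zeros and only a short residual needs storing. This toy sketch counts those zeros; it is not the authors' optimized implementation, which additionally shifts the data by an offset chosen to lengthen the shared prefix:

```python
import struct

def xor_leading_zeros(a, b):
    """Count leading zero bits in the XOR of two IEEE-754 doubles.
    More leading zeros means a longer shared bit prefix, so the
    residual between consecutive values needs fewer stored bits."""
    ia = struct.unpack('<Q', struct.pack('<d', a))[0]
    ib = struct.unpack('<Q', struct.pack('<d', b))[0]
    x = ia ^ ib
    return 64 if x == 0 else 64 - x.bit_length()

print(xor_leading_zeros(1.0, 1.0))        # identical values: all 64 bits agree
print(xor_leading_zeros(1.0, 1.0000001))  # close values: long shared prefix
print(xor_leading_zeros(1.0, 2.0))        # different exponents: short prefix
```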
Using off-the-shelf lossy compression for wireless home sleep staging.
Lan, Kun-Chan; Chang, Da-Wei; Kuo, Chih-En; Wei, Ming-Zhi; Li, Yu-Hung; Shaw, Fu-Zen; Liang, Sheng-Fu
2015-05-15
Recently, there has been increasing interest in the development of wireless home sleep staging systems that allow the patient to be monitored remotely while remaining in the comfort of their home. However, transmitting the large amount of polysomnography (PSG) data over the Internet is an important issue that needs to be considered. In this work, we aim to reduce the amount of PSG data that has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to classifying sleep stages. We examine the effects of off-the-shelf lossy compression on an all-night PSG dataset from 20 healthy subjects, in the context of automated sleep staging. The popular compression method Set Partitioning in Hierarchical Trees (SPIHT) was used, and a range of compression levels was selected in order to compress the signals with various degrees of loss. In addition, a rule-based automatic sleep staging method was used to classify the sleep stages. Considering the criteria of clinical usefulness, the experimental results show that the system can achieve more than 60% energy saving while maintaining high accuracy (>84%) in classifying sleep stages when a lossy compression algorithm such as SPIHT is used. As far as we know, our study is the first to focus on how much loss can be tolerated in compressing complex multi-channel PSG data for sleep analysis. We demonstrate the feasibility of using lossy SPIHT compression for wireless home sleep staging. Copyright © 2015 Elsevier B.V. All rights reserved.
Progress with lossy compression of data from the Community Earth System Model
Xu, H.; Baker, A.; Hammerling, D.; Li, S.; Clyne, J.
2017-12-01
Climate models, such as the Community Earth System Model (CESM), generate massive quantities of data, particularly when run at high spatial and temporal resolutions. The burden of storage is further exacerbated by creating large ensembles, generating large numbers of variables, outputting at high frequencies, and duplicating data archives (to protect against disk failures). Applying lossy compression methods to CESM datasets is an attractive means of reducing data storage requirements, but ensuring that the loss of information does not negatively impact science objectives is critical. In particular, test methods are needed to evaluate whether critical features (e.g., extreme values and spatial and temporal gradients) have been preserved and to boost scientists' confidence in the lossy compression process. We will provide an overview of our progress in applying lossy compression to CESM output and describe our unique suite of metric tests that evaluate the impact of information loss. Further, we will describe our process for choosing an appropriate compression algorithm (and its associated parameters) given the diversity of CESM data (e.g., variables may be constant, smooth, change abruptly, contain missing values, or have large ranges). Traditional compression algorithms, such as those used for images, are not necessarily ideally suited for floating-point climate simulation data, and different methods may have different strengths and be more effective for certain types of variables than others. We will discuss our progress towards our ultimate goal of developing an automated multi-method parallel approach for compression of climate data that both maximizes data reduction and minimizes the impact of data loss on science results.
Design of a receiver operating characteristic (ROC) study of 10:1 lossy image compression
Collins, Cary A.; Lane, David; Frank, Mark S.; Hardy, Michael E.; Haynor, David R.; Smith, Donald V.; Parker, James E.; Bender, Gregory N.; Kim, Yongmin
1994-04-01
The digital archiving system at Madigan Army Medical Center (MAMC) uses a 10:1 lossy data compression algorithm for most forms of computed radiography. A systematic study on the potential effect of lossy image compression on patient care has been initiated with a series of studies focused on specific diagnostic tasks. The studies are based upon the receiver operating characteristic (ROC) method of analysis for diagnostic systems. The null hypothesis is that observer performance with approximately 10:1 compressed and decompressed images is not different from using original, uncompressed images for detecting subtle pathologic findings seen on computed radiographs of bone, chest, or abdomen, when viewed on a high-resolution monitor. Our design involves collecting cases from eight pathologic categories. Truth is determined by committee using confirmatory studies performed during routine clinical practice whenever possible. Software has been developed to aid in case collection and to allow reading of the cases for the study using stand-alone Siemens Litebox workstations. Data analysis uses two methods, ROC analysis and free-response ROC (FROC) methods. This study will be one of the largest ROC/FROC studies of its kind and could benefit clinical radiology practice using PACS technology. The study design and results from a pilot FROC study are presented.
The effects of lossy compression on diagnostically relevant seizure information in EEG signals.
Higgins, G; McGinley, B; Faul, S; McEvoy, R P; Glavin, M; Marnane, W P; Jones, E
2013-01-01
This paper examines the effects of compression on EEG signals, in the context of automated detection of epileptic seizures. Specifically, it examines the use of lossy compression on EEG signals in order to reduce the amount of data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to diagnosing epileptic seizures. Two popular compression methods, JPEG2000 and SPIHT, were used. A range of compression levels was selected for both algorithms in order to compress the signals with varying degrees of loss. This compression was applied to the database of epileptiform data provided by the University of Freiburg, Germany. The real-time EEG analysis for event detection automated seizure detection system was used in place of a trained clinician for scoring the reconstructed data. Results demonstrate that compression by a factor of up to 120:1 can be achieved, with minimal loss in seizure detection performance as measured by the area under the receiver operating characteristic curve of the seizure detection system.
Lossy image compression for digital medical imaging systems
Wilhelm, Paul S.; Haynor, David R.; Kim, Yongmin; Nelson, Alan C.; Riskin, Eve A.
1990-07-01
Image compression at rates of 10:1 or greater could make PACS much more responsive and economically attractive. This paper describes a protocol for subjective and objective evaluation of the fidelity of compressed/decompressed images to the originals and presents the results of its application to four representative and promising compression methods. The methods examined are predictive pruned tree-structured vector quantization, fractal compression, the discrete cosine transform with equal weighting of block bit allocation, and the discrete cosine transform with human visual system weighting of block bit allocation. Vector quantization is theoretically capable of producing the best compressed images, but has proven difficult to implement effectively. It has the advantage that it can reconstruct images quickly through a simple lookup table. Its disadvantages are that codebook training is required, the method is computationally intensive, and achieving optimum performance would require prohibitively long vector dimensions. Fractal compression is a relatively new compression technique, but has produced satisfactory results while being computationally simple; it is fast at both image compression and image reconstruction. Discrete cosine transform techniques reproduce images well, but have traditionally been hampered by the need for intensive computing to compress and decompress images. A protocol was developed for side-by-side observer comparison of reconstructed images with originals. Three 1024 X 1024 CR (Computed Radiography) images and two 512 X 512 X-ray CT images were viewed at six bit rates (0.2, 0.4, 0.6, 0.9, 1.2, and 1.5 bpp for CR; 1.0, 1.3, 1.6, 1.9, 2.2, and 2.5 bpp for X-ray CT) by nine radiologists at the University of Washington Medical Center. The CR images were viewed on a Pixar II Megascan (2560 X 2048) monitor and the CT images on a Sony (1280 X 1024) monitor. The radiologists' subjective evaluations of image fidelity were compared to
A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images
Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo
2007-03-01
Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages over other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding and an embedded bit-stream. However, it is still difficult to find an objective method for evaluating the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluating image quality using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We developed a prototype CAD system to classify these images into benign and malignant cases, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios from lossless to lossy, used the CAD system to classify the cases at each compression ratio, and then compared the ROC curves obtained from the CAD classification results. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of the input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases with increasing compression ratio, with small fluctuations.
An Evaluation Framework for Lossy Compression of Genome Sequencing Quality Values.
Alberti, Claudio; Daniels, Noah; Hernaez, Mikel; Voges, Jan; Goldfeder, Rachel L; Hernandez-Lopez, Ana A; Mattavelli, Marco; Berger, Bonnie
2016-01-01
This paper provides the specification and an initial validation of an evaluation framework for the comparison of lossy compressors of genome sequencing quality values. The goal is to define reference data, test sets, tools and metrics that shall be used to evaluate the impact of lossy compression of quality values on human genome variant calling. The functionality of the framework is validated referring to two state-of-the-art genomic compressors. This work has been spurred by the current activity within the ISO/IEC SC29/WG11 technical committee (a.k.a. MPEG), which is investigating the possibility of starting a standardization activity for genomic information representation.
Lossy compression of TPC data and trajectory tracking efficiency for the ALICE experiment
International Nuclear Information System (INIS)
Nicolaucig, A.; Ivanov, M.; Mattavelli, M.
2003-01-01
In this paper a quasi-lossless algorithm for the on-line compression of the data generated by the Time Projection Chamber (TPC) detector of the ALICE experiment at CERN is described. The algorithm is based on a lossy source code modeling technique, i.e. it is based on a source model which is lossy if samples of the TPC signal are considered one by one; conversely, the source model is lossless or quasi-lossless if certain physical quantities of main interest for the experiment are considered. These quantities are the area and the location of the center of mass of each TPC signal pulse, representing the pulse charge and the time localization of the pulse. To evaluate the consequences of the error introduced by the lossy compression process, the results of the trajectory tracking algorithms that process the data off-line after the experiment are analyzed, in particular with respect to their sensitivity to the noise introduced by the compression. Two different versions of these off-line algorithms, performing cluster finding and particle tracking, are described, and the results on how they are affected by the lossy compression are reported. Entropy coding can be applied to the set of events defined by the source model to reduce the bit rate to the corresponding source entropy. Using TPC data simulated according to the expected ALICE TPC performance, the compression algorithm achieves a data reduction to between 34.2% and 23.7% of the original data rate, depending on the desired precision of the pulse center of mass. The number of operations per input symbol required to implement the algorithm is relatively low, so a real-time implementation of the compression process embedded in the TPC data acquisition chain using low-cost integrated electronics is a realistic option for reducing the data storage cost of the ALICE experiment.
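The two quantities the source model preserves, pulse area and time center of mass, are straightforward to compute from the digitized samples. A minimal sketch with an invented ADC pulse (the actual ALICE algorithm runs on-line inside the DAQ chain, which this example does not attempt to model):

```python
def pulse_summary(samples, t0=0):
    """Area (total charge) and center of mass (time centroid)
    of a digitized pulse given its ADC samples."""
    area = sum(samples)
    if area == 0:
        return 0, None
    centroid = t0 + sum(i * s for i, s in enumerate(samples)) / area
    return area, centroid

adc = [2, 9, 20, 14, 5]  # invented pulse shape in ADC counts
area, com = pulse_summary(adc)
print(area, com)  # charge and fractional time-bin position of the pulse
```

A per-sample lossy coder that preserves these two moments is exactly what makes the scheme "quasi-lossless" with respect to the physics of interest.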
Impact of lossy compression on diagnostic accuracy of radiographs for periapical lesions
Eraso, Francisco E.; Analoui, Mostafa; Watson, Andrew B.; Rebeschini, Regina
2002-01-01
OBJECTIVES: The purpose of this study was to evaluate lossy Joint Photographic Experts Group (JPEG) compression for endodontic pretreatment digital radiographs. STUDY DESIGN: Fifty clinical charge-coupled device-based digital radiographs depicting periapical areas were selected. Each image was compressed at compression ratios of 2, 4, 8, 16, 32, 48, and 64. One root per image was marked for examination. Images were randomized and viewed by four clinical observers under standardized viewing conditions. Each observer read the image set three times, with at least two weeks between readings. Three pre-selected sites per image (mesial, distal, apical) were scored on a five-point confidence scale. A panel of three examiners scored the uncompressed images, producing a consensus score for each site. The consensus score was used as the baseline for assessing the impact of lossy compression on the diagnostic value of the images. The mean absolute error between consensus and observer scores was computed for each observer, site, and reading session. RESULTS: Balanced one-way analysis of variance for all observers indicated that, for compression ratios of 48 and 64, there was a significant difference between the mean absolute errors of uncompressed and compressed images (P < .05). After converting the five-point scores to two-level diagnostic values, the diagnostic accuracy was strongly correlated (R(2) = 0.91) with the compression ratio. CONCLUSION: The results of this study suggest that high compression ratios can have a severe impact on the diagnostic quality of digital radiographs for the detection of periapical lesions.
The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations
Orf, L.
2017-12-01
In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck where I/O can dominate wallclock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces the amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and includes compression options including ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain. We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress much more readily.
Recent advances in lossy compression of scientific floating-point data
Lindstrom, P.
2017-12-01
With a continuing exponential trend in supercomputer performance, ever larger data sets are being generated through numerical simulation. Bandwidth and storage capacity are, however, not keeping pace with this increase in data size, causing significant data movement bottlenecks in simulation codes and substantial monetary costs associated with archiving vast volumes of data. Worse yet, ever smaller fractions of data generated can be stored for further analysis, where scientists frequently rely on decimating or averaging large data sets in time and/or space. One way to mitigate these problems is to employ data compression to reduce data volumes. However, lossless compression of floating-point data can achieve only very modest size reductions on the order of 10-50%. We present ZFP and FPZIP, two state-of-the-art lossy compressors for structured floating-point data that routinely achieve one to two orders of magnitude reduction with little to no impact on the accuracy of visualization and quantitative data analysis. We provide examples of the use of such lossy compressors in climate and seismic modeling applications to effectively accelerate I/O and reduce storage requirements. We further discuss how the design decisions behind these and other compressors impact error distributions and other statistical and differential properties, including derived quantities of interest relevant to each science application.
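The error-bounded mode such compressors expose can be illustrated with the simplest possible scheme: uniform scalar quantization with step 2·tol guarantees a maximum absolute error of tol. This is only a sketch of the error-bound contract, not the actual ZFP or FPZIP algorithms (which use block transforms and predictive coding, respectively):

```python
import numpy as np

def quantize(data, tol):
    """Map floats to integers so the round-trip error is at most tol."""
    step = 2.0 * tol               # rounding error is at most step/2 == tol
    return np.round(data / step).astype(np.int64), step

def dequantize(q, step):
    return q.astype(np.float64) * step

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)
q, step = quantize(x, tol=1e-3)
xr = dequantize(q, step)

assert np.max(np.abs(x - xr)) <= 1e-3   # the promised absolute-error bound
```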
Color image lossy compression based on blind evaluation and prediction of noise characteristics
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena
2011-03-01
The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner. Characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on prediction of noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the parameters of the transformations the image will be subject to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode. However, it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on many real-life color images acquired by digital cameras and are shown to provide more than a two-fold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.
The study of diagnostic accuracy of chest nodules by using different compression methods
International Nuclear Information System (INIS)
Liang Zhigang; Kuncheng, L.I.; Zhang Jinghong; Liu Shuliang
2005-01-01
Background: The purpose of this study was to compare the diagnostic accuracy of small nodules in the chest by using different compression methods. Method: Two radiologists, each with 5 years of experience, interpreted 39 chest images twice, using lossless and lossy compression methods. The time interval was 3 weeks. Each time the radiologists interpreted one kind of compressed image. The image browser used the Unisight software provided by the Atlastiger Company in Shanghai. The interpretation results were analyzed by the ROCKIT software and the ROC curves were plotted with Excel 2002. Results: In studies of receiver operating characteristics for scoring the presence or absence of nodules, the images compressed with the lossy method showed no statistical difference compared with the images compressed with the lossless method. Conclusion: The diagnostic accuracy for chest nodules using the lossless and lossy compression methods showed no significant difference; the lossy compression method can be used to transmit and archive chest images with nodules.
García, Aday; Santos, Lucana; López, Sebastián.; Callicó, Gustavo M.; Lopez, Jose F.; Sarmiento, Roberto
2014-05-01
Efficient onboard satellite hyperspectral image compression represents a necessity and a challenge for current and future space missions. Therefore, it is mandatory to provide hardware implementations for this type of algorithm in order to achieve the constraints required for onboard compression. In this work, we implement the Lossy Compression for Exomars (LCE) algorithm on an FPGA by means of high-level synthesis (HLS) in order to shorten the design cycle. Specifically, we use the CatapultC HLS tool to obtain a VHDL description of the LCE algorithm from C-language specifications. Two different approaches are followed for HLS: on one hand, introducing the whole C-language description in CatapultC, and on the other hand, splitting the C-language description into functional modules to be implemented independently with CatapultC, connecting and controlling them by an RTL description code without HLS. In both cases the goal is to obtain an FPGA implementation. We explain the several changes applied to the original C-language source code in order to optimize the results obtained by CatapultC for both approaches. Experimental results show low area occupancy of less than 15% for an SRAM-based Virtex-5 FPGA and a maximum frequency above 80 MHz. Additionally, the LCE compressor was implemented on an RTAX2000S antifuse-based FPGA, showing an area occupancy of 75% and a frequency around 53 MHz. All these serve to demonstrate that the LCE algorithm can be efficiently executed on an FPGA onboard a satellite. A comparison between both implementation approaches is also provided. The performance of the algorithm is finally compared with implementations on other technologies, specifically a graphics processing unit (GPU) and a single-threaded CPU.
Sayood, K.; Chen, Y. C.; Wang, X.
1992-01-01
During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.
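The lossy + lossless idea, a coarse layer for browsing plus a residual layer that restores the original exactly, can be sketched in a few lines. This is a generic bit-plane-splitting illustration, not the report's specific DCT-based scheme:

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

coarse = (img >> 2) << 2      # lossy "browse" layer: top 6 bits per pixel
residual = img - coarse       # values in 0..3: 2 bits per pixel, sent on demand

assert residual.max() <= 3
assert np.array_equal(coarse + residual, img)   # lossless restoration
```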
Reducing disk storage of full-3D seismic waveform tomography (F3DT) through lossy online compression
Lindstrom, Peter; Chen, Po; Lee, En-Jui
2016-08-01
Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and compute the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may potentially open up the possibilities of the wide adoption of F3DT-SI in routine seismic tomography practices in the near future.
Robust steganographic method utilizing properties of MJPEG compression standard
Directory of Open Access Journals (Sweden)
Jakub Oravec
2015-06-01
This article presents the design of a steganographic method which uses a video container as cover data. The video track was recorded by a webcam and further encoded with the MJPEG compression standard. The proposed method also takes into account the effects of lossy compression. The embedding process is realized by switching the places of transform coefficients, which are computed by the Discrete Cosine Transform. The article discusses the possibilities, the techniques used, and the advantages and drawbacks of the chosen solution. The results are presented at the end of the article.
A singular-value method for reconstruction of nonradial and lossy objects.
Jiang, Wei; Astheimer, Jeffrey; Waag, Robert
2012-03-01
Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.
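The reduced-rank representation at the heart of these algorithms is a truncated SVD; by the Eckart-Young theorem, the best rank-k approximation error in spectral norm equals the (k+1)-th singular value. A generic sketch on a random matrix standing in for a discretized scattering operator:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 50))          # stand-in for a discretized operator

U, s, Vt = np.linalg.svd(A)
k = 10
Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]   # reduced-rank representation

# Eckart-Young: spectral-norm error of the best rank-k approximation is s[k]
assert np.isclose(np.linalg.norm(A - Ak, 2), s[k])
```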
Adiabatic passage for a lossy two-level quantum system by a complex time method
International Nuclear Information System (INIS)
Dridi, G; Guérin, S
2012-01-01
Using a complex time method with the formalism of Stokes lines, we establish a generalization of the Davis–Dykhne–Pechukas formula which gives in the adiabatic limit the transition probability of a lossy two-state system driven by an external frequency-chirped pulse-shaped field. The conditions that allow this generalization are derived. We illustrate the result with the dissipative Allen–Eberly and Rosen–Zener models. (paper)
Integrated Circuit Interconnect Lines on Lossy Silicon Substrate with Finite Element Method
Sarhan M. Musa,; Matthew N. O. Sadiku
2014-01-01
The silicon substrate has a significant effect on the inductance parameter of a lossy interconnect line on an integrated circuit. It is essential to take this into account in determining the transmission line electrical parameters. In this paper, a new quasi-TEM capacitance and inductance analysis of multiconductor multilayer interconnects is successfully demonstrated using the finite element method (FEM). We specifically illustrate the electrostatic modeling of single and coupled in...
Boiler: lossy compression of RNA-seq alignments using coverage vectors.
Pritt, Jacob; Langmead, Ben
2016-09-19
We describe Boiler, a new software tool for compressing and querying large collections of RNA-seq alignments. Boiler discards most per-read data, keeping only a genomic coverage vector plus a few empirical distributions summarizing the alignments. Since most per-read data is discarded, the storage footprint is often much smaller than that achieved by other compression tools. Despite this, the most relevant per-read data can be recovered; we show that Boiler compression has only a slight negative impact on results given by downstream tools for isoform assembly and quantification. Boiler also allows the user to pose fast and useful queries without decompressing the entire file. Boiler is free open source software available from github.com/jpritt/boiler. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
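A genomic coverage vector of the kind Boiler keeps can be computed from alignment intervals with a difference array followed by a prefix sum; a minimal sketch with hypothetical toy data (half-open [start, end) intervals):

```python
def coverage_vector(genome_len, alignments):
    """Per-base read depth from (start, end) half-open alignment intervals."""
    diff = [0] * (genome_len + 1)
    for start, end in alignments:
        diff[start] += 1          # a read begins covering here
        diff[end] -= 1            # ...and stops covering here
    cov, depth = [], 0
    for d in diff[:genome_len]:   # prefix sum turns deltas into depths
        depth += d
        cov.append(depth)
    return cov

# Two overlapping reads on a 6-base toy genome
assert coverage_vector(6, [(0, 3), (2, 5)]) == [1, 1, 2, 1, 1, 0]
```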
A Lossy Counting-Based State of Charge Estimation Method and Its Application to Electric Vehicles
Directory of Open Access Journals (Sweden)
Hong Zhang
2015-12-01
Estimating the residual capacity or state-of-charge (SoC) of commercial batteries on-line, without destroying them or interrupting the power supply, is quite a challenging task for electric vehicle (EV) designers. Many Coulomb counting-based methods have been used to calculate the remaining capacity in EV batteries or other portable devices. The main disadvantages of these methods are the cumulative error and the time-varying Coulombic efficiency, which are greatly influenced by the operating state (SoC, temperature and current). To deal with this problem, we propose a lossy counting-based Coulomb counting method for estimating the available capacity or SoC. The initial capacity of the tested battery is obtained from the open circuit voltage (OCV). The charging/discharging efficiencies, used for compensating the Coulombic losses, are calculated by the lossy counting-based method. The measurement drift, resulting from the current sensor, is amended with the distorted Coulombic efficiency matrix. Simulations and experimental results show that the proposed method is both effective and convenient.
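Plain Coulomb counting, the baseline the paper improves upon, integrates current over time against the nominal capacity; the cumulative-error problem arises because every sample's sensor bias and efficiency error is accumulated. A minimal sketch (the sign convention and efficiency handling are illustrative assumptions, not the paper's formulation):

```python
def soc_coulomb_counting(soc0, capacity_ah, currents_a, dt_s, eff=1.0):
    """Discrete Coulomb counting; positive current means discharge."""
    soc = soc0
    for i_a in currents_a:
        soc -= eff * i_a * dt_s / (capacity_ah * 3600.0)
    return soc

# 10 A constant discharge for one hour drains half of a 20 Ah cell
soc = soc_coulomb_counting(soc0=1.0, capacity_ah=20.0,
                           currents_a=[10.0] * 3600, dt_s=1.0)
assert abs(soc - 0.5) < 1e-9
```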
Hyperspectral image compressing using wavelet-based method
Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng
2017-10-01
Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years. Compression of hyperspectral data cubes is an effective solution for these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method mainly consists of three steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the hyper correlation matrix of the hyperspectral images between different bands. Then a wavelet-based algorithm is applied to each subspace. Finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested by using the ISODATA classification method.
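The first step, grouping bands into subspaces by inter-band correlation, can be sketched with a greedy pass over the correlation matrix. The threshold and grouping rule here are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix = 500
scene = rng.standard_normal(n_pix)
# Four highly redundant bands (same scene plus small noise), two independent ones
bands = np.stack([scene + 0.05 * rng.standard_normal(n_pix) for _ in range(4)]
                 + [rng.standard_normal(n_pix) for _ in range(2)])

C = np.abs(np.corrcoef(bands))

# Greedy grouping: a band joins the current subspace if its correlation
# with that subspace's seed band exceeds a threshold
groups, seed = [], None
for b in range(bands.shape[0]):
    if seed is None or C[seed, b] < 0.9:
        groups.append([b])
        seed = b
    else:
        groups[-1].append(b)

assert groups[0] == [0, 1, 2, 3]   # the redundant bands form one subspace
```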
Ma, JiaLi; Zhang, TanTan; Dong, MingChui
2015-05-01
This paper presents a novel electrocardiogram (ECG) compression method for e-health applications by adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: first stage AFD executes efficient lossy compression with high fidelity; second stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from MIT-BIH arrhythmia benchmark database, the proposed method achieves averaged compression ratio (CR) of 17.6-44.5 and percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing forward the compression performance to an unexploited region. As such, this paper provides an attractive candidate of ECG compression method for pervasive e-health applications.
Double-compression method for biomedical images
Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana
2017-08-01
This paper describes a double compression method (DCM) for biomedical images. A comparison of compression factors among JPEG, PNG, and the developed DCM was carried out. The main purpose of the DCM is compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.
Directory of Open Access Journals (Sweden)
HU Zhijuan
2015-08-01
We study cosmological inflation models driven by a rolling tachyon field which has a Born-Infeld-type action. We derive the Hamilton-Jacobi equation for the cosmological dynamics of tachyon inflation and the mode equations for the scalar and tensor perturbations of the tachyon field and spacetime; a solution under the slow-roll condition is then given. In the end, a realistic model from string theory is discussed.
Modeling of Lossy Inductance in Moving-Coil Loudspeakers
DEFF Research Database (Denmark)
Kong, Xiao-Peng; Agerkvist, Finn T.; Zeng, Xin-Wu
2015-01-01
The electrical impedance of moving-coil loudspeakers is dominated by the lossy inductance in the high-frequency range. Using the equivalent electrical circuit method, a new model for the lossy inductance, based on separate functions for the magnitude and phase of the impedance, is presented.
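A widely used closed form for lossy (semi-)inductance is Z(ω) = K(jω)^n with 0 < n < 1, which gives magnitude Kω^n and a constant phase of nπ/2. The paper proposes separate magnitude and phase functions instead, so this is a related standard parameterization rather than the authors' model:

```python
import cmath
import math

def lossy_inductor_impedance(omega, K, n):
    """Semi-inductance model Z = K * (j*omega)**n, with 0 < n < 1."""
    return K * (1j * omega) ** n

z = lossy_inductor_impedance(omega=1000.0, K=0.01, n=0.7)

assert math.isclose(abs(z), 0.01 * 1000.0 ** 0.7)       # magnitude K * w**n
assert math.isclose(cmath.phase(z), 0.7 * math.pi / 2)  # constant phase n*pi/2
```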
Evaluation of mammogram compression efficiency
International Nuclear Information System (INIS)
Przelaskowski, A.; Surowski, P.; Kukula, A.
2005-01-01
Lossy image coding significantly improves performance over lossless methods, but a reliable control of diagnostic accuracy regarding compressed images is necessary. The acceptable range of compression ratios must be safe with respect to as many objective criteria as possible. This study evaluates the compression efficiency of digital mammograms in both a numerically lossless (reversible) and a lossy (irreversible) manner. Effective compression methods and concepts were examined to increase archiving and telediagnosis performance. Lossless compression, as a primary applicable tool for medical applications, was verified on a set of 131 mammograms. Moreover, nine radiologists participated in the evaluation of lossy compression of mammograms. Subjective rating of diagnostically important features produced a set of mean rates for each test image. The lesion detection test resulted in binary decision data analyzed statistically. The radiologists rated and interpreted malignant and benign lesions, representative pathology symptoms, and other structures susceptible to compression distortions contained in 22 original and 62 reconstructed mammograms. Test mammograms were collected in two radiology centers over three years and then selected according to diagnostic content suitable for an evaluation of compression effects. Lossless compression efficiency of the tested coders varied, but CALIC, JPEG-LS, and SPIHT performed the best. The evaluation of lossy compression effects on detection ability was based on ROC-like analysis. Assuming a two-sided significance level of p=0.05, the null hypothesis that lower bit rate reconstructions are as useful for diagnosis as the originals was false in sensitivity tests with 0.04 bpp mammograms. However, verification of the same hypothesis with 0.1 bpp reconstructions suggested their acceptance. Moreover, the 1 bpp reconstructions were rated very similarly to the original mammograms in the diagnostic quality evaluation test.
Directory of Open Access Journals (Sweden)
A.A. Kobozeva
2016-09-01
The problem of detecting the results of digital image falsification performed by cloning is considered; cloning is one of the most frequently used tools implemented in all modern graphic editors. Aim: The aim of the work is the further development of the approach, proposed by the authors earlier, to the cloning-detection problem when the cloned image is saved in a lossy format. Materials and Methods: Further development of a new approach to detecting the results of cloning in a digital image is presented. The approach is based on accounting for small changes, during the compression process, in the volume of the cylindrical body with generatrix parallel to the OZ axis, bounded above by the plot of the function interpolating the brightness matrix of the analyzed image and bounded below by the XOY plane. Results: The proposed approach is adapted to the conditions of cloned-image compression with an arbitrary compression quality factor (compression ratio). The viability of the approach under cloned-image compression with algorithms other than the JPEG standard is shown: JPEG2000, and compression using low-rank approximations of the image matrix (matrix blocks). The results of a computational experiment are given. It is shown that the developed approach can be used to detect the results of cloning in digital video under lossy compression applied after the cloning process.
Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach
Danyali, Habibiollah; Mertins, Alfred
2011-01-01
In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in the hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications. PMID:22606653
Subband Coding Methods for Seismic Data Compression
Kiely, A.; Pollara, F.
1995-01-01
This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
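The core subband idea, splitting the signal into a coarse band sent first and a detail band sent as refinement, is easiest to see with the one-level Haar filter pair, which reconstructs perfectly. A generic sketch, not the paper's particular filter bank:

```python
def haar_analysis(x):
    """Split into a coarse (average) subband and a detail subband."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    det = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return avg, det

def haar_synthesis(avg, det):
    """Perfect reconstruction from the two subbands."""
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

x = [4.0, 2.0, 5.0, 7.0, 1.0, 3.0]
avg, det = haar_analysis(x)
# Progressive transmission: send `avg` first for a coarse view, `det` to refine
assert haar_synthesis(avg, det) == x
```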
Optimal design of lossy bandgap structures
DEFF Research Database (Denmark)
Jensen, Jakob Søndergaard
2004-01-01
The method of topology optimization is used to design structures for wave propagation with one lossy material component. Optimized designs for scalar elastic waves are presented for minimum wave transmission as well as for maximum wave energy dissipation. The structures that are obtained are of the 1D or 2D bandgap type, depending on the objective and the material parameters.
A MODIFIED EMBEDDED ZERO-TREE WAVELET METHOD FOR MEDICAL IMAGE COMPRESSION
Directory of Open Access Journals (Sweden)
T. Celine Therese Jenny
2010-11-01
The Embedded Zero-tree Wavelet (EZW) is a lossy compression method that allows for progressive transmission of a compressed image. By exploiting the natural zero-trees found in a wavelet-decomposed image, the EZW algorithm is able to encode large portions of the insignificant regions of a still image with a minimal number of bits. The upshot of this encoding is an algorithm that is able to achieve relatively high peak signal-to-noise ratios (PSNR) at high compression levels. Vector Quantization (VQ) can be performed as a post-processing step to reduce the coded file size; VQ reduces redundancy in the image data so that it can be stored or transmitted in an efficient form. It is demonstrated by experimental results that the proposed method outperforms several well-known lossless image compression techniques for still images that contain 256 colors or less.
Logarithmic compression methods for spectral data
Dunham, Mark E.
2003-01-01
A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
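The round trip described, forward transform to phase plus log magnitude and then the inverse transform, can be sketched with an FFT standing in for the log Gabor transform (which additionally applies log-frequency bandpass filters). With no thresholding, the reconstruction is exact up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(1024)

X = np.fft.rfft(x)
log_mag = np.log(np.abs(X) + 1e-300)   # logarithmic magnitude, as transmitted
phase = np.angle(X)

# Receiver side: expand the log magnitude and invert the transform
Xr = np.exp(log_mag) * np.exp(1j * phase)
xr = np.fft.irfft(Xr, n=len(x))

assert np.allclose(x, xr, atol=1e-8)
```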
Subjective evaluation of compressed image quality
Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin
1992-05-01
Lossy data compression generates distortion or error on the reconstructed image and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and lossy data compression methods, we have evaluated subjectively the quality of medical images compressed with two different methods, an intraframe and interframe coding algorithms. The evaluated raw data were analyzed statistically to measure interrater reliability and reliability of an individual reader. Also, the analysis of variance was used to identify which compression method is better statistically, and from what compression ratio the quality of a compressed image is evaluated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different compression ratios, original, 5:1, 10:1, and 15:1. The six readers agree more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement estimated interframe coding algorithm is significantly better in quality than that of the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.
2D-RBUC for efficient parallel compression of residuals
Đurđević, Đorđe M.; Tartalja, Igor I.
2018-02-01
In this paper, we present a method for lossless compression of residuals with efficient SIMD-parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent models of terrains. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed a data decompression algorithm for modern GPU architectures already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height-field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades a small efficiency degradation for a non-negligible compression-ratio benefit (measured at up to 91%).
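RBUC-style coding stores each small block of residuals with just enough bits for that block's largest value, so low-variability blocks cost almost nothing. A simplified sketch (actual bit packing omitted; only the per-block widths and total cost are modeled, and the block size and header width are illustrative assumptions):

```python
def encode_blocks(residuals, block=4):
    """Group non-negative residuals into blocks with a per-block bit width."""
    encoded = []
    for i in range(0, len(residuals), block):
        chunk = residuals[i:i + block]
        width = max(v.bit_length() for v in chunk)   # bits needed per value
        encoded.append((width, chunk))
    return encoded

def bits_used(encoded, header_bits=5):
    """Total cost: a small width header per block plus width bits per value."""
    return sum(header_bits + width * len(chunk) for width, chunk in encoded)

res = [0, 1, 0, 2, 300, 280, 310, 295]
enc = encode_blocks(res)

assert [w for w, _ in enc] == [2, 9]      # flat block: 2 bits; rough block: 9
assert bits_used(enc) < 16 * len(res)     # far below a fixed 16-bit encoding
```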
A Hybrid Data Compression Scheme for Power Reduction in Wireless Sensors for IoT.
Deepu, Chacko John; Heng, Chun-Huat; Lian, Yong
2017-04-01
This paper presents a novel data compression and transmission scheme for power reduction in Internet-of-Things (IoT) enabled wireless sensors. In the proposed scheme, data is compressed with both lossy and lossless techniques, so as to enable a hybrid transmission mode, support adaptive data rate selection and save power in wireless transmission. Applying the method to electrocardiogram (ECG) data, the data is first compressed using a lossy compression technique with a high compression ratio (CR). The residual error between the original data and the decompressed lossy data is preserved using entropy coding, enabling a lossless restoration of the original data when required. Average CRs of 2.1× and 7.8× were achieved for lossless and lossy compression, respectively, with the MIT/BIH database. The power reduction is demonstrated using a Bluetooth transceiver; transmission power is reduced to 18% of the original for lossy and 53% for lossless transmission, respectively. Options for hybrid transmission mode, adaptive rate selection and system-level power reduction make the proposed scheme attractive for IoT wireless sensors in healthcare applications.
Survey of numerical methods for compressible fluids
Energy Technology Data Exchange (ETDEWEB)
Sod, G A
1977-06-01
The finite difference methods of Godunov, Hyman, Lax-Wendroff (two-step), MacCormack, Rusanov, the upwind scheme, the hybrid scheme of Harten and Zwas, the antidiffusion method of Boris and Book, and the artificial compression method of Harten are compared with the random choice method, known as Glimm's method. The methods are used to integrate the one-dimensional equations of gas dynamics for an inviscid fluid. The results are compared and demonstrate that Glimm's method has several advantages. 16 figs., 4 tables.
Correlation and image compression for limited-bandwidth CCD.
Energy Technology Data Exchange (ETDEWEB)
Thompson, Douglas G.
2005-07-01
As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression in the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.
Lagrangian particle method for compressible fluid dynamics
Samulyak, Roman; Wang, Xingyu; Chen, Hsin-Chiang
2018-06-01
A new Lagrangian particle method for solving Euler equations for compressible inviscid fluid or gas flows is proposed. Similar to smoothed particle hydrodynamics (SPH), the method represents fluid cells with Lagrangian particles and is suitable for the simulation of complex free surface/multiphase flows. The main contributions of our method, which is different from SPH in all other aspects, are (a) significant improvement of approximation of differential operators based on a polynomial fit via weighted least squares approximation and the convergence of prescribed order, (b) a second-order particle-based algorithm that reduces to the first-order upwind method at local extremal points, providing accuracy and long term stability, and (c) more accurate resolution of entropy discontinuities and states at free interfaces. While the method is consistent and convergent to a prescribed order, the conservation of momentum and energy is not exact and depends on the convergence order. The method is generalizable to coupled hyperbolic-elliptic systems. Numerical verification tests demonstrating the convergence order are presented as well as examples of complex multiphase flows.
A new method for simplification and compression of 3D meshes
Attene, Marco
2001-01-01
We focus on the lossy compression of manifold triangle meshes. Our SwingWrapper approach partitions the surface of an original mesh M into simply-connected regions, called triangloids. We compute a new mesh M'. Each triangle of M' is a close approximation of a pseudo-triangle of M. By construction, the connectivity of M' is fairly regular and can be compressed to less than a bit per triangle using EdgeBreaker or one of the other recently developed schemes. The locations of the vertices of M' ...
Meshless Method for Simulation of Compressible Flow
Nabizadeh Shahrebabak, Ebrahim
In the present age, rapid development in computing technology and high-speed supercomputers has made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means of analyzing engineering problems and cases where experimental analysis is not practical. Many sophisticated and accurate numerical schemes exist to perform these simulations. The finite difference method (FDM) has been used to solve differential equation systems for decades. Additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these methods are mesh-based techniques, and mesh generation is an essential preprocessing step that discretizes the computational domain. However, when dealing with complex geometries these conventional mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies. In this study, a more robust yet simple numerical approach is used to simulate problems more easily, even for complex cases. The meshless, or meshfree, method is one such development that has become the focus of much research in recent years. The biggest advantage of meshfree methods is that they circumvent mesh generation. Many algorithms have been developed to make this method more popular and accessible, and they have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge is considerable. The most fundamental issue is the lack of conservation, which can be a source of unpredictable errors in the solution process. This problem is particularly evident in the presence of steep gradient regions and discontinuities, such as shocks that frequently occur in high-speed compressible flow
Edge-based compression of cartoon-like images with homogeneous diffusion
DEFF Research Database (Denmark)
Mainberger, Markus; Bruhn, Andrés; Weickert, Joachim
2011-01-01
Edges provide semantically important image features. In this paper a lossy compression method for cartoon-like images is presented, which is based on edge information. Edges together with some adjacent grey/colour values are extracted and encoded using a classical edge detector, binary compressio...
Superplastic boronizing of duplex stainless steel under dual compression method
International Nuclear Information System (INIS)
Jauhari, I.; Yusof, H.A.M.; Saidan, R.
2011-01-01
Highlights: → Superplastic boronizing. → A dual compression method has been developed. → Hard boride layer. → Bulk deformation significantly thickened the boronized layer. → New data on boronizing could expand the application of DSS in industries. - Abstract: In this work, SPB of duplex stainless steel (DSS) under compression is studied with the objective of producing an ultra-hard and thick boronized layer using a minimal amount of boron powder and in a much shorter boronizing time than the conventional process. SPB is conducted under two compression methods. In the first method, DSS is boronized using a minimal amount of boron powder under a fixed pre-strained compression condition maintained throughout the process. The compression strain is controlled so that plastic deformation is restricted to the surface asperities of the substrate in contact with the boron powder. In the second method, the boronized specimen from the first method is compressed superplastically up to a certain compressive strain under a certain strain rate condition. The second method is conducted without the presence of boron powder. Compared with the conventional boronizing process, this SPB under dual compression methods produces a much harder and thicker boronized layer using a minimal amount of boron powder.
Superplastic boronizing of duplex stainless steel under dual compression method
Energy Technology Data Exchange (ETDEWEB)
Jauhari, I., E-mail: iswadi@um.edu.my [Department of Mechanical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur (Malaysia); Yusof, H.A.M.; Saidan, R. [Department of Mechanical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur (Malaysia)
2011-10-25
Highlights: → Superplastic boronizing. → A dual compression method has been developed. → Hard boride layer. → Bulk deformation significantly thickened the boronized layer. → New data on boronizing could expand the application of DSS in industries. - Abstract: In this work, SPB of duplex stainless steel (DSS) under compression is studied with the objective of producing an ultra-hard and thick boronized layer using a minimal amount of boron powder and in a much shorter boronizing time than the conventional process. SPB is conducted under two compression methods. In the first method, DSS is boronized using a minimal amount of boron powder under a fixed pre-strained compression condition maintained throughout the process. The compression strain is controlled so that plastic deformation is restricted to the surface asperities of the substrate in contact with the boron powder. In the second method, the boronized specimen from the first method is compressed superplastically up to a certain compressive strain under a certain strain rate condition. The second method is conducted without the presence of boron powder. Compared with the conventional boronizing process, this SPB under dual compression methods produces a much harder and thicker boronized layer using a minimal amount of boron powder.
High thermal conductivity lossy dielectric using co-densified multilayer configuration
Tiegs, Terry N.; Kiggans, Jr., James O.
2003-06-17
Systems and methods are described for lossy dielectrics. A method of manufacturing a lossy dielectric includes providing at least one high-dielectric-loss layer, providing at least one high-thermal-conductivity, electrically insulating layer adjacent to the high-dielectric-loss layer, and then densifying the layers together. The systems and methods provide advantages because the resulting lossy dielectrics are less costly and more environmentally friendly than the available alternatives.
Blind compressed sensing image reconstruction based on alternating direction method
Liu, Qinan; Guo, Shuxu
2018-04-01
In order to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is found by alternating minimization. The proposed method addresses the difficulty of choosing a sparse basis in compressed sensing, suppresses noise and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with strong self-adaptability. The experimental results show that the proposed blind compressed sensing reconstruction algorithm can recover high-quality image signals under under-sampling conditions.
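A toy version of the alternating-minimization idea can be written in a few lines of NumPy. The hard-thresholding sparse step and least-squares dictionary step below are generic stand-ins under assumed shapes (measurements Y ≈ A · D · S with unknown dictionary D and sparse coefficients S), not the authors' algorithm:

```python
import numpy as np

def alternating_blind_cs(Y, A, n_atoms=8, sparsity=2, iters=30, seed=0):
    """Toy alternating minimization for Y ≈ A @ D @ S with unknown dictionary D
    and column-sparse coefficients S (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape

    def sparse_code(D):
        # least-squares fit, then keep only the `sparsity` largest entries per column
        S = np.linalg.lstsq(A @ D, Y, rcond=None)[0]
        idx = np.argsort(np.abs(S), axis=0)[:-sparsity, :]
        np.put_along_axis(S, idx, 0.0, axis=0)
        return S

    D = rng.standard_normal((n, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        S = sparse_code(D)                                   # S-step: sparse coefficients
        D = np.linalg.lstsq(A, Y @ np.linalg.pinv(S), rcond=None)[0]  # D-step
        norms = np.linalg.norm(D, axis=0)
        D /= np.where(norms > 0, norms, 1.0)                 # renormalize dictionary atoms
    S = sparse_code(D)                                       # final coefficients for final D
    return D @ S, D, S
```

Real blind-CS solvers add structural constraints on D to guarantee uniqueness; the sketch only conveys the alternation itself.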
Image splitting and remapping method for radiological image compression
Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.
1990-07-01
A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
Park, Sang-Sub
2014-01-01
The purpose of this study is to assess the difference in chest compression accuracy between a modified chest compression method using a smartphone application and the standardized traditional chest compression method. Of the 70 people who agreed to participate after completing the CPR curriculum, 64 took part (6 were absent). Participants were divided into a smartphone group (33 people), using the modified chest compression method, and a traditional group (31 people), using the standardized method. Both groups used the same manikins for practice and for evaluation. The smartphone group used two smartphone products (G, i) running the Android and iOS operating systems. Measurements were conducted from September 25th to 26th, 2012, and data were analyzed with the SPSS WIN 12.0 program. Compression depth was more appropriate (p < 0.01) in the traditional group (53.77 mm) than in the smartphone group (48.35 mm). The proportion of proper chest compressions was also higher (p < 0.05) in the traditional group (73.96%) than in the smartphone group (60.51%). The traditional group (3.83 points) likewise reported higher awareness of chest compression accuracy (p < 0.001) than the smartphone group (2.32 points). In an additional question asked only of the smartphone group, the main reasons given against the modified method were hand-back pain in the rescuer (48.5%) and unstable posture (21.2%).
Charge accumulation in lossy dielectrics: a review
DEFF Research Database (Denmark)
Rasmussen, Jørgen Knøster; McAllister, Iain Wilson; Crichton, George C
1999-01-01
At present, the phenomenon of charge accumulation in solid dielectrics is under intense experimental study. Using a field theoretical approach, we review the basis for charge accumulation in lossy dielectrics. Thereafter, this macroscopic approach is applied to planar geometries such that the mat...
Cosmological Particle Data Compression in Practice
Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.
2017-12-01
In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from focusing only on compression rates to including run-times and scalability. In recent years several compression techniques, both lossy and lossless, have become available for cosmological data. For both cases, this study aims to evaluate and compare state-of-the-art compression techniques for unstructured particle data. The study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
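A miniature version of such a rate/throughput evaluation is easy to run with Python's stdlib codecs standing in for the Blosc/XZ/FPZIP/ZFP toolkits studied here, and a synthetic random-walk coordinate array standing in for particle data:

```python
import bz2
import lzma
import random
import struct
import time
import zlib

def benchmark(codecs, payload):
    """Measure compression ratio and throughput for each codec on one payload."""
    results = {}
    for name, compress in codecs.items():
        t0 = time.perf_counter()
        blob = compress(payload)
        dt = time.perf_counter() - t0
        results[name] = {"ratio": len(payload) / len(blob),   # >1 means size reduced
                         "MB/s": len(payload) / dt / 1e6}
    return results

# Synthetic "particle" data: slowly varying float32 coordinates, packed as raw bytes.
random.seed(1)
coords, x = [], 0.0
for _ in range(50_000):
    x += random.uniform(-1e-3, 1e-3)
    coords.append(x)
payload = struct.pack(f"{len(coords)}f", *coords)

stats = benchmark({"zlib": zlib.compress,
                   "bz2": bz2.compress,
                   "lzma": lzma.compress}, payload)
```

The same harness extends to lossy codecs by additionally recording a reconstruction-error metric after decompression.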
Application of PDF methods to compressible turbulent flows
Delarue, B. J.; Pope, S. B.
1997-09-01
A particle method applying the probability density function (PDF) approach to turbulent compressible flows is presented. The method is applied to several turbulent flows, including the compressible mixing layer, and good agreement is obtained with experimental data. The PDF equation is solved using a Lagrangian/Monte Carlo method. To accurately account for the effects of compressibility on the flow, the velocity PDF formulation is extended to include thermodynamic variables such as the pressure and the internal energy. The mean pressure, the determination of which has been the object of active research over the last few years, is obtained directly from the particle properties. It is therefore not necessary to link the PDF solver with a finite-volume type solver. The stochastic differential equations (SDE) which model the evolution of particle properties are based on existing second-order closures for compressible turbulence, limited in application to low turbulent Mach number flows. Tests are conducted in decaying isotropic turbulence to compare the performances of the PDF method with the Reynolds-stress closures from which it is derived, and in homogeneous shear flows, at which stage comparison with direct numerical simulation (DNS) data is conducted. The model is then applied to the plane compressible mixing layer, reproducing the well-known decrease in the spreading rate with increasing compressibility. It must be emphasized that the goal of this paper is not as much to assess the performance of models of compressibility effects, as it is to present an innovative and consistent PDF formulation designed for turbulent inhomogeneous compressible flows, with the aim of extending it further to deal with supersonic reacting flows.
Investigating low-frequency compression using the Grid method
DEFF Research Database (Denmark)
Fereczkowski, Michal; Dau, Torsten; MacDonald, Ewen
2016-01-01
There is an ongoing discussion about whether the amount of cochlear compression in humans at low frequencies (below 1 kHz) is as high as that at higher frequencies. It is controversial whether the compression affects the slope of the off-frequency forward masking curves at those frequencies. Here, the Grid method with a 2-interval 1-up 3-down tracking rule was applied to estimate forward masking curves at two characteristic frequencies: 500 Hz and 4000 Hz. The resulting curves and the corresponding basilar membrane input-output (BM I/O) functions were found to be comparable to those reported in the literature. Moreover, slopes of the low-level portions of the BM I/O functions estimated at 500 Hz were examined, to determine whether the 500-Hz off-frequency forward masking curves were affected by compression. Overall, the collected data showed a trend confirming the compressive behaviour. However...
International Nuclear Information System (INIS)
Kang, Lae-Hyong; Lee, Dae-Oen; Han, Jae-Hung
2011-01-01
We introduce a new compression test method for piezoelectric materials to investigate changes in piezoelectric properties under compressive stress. Until now, compression tests of piezoelectric materials have generally been conducted using bulky piezoelectric ceramics and a pressure block. The conventional method using the pressure block for thin piezoelectric patches, which are used in unimorph or bimorph actuators, is prone to unwanted bending and buckling. In addition, due to the constrained boundaries at both ends, the observed piezoelectric behavior contains boundary effects. In order to avoid these problems, the proposed method employs two guide plates with an initial longitudinal tensile stress. By removing the tensile stress after bonding a piezoelectric material between the guide layers, longitudinal compressive stress is induced in the piezoelectric layer. Using the compression test specimens, two important properties that govern the actuation performance of the piezoelectric material, the piezoelectric strain coefficient and the elastic modulus, are measured to evaluate the effects of applied electric fields and re-poling. The results show that the piezoelectric strain coefficient d31 increases and the elastic modulus decreases when high voltage is applied to PZT5A, and that compression in the longitudinal direction decreases d31 but does not affect the elastic modulus. We also found that re-poling the piezoelectric material increases the elastic modulus, while d31 is not changed much (slightly increased).
Word aligned bitmap compression method, data structure, and apparatus
Energy Technology Data Exchange (ETDEWEB)
Wu, Kesheng; Shoshani, Arie; Otoo, Ekow
2004-12-14
The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is an efficient method for searching and performing logical, counting, and pattern-location operations on large datasets. The technique comprises a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which takes advantage of the target computing system's native word length. WAH is particularly well suited to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH-compressed bitmap index. Some commercial database products already include a version of a bitmap index, which could be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiency in constructing compressed bitmaps. Taken together, these techniques may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
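The word-aligned idea can be illustrated with a simplified encoder along the lines of the WAH scheme: 31 payload bits per 32-bit word, literal words (MSB = 0) and fill words (MSB = 1) that run-length-encode repeated uniform groups. This is a didactic sketch, not the patented implementation:

```python
WORD = 31  # payload bits carried per 32-bit code word, as in WAH

def wah_compress(bits):
    """Encode a bit list into 32-bit words: literals (MSB=0) and fills (MSB=1)."""
    bits = bits + [0] * (-len(bits) % WORD)                  # pad to 31-bit groups
    groups = [bits[i:i + WORD] for i in range(0, len(bits), WORD)]
    words, i = [], 0
    while i < len(groups):
        g = groups[i]
        if all(b == g[0] for b in g):                        # uniform group -> fill word
            run = 1
            while i + run < len(groups) and groups[i + run] == g:
                run += 1
            words.append((1 << 31) | (g[0] << 30) | run)     # fill bit + run length
            i += run
        else:                                                # mixed group -> literal word
            value = 0
            for b in g:
                value = (value << 1) | b
            words.append(value)
            i += 1
    return words

def wah_decompress(words):
    bits = []
    for w in words:
        if w >> 31:                                          # fill word
            fill, run = (w >> 30) & 1, w & ((1 << 30) - 1)
            bits.extend([fill] * (run * WORD))
        else:                                                # literal word
            bits.extend((w >> k) & 1 for k in range(WORD - 1, -1, -1))
    return bits
```

Because runs stay word-aligned, logical operations (AND/OR) can be performed directly on the compressed words without a full decompression pass, which is the source of WAH's query speed.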
Image quality (IQ) guided multispectral image compression
Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik
2016-05-01
Image compression is necessary for data transportation: it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats with corresponding compression algorithms, for example JPEG (DCT, the discrete cosine transform), JPEG 2000 (DWT, the discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW, Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method by analyzing IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image to the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
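The third step, choosing the most aggressive compression parameter that still meets an IQ target, can be illustrated at toy scale. Plain quantization stands in for a real codec, and an exhaustive parameter sweep stands in for the paper's regression models:

```python
import math

def psnr(orig, recon, peak=255):
    """Peak signal-to-noise ratio in dB for two equal-length pixel sequences."""
    mse = sum((o - r) ** 2 for o, r in zip(orig, recon)) / len(orig)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def pick_parameter(pixels, steps, target_psnr):
    """Return the largest quantization step (i.e. coarsest compression) whose
    reconstruction still meets the PSNR target."""
    best = None
    for step in sorted(steps):
        recon = [min(255, round(p / step) * step) for p in pixels]
        if psnr(pixels, recon) >= target_psnr:
            best = step          # coarser step, still acceptable quality
    return best
```

With a real codec, `step` would be the quality/ratio knob and `recon` the decompressed image; the selection logic is unchanged.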
Technical note: New table look-up lossless compression method ...
African Journals Online (AJOL)
Technical note: New table look-up lossless compression method based on binary index archiving. ... International Journal of Engineering, Science and Technology ... This paper intends to present a common use archiver, made up following the dictionary technique and using the index archiving method as a simple and ...
Novel 3D Compression Methods for Geometry, Connectivity and Texture
Siddeq, M. M.; Rodrigues, M. A.
2016-06-01
A large number of applications in medical visualization, games, engineering design, entertainment, heritage, e-commerce and so on require the transmission of 3D models over the Internet or over local networks. 3D data compression is an important requirement for fast data storage, access and transmission within bandwidth limitations. The Wavefront OBJ (object) file format is commonly used to share models due to its clear, simple design. Normally each OBJ file contains a large amount of data (e.g. vertices and triangulated faces, normals, texture coordinates and other parameters) describing the mesh surface. In this paper we introduce a new method to compress geometry, connectivity and texture coordinates by a novel Geometry Minimization Algorithm (GM-Algorithm) in connection with arithmetic coding. First, each vertex's (x, y, z) coordinates are encoded into a single value by the GM-Algorithm. Second, triangle faces are encoded by computing the differences between two adjacent vertex locations, which are compressed by arithmetic coding together with the texture coordinates. We demonstrate the method on large data sets, achieving compression ratios between 87% and 99% without reduction in the number of reconstructed vertices and triangle faces. The decompression step is based on a Parallel Fast Matching Search Algorithm (Parallel-FMS) to recover the structure of the 3D mesh. A comparative analysis of compression ratios is provided against a number of commonly used 3D file formats such as VRML, OpenCTM and STL, highlighting the performance and effectiveness of the proposed method.
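The connectivity-encoding step, taking differences between adjacent vertex indices and handing the result to an entropy coder, can be imitated with stdlib tools. Here zlib stands in for the arithmetic coder used by the authors:

```python
import struct
import zlib

def compress_faces(faces):
    """Delta-encode triangle vertex indices, then entropy-code the byte stream."""
    deltas, prev = [], 0
    for tri in faces:
        for v in tri:
            deltas.append(v - prev)   # neighbouring faces reuse nearby vertices,
            prev = v                  # so the deltas cluster around zero
    raw = struct.pack(f"{len(deltas)}i", *deltas)
    return zlib.compress(raw)

def decompress_faces(blob):
    """Invert the entropy coding and the delta encoding."""
    raw = zlib.decompress(blob)
    deltas = struct.unpack(f"{len(raw) // 4}i", raw)
    verts, prev = [], 0
    for d in deltas:
        prev += d
        verts.append(prev)
    return [tuple(verts[i:i + 3]) for i in range(0, len(verts), 3)]
```

On a triangle strip, where consecutive faces share most of their vertices, the delta stream is highly repetitive and compresses far below the raw 4 bytes per index.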
Compressible cavitation with stochastic field method
Class, Andreas; Dumond, Julien
2012-11-01
Non-linear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte Carlo codes based on Lagrangian particles or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic field method has been proposed, which solves pdf transport with Euler fields and eliminates the need to mix Euler and Lagrange techniques or to prescribe pdf shapes. In the present work, part of the PhD project 'Design and analysis of a Passive Outflow Reducer relying on cavitation', a first application of the stochastic field method to multi-phase flow, and in particular to cavitating flow, is presented. The application considered is a nozzle subjected to high-velocity flow, so that sheet cavitation is observed near the nozzle surface in the divergent section. It is demonstrated that the stochastic field formulation captures the wide range of pdf shapes present at different locations. The method is compatible with finite-volume codes, where all existing physical models available for Lagrange techniques, presumed pdf or binning methods can easily be extended to the stochastic field formulation.
Mammography image compression using Wavelet
International Nuclear Information System (INIS)
Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa
2004-01-01
Image compression plays an important role in many applications, such as medical imaging, videoconferencing, remote sensing, and document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray-scale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image size or image stream size is very large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless. The wavelet method used in this project is a lossless compression method, so the exact original mammography image data can be recovered. In this project, mammography images were digitized using a Vider Sierra Plus digitizer and then compressed with this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software was used to perform all of the calculations and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)
The Diagonal Compression Field Method using Circular Fans
DEFF Research Database (Denmark)
Hansen, Thomas
2006-01-01
is a modification of the traditional method, the modification consisting of the introduction of circular fan stress fields. To ensure proper behaviour at the service load, the chosen θ-value (θ = cot β, where β is the angle of the uniaxial concrete compression relative to the beam axis) should not be too large...
METHOD AND APPARATUS FOR INSPECTION OF COMPRESSED DATA PACKAGES
DEFF Research Database (Denmark)
2008-01-01
to be transferred over the data network. The method comprises the steps of: a) extracting payload data from the payload part of the package, b) appending the extracted payload data to a stream of data, c) probing the data package header so as to determine the compression scheme that is applied to the payload data...
Privacy of a lossy bosonic memory channel
Energy Technology Data Exchange (ETDEWEB)
Ruggeri, Giovanna [Dipartimento di Fisica, Universita di Lecce, I-73100 Lecce (Italy)]. E-mail: ruggeri@le.infn.it; Mancini, Stefano [Dipartimento di Fisica, Universita di Camerino, I-62032 Camerino (Italy)]. E-mail: stefano.mancini@unicam.it
2007-03-12
We study the security of the information transmission between two honest parties realized through a lossy bosonic memory channel when losses are captured by a dishonest party. We then show that entangled inputs can enhance the private information of such a channel, which, however, never exceeds that of unentangled inputs in the absence of memory.
Quantum optics of lossy asymmetric beam splitters
Uppu, Ravitej; Wolterink, Tom; Tentrup, Tristan Bernhard Horst; Pinkse, Pepijn Willemszoon Harry
2016-01-01
We theoretically investigate quantum interference of two single photons at a lossy asymmetric beam splitter, the most general passive 2×2 optical circuit. The losses in the circuit result in a non-unitary scattering matrix with a non-trivial set of constraints on the elements of the scattering
Combustion engine variable compression ratio apparatus and method
Lawrence,; Keith, E [Peoria, IL; Strawbridge, Bryan E [Dunlap, IL; Dutart, Charles H [Washington, IL
2006-06-06
An apparatus and method for varying a compression ratio of an engine having a block and a head mounted thereto. The apparatus and method includes a cylinder having a block portion and a head portion, a piston linearly movable in the block portion of the cylinder, a cylinder plug linearly movable in the head portion of the cylinder, and a valve located in the cylinder plug and operable to provide controlled fluid communication with the block portion of the cylinder.
The Diagonal Compression Field Method using Circular Fans
DEFF Research Database (Denmark)
Hansen, Thomas
2005-01-01
This paper presents a new design method, which is a modification of the diagonal compression field method, the modification consisting of the introduction of circular fan stress fields. The traditional method does not allow changes of the concrete compression direction throughout a given beam ... if equilibrium is strictly required. This is conservative, since it is not possible to fully utilize the concrete strength in regions with low shear stresses. The larger the inclination (the smaller the θ-value) of the uniaxial concrete stress, the more transverse shear reinforcement is needed; hence it would be optimal ... if the θ-value for a given beam could be set to a low value in regions with high shear stresses and thereafter increased in regions with low shear stresses. Thus the shear reinforcement would be reduced and the concrete strength utilized in a better way. In the paper it is shown how circular fan stress ...
Data compression considerations for detectors with local intelligence
International Nuclear Information System (INIS)
Garcia-Sciveres, M; Wang, X
2014-01-01
This note summarizes the outcome of discussions about how data compression considerations apply to tracking detectors with local intelligence. The method for analyzing data compression efficiency is taken from a previous publication and applied to module characteristics from the WIT2014 workshop. We explore local intelligence and coupled layer structures in the language of data compression. In this context the original intelligent tracker concept of correlating hits to find matches of interest and discard others is just a form of lossy data compression. We now explore how these features (intelligence and coupled layers) can be exploited for lossless compression, which could enable full readout at higher trigger rates than previously envisioned, or even triggerless
Rembrandt in Kadriorg Palace / Jüri Hain
Hain, Jüri, 1941-
2000-01-01
The history of the ceiling painting by an unknown master in the domed hall of Kadriorg Palace. The painting is based on Rembrandt's "Diana Bathing". The ceiling painting was executed after a copper engraving by Magdalena de Passe (1600-1638) made from Rembrandt's painting; Rembrandt in turn drew on two engravings by Antonio Tempesta (1555-1630), who drew on a painting by Otto van Veen (1556-1629)
Dyakonov surface waves in lossy metamaterials
Sorní Laserna, Josep; Naserpour, Mahin; Zapata Rodríguez, Carlos Javier; Miret Marí, Juan José
2015-01-01
We analyze the existence of localized waves in the vicinities of the interface between two dielectrics, provided one of them is uniaxial and lossy. We found two families of surface waves, one of them approaching the well-known Dyakonov surface waves (DSWs). In addition, a new family of wave fields exists which are tightly bound to the interface. Although its appearance is clearly associated with the dissipative character of the anisotropic material, the characteristic propagation length of su...
A GPU-accelerated implicit meshless method for compressible flows
Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng
2018-05-01
This paper develops a recently proposed GPU based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-racing conditions destabilizing numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions to apply boundary conditions, calculate time steps, evaluate residuals as well as advance and update the solution in the temporal space. A series of two- and three-dimensional test cases including compressible flows over single- and multi-element airfoils and a M6 wing are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis on the performance of the developed code reveals that the developed CPU based implicit meshless method is at least four to eight times faster than its explicit counterpart. The computational efficiency of the implicit method could be further improved by ten to fifteen times on the GPU.
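The rainbow coloring idea described above can be sketched as a greedy graph coloring: points sharing a color have no mutual data dependency, so each color group can be updated in parallel, color by color. The point-connectivity graph below is a small hypothetical example, not the paper's meshless point cloud.

```python
# Sketch of "rainbow coloring": greedily paint computational points so that
# no point shares a color with its neighbors; the LU-SGS sweep then proceeds
# color by color, with all points of one color processed in parallel.

def rainbow_coloring(neighbors):
    """Greedy graph coloring. neighbors maps point -> list of adjacent points."""
    color = {}
    for p in sorted(neighbors):
        used = {color[q] for q in neighbors[p] if q in color}
        c = 0
        while c in used:    # smallest color not used by any painted neighbor
            c += 1
        color[p] = c
    return color

# 1D chain of 6 points: each point depends on its left/right neighbors
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
colors = rainbow_coloring(nbrs)
assert all(colors[p] != colors[q] for p in nbrs for q in nbrs[p])
```

For this chain two colors suffice, so the sweep collapses into two fully parallel passes; unstructured point clouds typically need a few more colors.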
Quinary excitation method for pulse compression ultrasound measurements.
Cowell, D M J; Freear, S
2008-04-01
A novel switched excitation method for linear frequency modulated excitation of ultrasonic transducers in pulse compression systems is presented that is simple to realise, yet provides reduced signal sidelobes at the output of the matched filter compared to bipolar pseudo-chirp excitation. Pulse compression signal sidelobes are reduced through the use of simple amplitude tapering at the beginning and end of the excitation duration. Amplitude tapering using switched excitation is realised through the use of intermediate voltage switching levels, half that of the main excitation voltages. In total five excitation voltages are used creating a quinary excitation system. The absence of analogue signal generation and power amplifiers renders the excitation method attractive for applications with requirements such as a high channel count or low cost per channel. A systematic study of switched linear frequency modulated excitation methods with simulated and laboratory based experimental verification is presented for 2.25 MHz non-destructive testing immersion transducers. The signal to sidelobe noise level of compressed waveforms generated using quinary and bipolar pseudo-chirp excitation are investigated for transmission through a 0.5m water and kaolin slurry channel. Quinary linear frequency modulated excitation consistently reduces signal sidelobe power compared to bipolar excitation methods. Experimental results for transmission between two 2.25 MHz transducers separated by a 0.5m channel of water and 5% kaolin suspension shows improvements in signal to sidelobe noise power in the order of 7-8 dB. The reported quinary switched method for linear frequency modulated excitation provides improved performance compared to pseudo-chirp excitation without the need for high performance excitation amplifiers.
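As a rough illustration of the quinary idea, the sketch below builds a linear FM chirp restricted to five levels {-V, -V/2, 0, +V/2, +V}, with the half-amplitude levels acting as a crude taper at the burst edges. All frequencies, the duration, and the taper fraction are illustrative choices, not the paper's 2.25 MHz design.

```python
import math

def quinary_chirp(f0, f1, duration, fs, taper_frac=0.15):
    """Linear FM chirp quantized to the five levels -1, -0.5, 0, 0.5, 1."""
    n = int(duration * fs)
    out = []
    for i in range(n):
        t = i / fs
        # instantaneous phase of a linear frequency sweep from f0 to f1
        phase = 2 * math.pi * (f0 * t + (f1 - f0) * t * t / (2 * duration))
        s = math.sin(phase)
        # amplitude taper: only the intermediate +-V/2 levels near the edges
        limit = 0.5 if (i < taper_frac * n or i > (1 - taper_frac) * n) else 1.0
        out.append(max(-limit, min(limit, round(2 * s) / 2)))
    return out

sig = quinary_chirp(f0=1.5e6, f1=3.0e6, duration=10e-6, fs=50e6)
assert set(sig) <= {-1.0, -0.5, 0.0, 0.5, 1.0}
```

In hardware the five levels map directly onto switched supply rails, which is why no analogue signal generator or power amplifier is needed.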
Biometric and Emotion Identification: An ECG Compression Based Method
Directory of Open Access Journals (Sweden)
Susana Brás
2018-04-01
Full Text Available We present an innovative and robust solution to both biometric and emotion identification using the electrocardiogram (ECG). The ECG represents the electrical signal that comes from the contraction of the heart muscles, indirectly representing the flow of blood inside the heart, and it is known to convey a key that allows biometric identification. Moreover, due to its relationship with the nervous system, it also varies as a function of the emotional state. The use of information-theoretic data models, associated with data compression algorithms, allowed us to effectively compare ECG records and infer the person's identity, as well as the emotional state at the time of data collection. The proposed method does not require ECG wave delineation or alignment, which reduces preprocessing error. The method is divided into three steps: (1) conversion of the real-valued ECG record into a symbolic time-series, using a quantization process; (2) conditional compression of the symbolic representation of the ECG, using the symbolic ECG records stored in the database as reference; (3) identification of the ECG record class, using a 1-NN (nearest neighbor) classifier. We obtained over 98% accuracy in biometric identification, whereas in emotion recognition we attained over 90%. Therefore, the method adequately identifies the person and his/her emotion. Also, the proposed method is flexible and may be adapted to different problems by altering the templates used for training the model.
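The three steps above can be illustrated with an off-the-shelf compressor (zlib) standing in for the paper's information-theoretic data models; the conditional compression is approximated by concatenation, and the signals and class labels are toy data.

```python
import zlib

def quantize(signal, levels=8):
    """Step (1): map a real-valued record to a symbolic (byte) time-series."""
    lo, hi = min(signal), max(signal)
    step = (hi - lo) / levels or 1.0
    return bytes(min(levels - 1, int((v - lo) / step)) for v in signal)

def conditional_size(query, reference):
    """Step (2): extra bytes needed to compress the query given the reference."""
    return len(zlib.compress(reference + query, 9)) - len(zlib.compress(reference, 9))

def classify(query, database):
    """Step (3): 1-NN classification by smallest conditional compressed size."""
    return min(database, key=lambda rl: conditional_size(query, rl[0]))[1]

rec_a = quantize([0.1 * ((i * 7) % 13) for i in range(400)])
rec_b = quantize([0.2 * ((i * 3) % 5) for i in range(400)])
db = [(rec_a, "person_A"), (rec_b, "person_B")]
assert classify(rec_a, db) == "person_A"
```

A record compresses best against a reference that shares its statistical structure, which is why no wave delineation or alignment is needed.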
Medical Image Compression Based on Region of Interest, With Application to Colon CT Images
National Research Council Canada - National Science Library
Gokturk, Salih
2001-01-01
... in diagnostically important regions. This paper discusses a hybrid model of lossless compression in the region of interest, with high-rate, motion-compensated, lossy compression in other regions...
On the estimation method of compressed air consumption during pneumatic caisson sinking
平川, 修治; ヒラカワ, シュウジ; Shuji, HIRAKAWA
1990-01-01
There are several methods for estimating compressed air consumption during pneumatic caisson sinking, and estimates by these methods are required under the same conditions. In this paper, methods are proposed which can accurately estimate the compressed air consumption during pneumatic caisson sinking at a given moment.
A Finite Element Method for Simulation of Compressible Cavitating Flows
Shams, Ehsan; Yang, Fan; Zhang, Yu; Sahni, Onkar; Shephard, Mark; Oberai, Assad
2016-11-01
This work focuses on a novel approach for finite element simulations of multi-phase flows which involve an evolving interface with phase change. Modeling problems such as cavitation requires addressing multiple challenges, including the compressibility of the vapor phase and interface physics driven by mass, momentum, and energy fluxes. We have developed a mathematically consistent and robust computational approach to address these problems. We use stabilized finite element methods on unstructured meshes to solve the compressible Navier-Stokes equations. An arbitrary Lagrangian-Eulerian formulation is used to handle the interface motion. Our method uses a mesh adaptation strategy to preserve the quality of the volumetric mesh while the interface mesh moves along with the interface. The interface jump conditions are accurately represented using a discontinuous Galerkin method on the conservation laws. Condensation and evaporation rates at the interface are modeled thermodynamically to determine the interface velocity. We will present initial results on bubble cavitation and the behavior of an attached cavitation zone in a separated boundary layer. We acknowledge the support from the Army Research Office (ARO) under ARO Grant W911NF-14-1-0301.
A Proposal for Kelly Criterion-Based Lossy Network Compression
2016-03-01
detection applications. Most of these applications only send alerts to the central analysis servers. These alerts do not provide the forensic capability ... based intrusion detection systems. These systems tend to examine the individual system's audit logs looking for intrusive activity. The notable
Iterative methods for compressible Navier-Stokes and Euler equations
Energy Technology Data Exchange (ETDEWEB)
Tang, W.P.; Forsyth, P.A.
1996-12-31
This workshop will focus on methods for solution of compressible Navier-Stokes and Euler equations. In particular, attention will be focused on the interaction between the methods used to solve the non-linear algebraic equations (e.g. full Newton or first order Jacobian) and the resulting large sparse systems. Various types of block and incomplete LU factorization will be discussed, as well as stability issues, and the use of Newton-Krylov methods. These techniques will be demonstrated on a variety of model transonic and supersonic airfoil problems. Applications to industrial CFD problems will also be presented. Experience with the use of C++ for solution of large scale problems will also be discussed. The format for this workshop will be four fifteen minute talks, followed by a roundtable discussion.
Methods for Sampling and Measurement of Compressed Air Contaminants
International Nuclear Information System (INIS)
Stroem, L.
1976-10-01
In order to improve the technique for measuring oil and water entrained in a compressed air stream, a laboratory study has been made of some methods for sampling and measurement. For this purpose water or oil as artificial contaminants were injected in thin streams into a test loop, carrying dry compressed air. Sampling was performed in a vertical run, down-stream of the injection point. Wall attached liquid, coarse droplet flow, and fine droplet flow were sampled separately. The results were compared with two-phase flow theory and direct observation of liquid behaviour. In a study of sample transport through narrow tubes, it was observed that, below a certain liquid loading, the sample did not move, the liquid remaining stationary on the tubing wall. The basic analysis of the collected samples was made by gravimetric methods. Adsorption tubes were used with success to measure water vapour. A humidity meter with a sensor of the aluminium oxide type was found to be unreliable. Oil could be measured selectively by a flame ionization detector, the sample being pretreated in an evaporation- condensation unit
Method for Calculation of Steam-Compression Heat Transformers
Directory of Open Access Journals (Sweden)
S. V. Zditovetckaya
2012-01-01
Full Text Available The paper considers a method for the joint numerical analysis of the cycle parameters and heat-exchange equipment of a steam-compression heat transformer contour that takes into account non-stationary operating modes and irreversible losses in the devices and pipeline contour. The method has been implemented as a software package and can be used for the design or selection of a heat transformer, with due account of the coolant and the actual equipment included in its structure. The paper presents investigation results revealing the influence of coolant-side pressure losses in the evaporator and condenser, caused by friction and local resistance, on the power efficiency of the heat transformer operating as a refrigerating and heating installation and as a thermal pump. The actual operational parameters obtained for the thermal pump in the nominal and off-design operating modes depend on the structure of the concrete contour equipment.
Negative refraction of inhomogeneous waves in lossy isotropic media
International Nuclear Information System (INIS)
Fedorov, V Yu; Nakajima, T
2014-01-01
We theoretically study negative refraction of inhomogeneous waves at the interface of lossy isotropic media. We obtain explicit (up to the sign) expressions for the parameters of a wave transmitted through the interface between two lossy media characterized by complex permittivity and permeability. We show that the criterion of negative refraction that requires negative permittivity and permeability can be used only in the case of a homogeneous incident wave at the interface between lossless and lossy media. In a more general situation, when the incident wave is inhomogeneous, or both media are lossy, the criterion of negative refraction becomes dependent on the incidence angle. Most interestingly, we show that negative refraction can be realized in conventional lossy materials (such as metals) if their interfaces are properly oriented. (paper)
Data compression techniques and the ACR-NEMA digital interface communications standard
International Nuclear Information System (INIS)
Zielonka, J.S.; Blume, H.; Hill, D.; Horil, S.C.; Lodwick, G.S.; Moore, J.; Murphy, L.L.; Wake, R.; Wallace, G.
1987-01-01
Data compression offers the possibility of achieving high effective information transfer rates between devices and of efficient utilization of digital storage devices in meeting department-wide archiving needs. Accordingly, the ACR-NEMA Digital Imaging and Communications Standards Committee established a Working Group to develop a means to incorporate the optimal use of a wide variety of current compression techniques while remaining compatible with the standard. The proposed method allows the use of public domain techniques, predetermined methods between devices already aware of the selected algorithm, and the ability of the originating device to specify algorithms and parameters prior to transmitting compressed data. Because of the latter capability, the technique has the potential to support many compression algorithms not yet developed or in common use. Both lossless and lossy methods can be implemented. In addition to a description of the overall structure of this proposal, several examples using current compression algorithms are given
The production of fully deacetylated chitosan by compression method
Directory of Open Access Journals (Sweden)
Xiaofei He
2016-03-01
Full Text Available Chitosan's activities are significantly affected by its degree of deacetylation (DDA), while fully deacetylated chitosan is difficult to produce on a large scale. Therefore, this paper introduces a compression method for preparing 100% deacetylated chitosan with less environmental pollution. The product is characterized by XRD, FT-IR, UV and HPLC. The 100% fully deacetylated chitosan is produced under low-concentration alkali and high-pressure conditions, requiring only 15% alkali solution and a 1:10 chitosan powder to NaOH solution ratio under 0.11–0.12 MPa for 120 min. When the alkali concentration is varied from 5% to 15%, chitosan with an ultra-high DDA value (up to 95%) is produced.
Cloud Optimized Image Format and Compression
Becker, P.; Plesea, L.; Maurer, T.
2015-04-01
Cloud-based image storage and processing require a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIFF and NITF were developed in the heyday of the desktop and assume fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These assumptions no longer hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that enables it to be accessed efficiently using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.
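The controlled lossy idea behind a coder like LERC can be illustrated by quantizing values against a user-supplied maximum error bound; this sketch omits the blocking and bit-packing LERC also performs, and its function names are hypothetical.

```python
# Sketch of error-bounded quantization: every decoded value is guaranteed
# to lie within +-max_error of its original, so the loss is controlled.

def encode(values, max_error):
    """Quantize with the largest step size that still honors the bound."""
    step = 2.0 * max_error            # max quantization error is step / 2
    base = min(values)
    return [round((v - base) / step) for v in values], base

def decode(quantized, base, max_error):
    step = 2.0 * max_error
    return [base + q * step for q in quantized]

elev = [101.37, 101.41, 102.96, 150.02, 99.80]   # toy elevation samples
q, base = encode(elev, max_error=0.1)
assert all(abs(a - b) <= 0.1 for a, b in zip(elev, decode(q, base, 0.1)))
```

The resulting small integers are far cheaper to bit-pack or entropy-code than the original floating-point values, while the error stays within the stated tolerance.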
A method of loss free compression for the data of nuclear spectrum
International Nuclear Information System (INIS)
Sun Mingshan; Wu Shiying; Chen Yantao; Xu Zurun
2000-01-01
A new method of lossless compression based on the features of nuclear spectrum data is presented, from which a practicable algorithm is successfully derived. A compression ratio varying from 0.50 to 0.25 is obtained, and the distribution of the processed data becomes even more suitable for reprocessing by another compression method, such as Huffman coding, to further improve the compression ratio
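The abstract does not describe the method's actual transform; one plausible illustration of lossless compression that exploits spectrum smoothness is delta encoding of channel counts, which yields small residuals well suited to a second-stage coder such as Huffman coding.

```python
# Sketch: nuclear spectra are smooth count histograms, so channel-to-channel
# differences are small and skew the symbol distribution toward zero,
# improving a follow-up entropy coder. The transform itself is lossless.

def delta_encode(counts):
    prev, out = 0, []
    for c in counts:
        out.append(c - prev)     # small residuals for smooth spectra
        prev = c
    return out

def delta_decode(deltas):
    total, out = 0, []
    for d in deltas:
        total += d
        out.append(total)
    return out

spectrum = [1000, 1004, 1010, 1009, 950, 948, 947]   # toy channel counts
deltas = delta_encode(spectrum)
assert delta_decode(deltas) == spectrum              # exactly lossless
assert max(map(abs, deltas[1:])) < max(spectrum)     # residuals are smaller
```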
Statistical Analysis of Compression Methods for Storing Binary Image for Low-Memory Systems
Directory of Open Access Journals (Sweden)
Roman Slaby
2013-01-01
Full Text Available The paper is focused on a statistical comparison of selected compression methods used for the compression of binary images. The aim is to assess which of the presented compression methods requires the fewest bytes of memory on a low-memory system. Correlation functions are used to assess the success rate of converting the input image to a binary image; the correlation function is one of the OCR methods used for the digitization of printed symbols. The use of compression methods is necessary for systems based on low-power microcontrollers: saving space in the data stream is very important for such systems with limited memory, as is the time required to decode the compressed data. The success rate of the selected compression algorithms is evaluated using the basic characteristics of exploratory analysis. The examined samples represent the number of bytes needed to compress the test images, which represent alphanumeric characters.
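As one simple example of the kind of method such a comparison might include (the abstract does not name the concrete algorithms), run-length encoding of a binary glyph gives a byte count that can be weighed against the raw packed bitmap size.

```python
# Sketch: RLE of a flattened binary image for a low-memory system.
# Byte cost is 2 bytes per run, versus 1 bit per pixel for raw packing.

def rle_encode(bits):
    """Encode 0/1 pixels as (value, run_length) byte pairs, runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i] and j - i < 255:
            j += 1
        out += bytes([bits[i], j - i])
        i = j
    return bytes(out)

def rle_decode(data):
    bits = []
    for k in range(0, len(data), 2):
        bits.extend([data[k]] * data[k + 1])
    return bits

img = [0] * 100 + [1] * 56 + [0] * 100   # a glyph-like 16x16 image, flattened
enc = rle_encode(img)
assert rle_decode(enc) == img
assert len(enc) == 6 and len(enc) < 256 // 8   # 6 bytes vs 32 for raw packing
```

Decoding is a single linear pass with no tables, which matters as much as the byte count on a microcontroller.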
The Basic Principles and Methods of the System Approach to Compression of Telemetry Data
Levenets, A. V.
2018-01-01
The task of compressing measurement data remains urgent for information-measurement systems. This paper proposes basic principles for designing highly effective systems for the compression of telemetry information. The basis of the proposed principles is the representation of a telemetry frame as a whole information space in which existing correlations can be found. Data transformation methods and compression algorithms realizing the proposed principles are described. The compression ratio of the proposed algorithm is about 1.8 times higher than that of a classic algorithm. The results of the research thus show good prospects for these methods and algorithms.
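The frame-as-information-space principle can be sketched as exploiting both cross-channel correlation within each telemetry frame and temporal correlation across frames; the channel layout and data below are hypothetical.

```python
# Sketch: channels in one frame are stored as differences from channel 0
# (cross-channel correlation), and channel 0 is delta-coded over time
# (temporal correlation). Both transforms are exactly invertible.

def compress_frames(frames):
    """frames: list of equal-length channel tuples sampled over time."""
    out = []
    prev0 = 0
    for frame in frames:
        residual = [frame[0] - prev0]                        # time delta
        residual += [frame[i] - frame[0] for i in range(1, len(frame))]
        out.append(residual)                                 # channel deltas
        prev0 = frame[0]
    return out

def decompress_frames(residuals):
    out, prev0 = [], 0
    for r in residuals:
        c0 = prev0 + r[0]
        out.append(tuple([c0] + [c0 + d for d in r[1:]]))
        prev0 = c0
    return out

frames = [(500, 502, 498), (504, 506, 501), (509, 511, 507)]
res = compress_frames(frames)
assert decompress_frames(res) == frames    # lossless round trip
```

The residuals are small integers concentrated near zero, so a generic entropy coder applied afterwards compresses them far better than the raw frames.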
Signal Compression in Automatic Ultrasonic testing of Rails
Directory of Open Access Journals (Sweden)
Tomasz Ciszewski
2007-01-01
Full Text Available Full recording of the most important information carried by the ultrasonic signals makes statistical analysis of the measurement data possible. Statistical analysis of the results gathered during automatic ultrasonic tests, together with the features of the measuring method, differential lossy coding, and traditional lossless data compression methods (Huffman coding, dictionary coding), leads to a comprehensive, efficient data compression algorithm. The subject of the article is to present this algorithm and the benefits gained by using it in comparison to alternative compression methods. Storage of large amounts of data makes it possible to create an electronic catalogue of ultrasonic defects. Once such a catalogue is created, it will become possible to train the qualification system on new solutions for the automatic rail-testing machine.
Interpolation decoding method with variable parameters for fractal image compression
International Nuclear Information System (INIS)
He Chuanjiang; Li Gaoping; Shen Xiaona
2007-01-01
The interpolation fractal decoding method introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13] generates the decoded image progressively by means of an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added in the first iterations of conventional fractal decoding; hence the constant parameter of the interpolation decoding method must be set to a small value in order to achieve good progressive decoding. However, this requires an extremely large number of iterations to converge. For some applications it is thus reasonable to slow down the iterative process in the first stages of decoding and then accelerate it afterwards (e.g., from some chosen iteration onward). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal
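A minimal sketch of the variable-parameter interpolation iteration, with a stand-in contractive map in place of a real fractal transform; the schedule values and the toy fixed point are illustrative only.

```python
# Sketch: x_{n+1} = (1 - lambda_n) * x_n + lambda_n * T(x_n), where each
# iteration uses its own lambda_n in (0, 1]. Small early lambdas slow the
# decoding down; lambda_n -> 1 later accelerates it to convergence.

def fractal_map(x):
    """Toy contraction with fixed point 100.0 for every element."""
    return [0.5 * v + 50.0 for v in x]

def interpolation_decode(x0, lambdas):
    x = x0
    for lam in lambdas:
        t = fractal_map(x)
        x = [(1 - lam) * xi + lam * ti for xi, ti in zip(x, t)]
    return x

schedule = [0.2] * 5 + [1.0] * 20         # slow start, then full steps
out = interpolation_decode([0.0, 255.0], schedule)
assert all(abs(v - 100.0) < 1e-3 for v in out)
```

With the contraction factor 0.5 of the toy map, the five lambda = 0.2 steps shrink the error only to about 59% of its start, while the twenty full steps shrink it by a further factor of 2^20, which is the slow-then-fast behavior the scheme aims for.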
Methods for compressible multiphase flows and their applications
Kim, H.; Choe, Y.; Kim, H.; Min, D.; Kim, C.
2018-06-01
This paper presents an efficient and robust numerical framework to deal with multiphase real-fluid flows and their broad spectrum of engineering applications. A homogeneous mixture model incorporated with a real-fluid equation of state and a phase change model is considered to calculate complex multiphase problems. As robust and accurate numerical methods to handle multiphase shocks and phase interfaces over a wide range of flow speeds, the AUSMPW+_N and RoeM_N schemes with a system preconditioning method are presented. These methods are assessed by extensive validation problems with various types of equation of state and phase change models. Representative realistic multiphase phenomena, including the flow inside a thermal vapor compressor, pressurization in a cryogenic tank, and unsteady cavitating flow around a wedge, are then investigated as application problems. With appropriate physical modeling followed by robust and accurate numerical treatments, compressible multiphase flow physics such as phase changes, shock discontinuities, and their interactions are well captured, confirming the suitability of the proposed numerical framework to wide engineering applications.
Speech Data Compression using Vector Quantization
H. B. Kekre; Tanuja K. Sarode
2008-01-01
Transforms, which are lossy algorithms, are most often used for speech data compression. Such algorithms are tolerable for speech data compression because the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE and FCG. The results table s...
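The abstract names the LBG, KPE and FCG training algorithms without detail. As a reference point, a minimal generic LBG (generalized Lloyd) codebook sketch is shown below; the toy data, codebook size, and splitting perturbation `eps` are illustrative assumptions, and the KPE and FCG variants are not covered.

```python
import numpy as np

def lbg_codebook(vectors, size, eps=1e-3, max_iter=100):
    """LBG: start from the global mean, repeatedly split and Lloyd-refine."""
    codebook = vectors.mean(axis=0, keepdims=True)
    while codebook.shape[0] < size:
        # Split each centroid into a slightly perturbed pair.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(max_iter):
            # Nearest-centroid assignment for every training vector.
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(axis=1)
            # Update each centroid to the mean of its cluster.
            new = np.array([vectors[labels == k].mean(axis=0)
                            if np.any(labels == k) else codebook[k]
                            for k in range(codebook.shape[0])])
            if np.allclose(new, codebook):
                break
            codebook = new
    return codebook

# Toy "speech frames": 2-D vectors quantized with a 4-entry codebook.
rng = np.random.default_rng(0)
frames = rng.normal(size=(500, 2))
cb = lbg_codebook(frames, size=4)
print(cb.shape)  # (4, 2)
```

Compression then amounts to transmitting, per frame, only the index of the nearest codebook entry.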
International Nuclear Information System (INIS)
Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro
2008-01-01
This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for other images. This method thus enables greater lossless compression than conventional methods and should improve the efficiency of handling the increasing volume of medical imaging data. (author)
Spectral Element Method for the Simulation of Unsteady Compressible Flows
Diosady, Laslo Tibor; Murman, Scott M.
2013-01-01
This work uses a discontinuous-Galerkin spectral-element method (DGSEM) to solve the compressible Navier-Stokes equations [1-3]. The inviscid flux is computed using the approximate Riemann solver of Roe [4]. The viscous fluxes are computed using the second form of Bassi and Rebay (BR2) [5] in a manner consistent with the spectral-element approximation. The method of lines with the classical 4th-order explicit Runge-Kutta scheme is used for time integration. Results for polynomial orders up to p = 15 (16th order) are presented. The code is parallelized using the Message Passing Interface (MPI). The computations presented in this work are performed using the Sandy Bridge nodes of the NASA Pleiades supercomputer at NASA Ames Research Center. Each Sandy Bridge node consists of two eight-core Intel Xeon E5-2670 processors with a clock speed of 2.6 GHz and 2 GB of memory per core. On a Sandy Bridge node the Tau Benchmark [6] runs in a time of 7.6 s.
Analysis of the temporal electric fields in lossy dielectric media
DEFF Research Database (Denmark)
McAllister, Iain Wilson; Crichton, George C
1991-01-01
The time-dependent electric fields associated with lossy dielectric media are examined. The analysis illustrates that, with respect to the basic time constant, these lossy media can take a considerable time to attain a steady-state condition. Time-dependent field enhancement factors are considered, and inherent surface-charge densities quantified. The calculation of electrostatic forces on a free, lossy dielectric particle is illustrated. An extension to the basic analysis demonstrates that, on reversal of polarity, the resultant tangential field at the interface could play a decisive role...
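The "basic time constant" that governs how slowly a lossy dielectric approaches steady state is the charge relaxation time τ = ε/σ. A small numerical sketch is given below; the permittivity and conductivity values are hypothetical illustrations, not taken from the paper.

```python
import math

# Charge relaxation time of a lossy dielectric: tau = epsilon / sigma.
eps0 = 8.854e-12            # vacuum permittivity, F/m
eps_r = 3.0                 # hypothetical relative permittivity
sigma = 1e-14               # hypothetical conductivity, S/m (a good insulator)

tau = eps_r * eps0 / sigma  # basic time constant, seconds
remaining = math.exp(-5.0)  # fraction of the transient left after 5*tau

print(f"tau = {tau:.0f} s, transient remaining after 5*tau: {remaining:.4f}")
```

For insulators with such low conductivity, τ is tens of minutes, which is why these media "take a considerable time" to reach steady state.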
National Research Council Canada - National Science Library
Delgorge, C
2001-01-01
.... For the purpose of this work, we selected seven compression methods : Fourier Transform, Discrete Cosine Transform, Wavelets, Quadtrees Transform, Fractals, Histogram Thresholding, and Run Length Coding...
A statistical–mechanical view on source coding: physical compression and data compression
International Nuclear Information System (INIS)
Merhav, Neri
2011-01-01
We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics
A Posteriori Restoration of Block Transform-Compressed Data
Brown, R.; Boden, A. F.
1995-01-01
The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.
Fixed-Rate Compressed Floating-Point Arrays.
Lindstrom, Peter
2014-12-01
Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
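The actual codec uses a lifted orthogonal block transform with embedded coding; as a much-simplified illustration of the fixed-rate idea only (a shared per-block scale plus a fixed number of bits per value; all names and parameters here are assumptions, not the authors' scheme):

```python
import numpy as np

def compress_block(block, bits_per_value):
    """Fixed-rate sketch: scale by the block's max magnitude, quantize uniformly."""
    scale = float(np.max(np.abs(block))) or 1.0
    levels = 2 ** (bits_per_value - 1) - 1
    q = np.round(block / scale * levels).astype(np.int32)
    return scale, q                      # fixed size: 1 scale + n quantized ints

def decompress_block(scale, q, bits_per_value):
    levels = 2 ** (bits_per_value - 1) - 1
    return q.astype(np.float64) * scale / levels

# One block of 4^d values with d = 2, i.e. 16 values.
data = np.linspace(-1.0, 1.0, 16)
scale, q = compress_block(data, bits_per_value=8)
rec = decompress_block(scale, q, bits_per_value=8)
print(np.max(np.abs(rec - data)) < 0.01)   # True: bounded per-block error
```

Because every block occupies the same number of bits, the offset of any block in the compressed stream is computable directly, which is what enables random access.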
High Bit-Depth Medical Image Compression With HEVC.
Parikh, Saurin S; Ruiz, Damian; Kalva, Hari; Fernandez-Escribano, Gerardo; Adzic, Velibor
2018-03-01
Efficient storing and retrieval of medical images has direct impact on reducing costs and improving access in cloud-based health care services. JPEG 2000 is currently the commonly used compression format for medical images shared using the DICOM standard. However, new formats such as high efficiency video coding (HEVC) can provide better compression efficiency compared to JPEG 2000. Furthermore, JPEG 2000 is not suitable for efficiently storing image series and 3-D imagery. Using HEVC, a single format can support all forms of medical images. This paper presents the use of HEVC for diagnostically acceptable medical image compression, focusing on compression efficiency compared to JPEG 2000. Diagnostically acceptable lossy compression and complexity of high bit-depth medical image compression are studied. Based on an established medically acceptable compression range for JPEG 2000, this paper establishes acceptable HEVC compression range for medical imaging applications. Experimental results show that using HEVC can increase the compression performance, compared to JPEG 2000, by over 54%. Along with this, a new method for reducing computational complexity of HEVC encoding for medical images is proposed. Results show that HEVC intra encoding complexity can be reduced by over 55% with negligible increase in file size.
Analysis of a discrete element method and coupling with a compressible fluid flow method
International Nuclear Information System (INIS)
Monasse, L.
2011-01-01
This work aims at the numerical simulation of compressible fluid/deformable structure interactions. In particular, we have developed a partitioned coupling algorithm between a Finite Volume method for the compressible fluid and a Discrete Element method capable of taking into account fractures in the solid. A survey of existing fictitious domain methods and partitioned algorithms led us to choose an Embedded Boundary method and an explicit coupling scheme. We first showed that the Discrete Element method used for the solid yielded the correct macroscopic behaviour and that the symplectic time-integration scheme ensured the preservation of energy. We then developed an explicit coupling algorithm between a compressible inviscid fluid and an undeformable solid. Mass, momentum and energy conservation and consistency properties were proved for the coupling scheme. The algorithm was then extended to coupling with a deformable solid, in the form of a semi-implicit scheme. Finally, we applied this method to unsteady inviscid flows around moving structures: comparisons with existing numerical and experimental results demonstrate the excellent accuracy of our method. (author)
Fundamental study of compression for movie files of coronary angiography
Ando, Takekazu; Tsuchiya, Yuichiro; Kodera, Yoshie
2005-04-01
When network distribution of movie files is considered, lossy-compressed movie files with small file sizes can be useful. We chose three kinds of coronary stricture movies with different motion speeds as examination objects: movies with slow, normal, and fast heart rates. Movies in MPEG-1, DivX5.11, WMV9 (Windows Media Video 9), and WMV9-VCM (Windows Media Video 9-Video Compression Manager) formats were made from the three kinds of AVI-format movies. Five kinds of movies, the four compressed formats plus uncompressed AVI (used instead of the DICOM format), were evaluated by Thurstone's method. The evaluation factors were "sharpness, granularity, contrast, and comprehensive evaluation." In the virtual bradycardia movie, AVI received the best evaluation on all factors except granularity. In the virtual normal movie, a different compression technique excelled on each evaluation factor. In the virtual tachycardia movie, MPEG-1 received the best evaluation on all factors except contrast. The best compression format thus depends on the speed of the movie, owing to differences between compression algorithms, particularly the influence of inter-frame compression. Movie compression algorithms combine inter-frame and intra-frame compression, and since each method affects the image differently, it is necessary to examine the relation between the compression algorithm and our results.
Partially blind instantly decodable network codes for lossy feedback environment
Sorour, Sameh; Douik, Ahmed S.; Valaee, Shahrokh; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim
2014-01-01
an expression for the expected decoding delay increment for any arbitrary transmission. This expression is then used to find the optimal policy that reduces the decoding delay in such lossy feedback environment. Results show that our proposed solutions both
Gelmini, A.; Gottardi, G.; Moriyama, T.
2017-10-01
This work presents an innovative computational approach for the inversion of wideband ground penetrating radar (GPR) data. The retrieval of the dielectric characteristics of sparse scatterers buried in a lossy soil is performed by combining a multi-task Bayesian compressive sensing (MT-BCS) solver and a frequency hopping (FH) strategy. The developed methodology is able to benefit from the regularization capabilities of the MT-BCS as well as to exploit the multi-chromatic informative content of GPR measurements. A set of numerical results is reported in order to assess the effectiveness of the proposed GPR inverse scattering technique, as well as to compare it to a simpler single-task implementation.
Acceleration methods for multi-physics compressible flow
Peles, Oren; Turkel, Eli
2018-04-01
In this work we investigate the Runge-Kutta (RK)/Implicit smoother scheme as a convergence accelerator for complex multi-physics flow problems including turbulent, reactive and also two-phase flows. The flows considered are subsonic, transonic and supersonic flows in complex geometries, and can be either steady or unsteady. All of these problems are considered to be very stiff. We then introduce an acceleration method for the compressible Navier-Stokes equations. We start with the multigrid method for pure subsonic flow, including reactive flows. We then add the Rossow-Swanson-Turkel RK/Implicit smoother that enables performing all these complex flow simulations with a reasonable CFL number. We next discuss the RK/Implicit smoother for time-dependent problems and also for low Mach numbers. The preconditioner includes an intrinsic low-Mach-number treatment inside the smoother operator. We also develop a modified Roe scheme with a corresponding flux Jacobian matrix. We then give the extension of the method to real gas and reactive flow. Reactive flows are governed by a system of inhomogeneous Navier-Stokes equations with very stiff source terms. The extension of the RK/Implicit smoother requires an approximation of the source term Jacobian, whose properties are very important for the stability of the method. We discuss what the theory of chemical kinetics tells us about the mathematical properties of the Jacobian matrix, focusing on the implication of Le Chatelier's principle for the sign of the diagonal entries. We present the implementation of the method for turbulent flow using two RANS turbulence models: the one-equation Spalart-Allmaras model and the two-equation k-ω SST model. The last extension is to two-phase flows with a gas as the main phase and an Eulerian representation of a dispersed particle phase (EDP). We present some examples of such flow computations inside a ballistic evaluation
Piippo-Huotari, Oili; Norrman, Eva; Anderzén-Carlsson, Agneta; Geijer, Håkan
2018-05-01
The radiation dose for patients can be reduced by many methods, and one way is to use abdominal compression. In this study, the radiation dose and image quality for a new patient-controlled compression device were compared with conventional compression and compression in the prone position. The aim was to compare the radiation dose and image quality of patient-controlled compression with conventional and prone compression in general radiography, using an experimental design with a quantitative approach. After obtaining the approval of the ethics committee, a consecutive sample of 48 patients was examined with the standard clinical urography protocol. The radiation doses were measured as dose-area product and analyzed with a paired t-test. The image quality was evaluated by visual grading analysis: four radiologists evaluated each image individually by scoring nine criteria modified from the European quality criteria for diagnostic radiographic images. There was no significant difference in radiation dose or image quality between conventional and patient-controlled compression. The prone position resulted in both a higher dose and inferior image quality. Patient-controlled compression gave dose levels similar to conventional compression and lower than prone compression. Image quality was similar with patient-controlled and conventional compression, and both were judged better than the prone position.
Methods of compression of digital holograms, based on 1-level wavelet transform
International Nuclear Information System (INIS)
Kurbatova, E A; Cheremkhin, P A; Evtikhiev, N N
2016-01-01
To reduce the size of memory required for storing information about 3D scenes and to decrease the rate of hologram transmission, digital hologram compression can be used. Compression of digital holograms by wavelet transforms is among the most powerful methods. In this paper the most popular wavelet transforms are considered and applied to digital hologram compression. The obtained values of reconstruction quality and hologram diffraction efficiencies are compared. (paper)
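A minimal sketch of the underlying idea, using a hand-rolled one-level 1-D Haar transform with detail-coefficient thresholding (the test signal and threshold are illustrative assumptions; the paper applies 2-D wavelet transforms to actual holograms):

```python
import numpy as np

def haar_1level(x):
    """One-level 1-D Haar transform: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def inverse_haar_1level(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

signal = np.sin(np.linspace(0, 4 * np.pi, 64))
a, d = haar_1level(signal)
d[np.abs(d) < 0.05] = 0.0        # compression: discard small detail coefficients
rec = inverse_haar_1level(a, d)
print(np.allclose(rec, signal, atol=0.1))  # True: quality largely preserved
```

Only the surviving nonzero coefficients need to be stored or transmitted; the reconstruction quality vs. compression trade-off is set by the threshold.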
An Enhanced Run-Length Encoding Compression Method for Telemetry Data
Directory of Open Access Journals (Sweden)
Shan Yanhu
2017-09-01
The telemetry data are essential in evaluating the performance of aircraft and diagnosing its failures. This work combines oversampling technology with a run-length encoding compression algorithm incorporating an error factor to further enhance the compression performance for telemetry data in a multichannel acquisition system. Compression of the telemetry data is carried out using FPGAs. Pulse signals and vibration signals are used in the experiments. The proposed method is compared with two existing methods. The experimental results indicate that the compression ratio, precision, and distortion degree of the telemetry data are improved significantly compared with those obtained by the existing methods. The implementation and measurement of the proposed telemetry data compression method show its effectiveness when used in a high-precision, high-capacity multichannel acquisition system.
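The FPGA implementation is not shown in the abstract; a minimal software sketch of run-length encoding with an error factor might look like the following (the sample data and tolerance are illustrative assumptions, not the paper's test signals):

```python
def rle_compress(samples, error_factor=0):
    """Run-length encode, merging samples within ±error_factor of the run value."""
    runs = []
    for s in samples:
        if runs and abs(s - runs[-1][0]) <= error_factor:
            runs[-1][1] += 1          # extend the current run (lossy merge)
        else:
            runs.append([s, 1])       # start a new run
    return runs

def rle_decompress(runs):
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

telemetry = [10, 10, 11, 10, 50, 50, 50, 9]
runs = rle_compress(telemetry, error_factor=1)
print(runs)  # [[10, 4], [50, 3], [9, 1]]
```

The error factor trades a bounded per-sample distortion for longer runs and hence a higher compression ratio, which matches the precision/distortion trade-off studied in the paper.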
Douglas, David R. [Newport News, VA]; Tennant, Christopher D. [Williamsburg, VA]
2012-07-10
A method of avoiding CSR induced beam quality defects in free electron laser operation by a) controlling the rate of compression and b) using a novel means of integrating the compression with the remainder of the transport system: both are accomplished by means of dispersion modulation. A large dispersion is created in the penultimate dipole magnet of the compression region leading to rapid compression; this large dispersion is demagnified and dispersion suppression performed in a final small dipole. As a result, the bunch is short for only a small angular extent of the transport, and the resulting CSR excitation is small.
Compressed Sensing Methods in Radio Receivers Exposed to Noise and Interference
DEFF Research Database (Denmark)
Pierzchlewski, Jacek
, there is a problem of interference, which makes digitization of radio receivers even more difficult. High-order low-pass filters are needed to remove interfering signals and secure high-quality reception. In the mid-2000s a new method of signal acquisition, called compressed sensing, emerged. Compressed sensing...... the downconverted baseband signal and interference, may be replaced by low-order filters. Additional digital signal processing is the price to pay for this feature; hence, the signal processing is moved from the analog to the digital domain. Filtering compressed sensing, which is a new application of compressed sensing...
Radiologic image compression -- A review
International Nuclear Information System (INIS)
Wong, S.; Huang, H.K.; Zaremba, L.; Gooden, D.
1995-01-01
The objective of radiologic image compression is to reduce the data volume of radiologic images and to achieve a low bit rate in their digital representation without perceived loss of image quality. The demand for transmission bandwidth and storage space in the digital radiology environment, especially picture archiving and communication systems (PACS) and teleradiology, and the proliferating use of various imaging modalities, such as magnetic resonance imaging, computed tomography, ultrasonography, nuclear medicine, computed radiography, and digital subtraction angiography, continue to outstrip the capabilities of existing technologies. The availability of lossy coding techniques for clinical diagnoses further raises many complex legal and regulatory issues. This paper reviews recent progress in lossless and lossy radiologic image compression and presents the legal challenges of using lossy compression of medical records. To do so, the authors first describe the fundamental concepts of radiologic imaging and digitization. Then, the authors examine current compression technology in the field of medical imaging and discuss important regulatory policies and legal questions facing the use of compression in this field. The authors conclude with a summary of future challenges and research directions. 170 refs
Unsteady aerodynamic coefficients obtained by a compressible vortex lattice method.
Fabiano Hernandes
2009-01-01
Unsteady solutions for the aerodynamic coefficients of a thin airfoil in compressible subsonic or supersonic flows are studied. The lift, the pitch moment, and pressure coefficients are obtained numerically for the following motions: the indicial response (unit step function) of the airfoil, i.e., a sudden change in the angle of attack; a thin airfoil penetrating into a sharp edge gust (for several gust speed ratios); a thin airfoil penetrating into a one-minus-cosine gust and sinusoidal gust...
Fernández Pantoja, M.; Yarovoy, A.G.; Rubio Bretones, A.; González García, S.
2009-01-01
This paper presents a procedure to extend the methods of moments in time domain for the transient analysis of thin-wire antennas to include those cases where the antennas are located over a lossy half-space. This extended technique is based on the reflection coefficient (RC) approach, which
Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les
2012-12-01
This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Moving Picture Experts Group-4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver of the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings in 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at a -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95% confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90% (95% confidence interval, 79-95), 94% (87-97), and 100% (93-100), respectively. Combined specificities were 100% (85-100), 100% (85-100), and 96% (78-99), respectively. The introduction of CT in combat hospitals, with increasing numbers of detectors and growing image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.
Heier, W. C. (Inventor)
1974-01-01
A method is described for compression molding of thermosetting plastic compositions. Heat is applied to the compressed load in a mold cavity and adjusted to hold the molding temperature at the interface of the cavity surface and the compressed compound, producing a thermal front. This thermal front advances into the evacuated compound at right angles to the compression load and toward a thermal fence formed at the opposite surface of the compressed compound.
Image compression software for the SOHO LASCO and EIT experiments
Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis
1994-01-01
This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle and Spectrometric Coronagraph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for the use of SOHO investigators who need to understand the results of SOHO compression in order to better allocate the transmission bits which they have been allocated.
Guo, Wei; Tse, Peter W.
2013-01-01
Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. Bearings are the most frequently and easily failed components in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before transmission to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing the vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, in particular the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performance under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component leaves a much smaller proportion of data samples to be retained for transmission and later reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect
A novel ECG data compression method based on adaptive Fourier decomposition
Tan, Chunyu; Zhang, Liming
2017-12-01
This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.
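The two figures of merit quoted above, compression ratio (CR) and percentage root mean square difference (PRD), can be computed as follows (the test signal and noise level are illustrative assumptions, not MIT-BIH data):

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    """CR: size of the original representation over the compressed one."""
    return original_bits / compressed_bits

def prd(original, reconstructed):
    """Percentage root mean square difference between two signals."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

# A synthetic "ECG" stand-in: a sine wave plus small reconstruction error.
x = np.sin(np.linspace(0, 2 * np.pi, 1000))
x_rec = x + 0.01 * np.random.default_rng(1).normal(size=x.size)

print(compression_ratio(1000 * 11, 310))  # e.g. 11-bit samples vs. 310 bits
print(prd(x, x_rec))                      # roughly 1.4 for this noise level
```

A "robust PRD-CR relationship," as claimed in the abstract, means PRD grows predictably as CR is pushed higher.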
The boundary data immersion method for compressible flows with application to aeroacoustics
Energy Technology Data Exchange (ETDEWEB)
Schlanderer, Stefan C., E-mail: stefan.schlanderer@unimelb.edu.au [Faculty for Engineering and the Environment, University of Southampton, SO17 1BJ Southampton (United Kingdom); Weymouth, Gabriel D., E-mail: G.D.Weymouth@soton.ac.uk [Faculty for Engineering and the Environment, University of Southampton, SO17 1BJ Southampton (United Kingdom); Sandberg, Richard D., E-mail: richard.sandberg@unimelb.edu.au [Department of Mechanical Engineering, University of Melbourne, Melbourne VIC 3010 (Australia)
2017-03-15
This paper introduces a virtual boundary method for compressible viscous fluid flow that is capable of accurately representing moving bodies in flow and aeroacoustic simulations. The method is the compressible extension of the boundary data immersion method (BDIM; Maertens & Weymouth, 2015). The BDIM equations for the compressible Navier-Stokes equations are derived, and the accuracy of the method for the hydrodynamic representation of solid bodies is demonstrated with challenging test cases, including a fully turbulent boundary layer flow and a supersonic instability wave. In addition we show that the compressible BDIM is able to accurately represent noise radiation from moving bodies and flow-induced noise generation without any penalty in allowable time step.
Dehbashi, Reza; Shahabadi, Mahmoud
2013-12-01
The commonly used coordinate transformation for cylindrical cloaks is generalized. This transformation is utilized to determine the anisotropic inhomogeneous diagonal material tensors of a shell-type cloak for various material types, i.e., double-positive (DPS: ɛ, μ > 0) and double-negative (DNG: ɛ, μ < 0). To examine cloaking for various material types, a rigorous analysis is performed. It is shown that perfect cloaking will be achieved when the cloak and its surrounding medium are of the same material type. Moreover, material losses are included in the analysis to demonstrate that perfect cloaking with lossy materials can be achieved for identical loss tangents of the cloak and its surrounding material. Sensitivity of the cloaking performance to losses for different material types is also investigated. The obtained analytical results are verified using a finite-element computational analysis.
Image-Based Compression Method of Three-Dimensional Range Data with Texture
Chen, Xia; Bell, Tyler; Zhang, Song
2017-01-01
Recently, high speed and high accuracy three-dimensional (3D) scanning techniques and commercially available 3D scanning devices have made real-time 3D shape measurement and reconstruction possible. The conventional mesh representation of 3D geometry, however, results in large file sizes, causing difficulties for its storage and transmission. Methods for compressing scanned 3D data therefore become desired. This paper proposes a novel compression method which stores 3D range data within the c...
A new method of on-line multiparameter amplitude analysis with compression
International Nuclear Information System (INIS)
Morhac, M.; Matousek, V.
1996-01-01
An algorithm for on-line multidimensional amplitude analysis with compression using a fast adaptive orthogonal transform is presented in the paper. The method is based on a direct modification of the multiplication coefficients of the signal flow graph of the fast Cooley-Tukey algorithm. The coefficients are modified according to a reference vector representing the processed data. The method has been tested by compressing three-parameter experimental nuclear data. The efficiency of the derived adaptive transform is compared with classical orthogonal transforms. (orig.)
Lossy/lossless coding of bi-level images
DEFF Research Database (Denmark)
Martins, Bo; Forchhammer, Søren
1997-01-01
Summary form only given. We present improvements to a general type of lossless, lossy, and refinement coding of bi-level images (Martins and Forchhammer, 1996). Loss is introduced by flipping pixels. The pixels are coded using arithmetic coding of conditional probabilities obtained using a template...... as is known from JBIG and proposed in JBIG-2 (Martins and Forchhammer). Our new state-of-the-art results are obtained using the more general free tree instead of a template. Also we introduce multiple refinement template coding. The lossy algorithm is analogous to the greedy `rate...
International Nuclear Information System (INIS)
Zhigang Liang; Xiangying Du; Jiabin Liu; Yanhui Yang; Dongdong Rong; Xinyu Yao; Kuncheng Li
2008-01-01
Background: The JPEG 2000 compression technique has recently been introduced into the medical imaging field. It is critical to understand the effects of this technique on the detection of breast masses on digitized images by human observers. Purpose: To evaluate whether lossless and lossy techniques affect the diagnostic results of malignant and benign breast masses on digitized mammograms. Material and Methods: A total of 90 screen-film mammograms including craniocaudal and lateral views obtained from 45 patients were selected by two non-observing radiologists. Of these, 22 cases were benign lesions and 23 cases were malignant. The mammographic films were digitized by a laser film digitizer, and compressed to three levels (lossless, and lossy 20:1 and 40:1) using the JPEG 2000 wavelet-based image compression algorithm. Four radiologists with 10-12 years' experience in mammography interpreted the original and compressed images. The time interval was 3 weeks for each reading session. A five-point malignancy scale was used, with a score of 1 corresponding to definitely not a malignant mass, a score of 2 referring to probably not a malignant mass, a score of 3 meaning possibly a malignant mass, a score of 4 being probably a malignant mass, and a score of 5 interpreted as definitely a malignant mass. The radiologists' performance was evaluated using receiver operating characteristic analysis. Results: The average Az values for all radiologists decreased from 0.8933 for the original uncompressed images to 0.8299 for the images compressed at 40:1. This difference was not statistically significant. The detection accuracy of the original images was better than that of the compressed images, and the Az values decreased with increasing compression ratio. Conclusion: Digitized mammograms compressed at 40:1 could be used as substitutes for the original images in the diagnosis of breast cancer.
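The Az figures above are areas under the ROC curve. As a hedged illustration (the ratings below are invented, not the study's data), the empirical ROC area can be computed directly from ordinal scores via the Mann-Whitney statistic:

```python
# The empirical ROC area (Az) equals the probability that a randomly chosen
# malignant case receives a higher score than a randomly chosen benign one,
# with ties counted as 1/2 (Mann-Whitney statistic).

def roc_area(benign_scores, malignant_scores):
    wins = 0.0
    for m in malignant_scores:
        for b in benign_scores:
            if m > b:
                wins += 1.0
            elif m == b:
                wins += 0.5
    return wins / (len(benign_scores) * len(malignant_scores))

# Hypothetical 5-point malignancy ratings (1 = definitely not, 5 = definitely).
benign = [1, 2, 2, 3, 1, 2]
malignant = [3, 4, 5, 4, 2, 5]
print(roc_area(benign, malignant))
```
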
Decoherence in quantum lossy systems: superoperator and matrix techniques
Yazdanpanah, Navid; Tavassoly, Mohammad Kazem; Moya-Cessa, Hector Manuel
2017-06-01
Due to the unavoidably dissipative interaction between quantum systems and their environments, decoherence inevitably flows into such systems. Therefore, to better understand how decoherence affects damped systems, a fundamental investigation of the master equation is required. In this regard, recovering the information that has been lost through the irreversibility of dissipative systems is also of practical importance in quantum information science. Motivated by these facts, in this work we use superoperator and matrix techniques to present two methods for obtaining the explicit form of the density operators of damped systems at arbitrary temperature T ≥ 0. To establish the potential of the suggested methods, we apply them to deduce the density operators of some well-known practical quantum systems. Using the superoperator techniques, we first obtain the density operator of a damped system consisting of a qubit interacting with a single-mode quantized field within an optical cavity. As the second system, we study the decoherence of a quantized field within a damped optical cavity. We also use our proposed matrix method to study the decoherence of a system of two qubits interacting with each other via a dipole-dipole interaction and, at the same time, with a quantized field in a lossy cavity. The influence of dissipation on the decoherence of the dynamical properties of these systems is also investigated numerically. Finally, the advantages of the proposed superoperator techniques over the matrix method are explained.
A method of automatic control of the process of compressing pyrogas in olefin production
Energy Technology Data Exchange (ETDEWEB)
Podval' niy, M.L.; Bobrovnikov, N.R.; Kotler, L.D.; Shib, L.M.; Tuchinskiy, M.R.
1982-01-01
In the known method of automatically controlling the compression of pyrogas in olefin production, the supply of cooling agents to the interstage coolers of the compression unit is regulated according to the flow of hydrocarbons into the unit. To raise performance by reducing the deposition of polymers on the flow-through surfaces of the equipment, the coolant supply is also regulated as a function of the hydrocarbon flows from the upper and lower parts of the demethanizer and from the bottoms of the stripping tower. Specifically, the coolant supply is regulated in proportion to the difference between the flow of stripping-tower bottoms and the ratio of the hydrocarbon flow from the upper and lower parts of the demethanizer to the hydrocarbon flow into the compression unit. When the proportion of light hydrocarbons (the sum of the upper and lower demethanizer products) in the total flow of pyrogas going to compression increases, the flow of coolant to the compression unit is reduced; condensation of these fractions in the separators, and their amount in the condensate routed to the stripping tower, is thereby reduced. When the proportion of light hydrocarbons in the pyrogas decreases, the flow of coolant is increased, improving condensation of heavy hydrocarbons in the separators and their removal from the compression unit in the bottoms of the stripping tower.
International Nuclear Information System (INIS)
Xing-Yuan, Wang; Na, Zhang
2010-01-01
Coupled map lattices are taken as examples to study the synchronisation of spatiotemporal chaotic systems. First, a generalised synchronisation of two coupled map lattices is realised through selecting an appropriate feedback function and appropriate range of feedback parameter. Based on this method we use the phase compression method to extend the range of the parameter. So, we integrate the feedback control method with the phase compression method to implement the generalised synchronisation and obtain an exact range of feedback parameter. This technique is simple to implement in practice. Numerical simulations show the effectiveness and the feasibility of the proposed program. (general)
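A minimal sketch of feedback-controlled synchronisation of two coupled map lattices, assuming a simple linear feedback function and an illustrative feedback strength k = 0.9 (the paper's exact feedback function and parameter range are not reproduced here):

```python
# Two diffusively coupled logistic map lattices; the response lattice is
# nudged toward the drive by linear feedback k*(x - y). For sufficiently
# strong k the synchronisation error decays, illustrating the feedback
# control idea (a sketch, not the paper's scheme).
import random

L, EPS, A = 20, 0.3, 4.0            # lattice size, coupling, logistic parameter

def f(u):
    return A * u * (1.0 - u)        # local logistic map on [0, 1]

def step(lat):
    """One coupled map lattice update with periodic boundaries."""
    return [(1 - EPS) * f(lat[i])
            + EPS / 2 * (f(lat[(i - 1) % L]) + f(lat[(i + 1) % L]))
            for i in range(L)]

random.seed(1)
x = [random.random() for _ in range(L)]   # drive lattice
y = [random.random() for _ in range(L)]   # response lattice
k = 0.9                                   # feedback strength (assumed value)
for _ in range(200):
    x_new = step(x)
    y_new = step(y)
    y = [yv + k * (xv - yv) for xv, yv in zip(x_new, y_new)]  # feedback
    x = x_new
err = max(abs(a - b) for a, b in zip(x, y))
print(err)  # synchronisation error after the transient
```

Because the per-step error contraction factor is roughly (1 - k) times the map's maximum stretching (here 0.1 × 4 = 0.4 < 1), the lattices synchronise despite each being chaotic on its own.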
Analysis of time integration methods for the compressible two-fluid model for pipe flow simulations
B. Sanderse (Benjamin); I. Eskerud Smith (Ivar); M.H.W. Hendrix (Maurice)
2017-01-01
In this paper we analyse different time integration methods for the two-fluid model and propose the BDF2 method as the preferred choice to simulate transient compressible multiphase flow in pipelines. Compared to the prevailing Backward Euler method, the BDF2 scheme has a significantly
A novel full-field experimental method to measure the local compressibility of gas diffusion media
Energy Technology Data Exchange (ETDEWEB)
Lai, Yeh-Hung; Li, Yongqiang [Electrochemical Energy Research Lab, GM R and D, Honeoye Falls, NY 14472 (United States); Rock, Jeffrey A. [GM Powertrain, Honeoye Falls, NY 14472 (United States)
2010-05-15
The gas diffusion medium (GDM) in a proton exchange membrane (PEM) fuel cell needs to simultaneously satisfy the requirements of transporting reactant gases, removing product water, conducting electrons and heat, and providing mechanical support to the membrane electrode assembly (MEA). Concerning the localized over-compression which may force carbon fibers and other conductive debris into the membrane to cause fuel cell failure by electronically shorting through the membrane, we have developed a novel full-field experimental method to measure the local thickness and compressibility of GDM. Applying a uniform air pressure upon a thin polyimide film bonded on the top surface of the GDM with support from the bottom by a flat metal substrate and measuring the thickness change using the 3-D digital image correlation technique with an out-of-plane displacement resolution less than 0.5 μm, we have determined the local thickness and compressive stress/strain behavior in the GDM. Using the local thickness and compressibility data over an area of 11.2 mm × 11.2 mm, we numerically construct the nominal compressive response of a commercial Toray™ TGP-H-060 based GDM subjected to compression by flat platens. Good agreement in the nominal stress/strain curves from the numerical construction and direct experimental flat-platen measurement confirms the validity of the methodology proposed in this article. The result shows that a nominal pressure of 1.4 MPa compressed between two flat platens can introduce localized compressive stress concentration of more than 3 MPa in up to 1% of the total area at various locations from several hundred micrometers to 1 mm in diameter. We believe that this full-field experimental method can be useful in GDM material and process development to reduce the local hard spots and help to mitigate the membrane shorting failure in PEM fuel cells. (author)
A method of vehicle license plate recognition based on PCANet and compressive sensing
Ye, Xianyi; Min, Feng
2018-03-01
Manual feature extraction in traditional vehicle license plate recognition methods is not robust to diverse variations, and the high feature dimension extracted by the Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from character images. Then, a sparse measurement matrix, a very sparse matrix satisfying the Restricted Isometry Property (RIP) condition of compressed sensing, is used to reduce the dimension of the extracted features. Finally, a Support Vector Machine (SVM) is trained to recognize the reduced features. Experimental results demonstrate that the proposed method outperforms a Convolutional Neural Network (CNN) in both recognition accuracy and speed. Compared with omitting the compressive sensing step, the proposed method has a lower feature dimension and hence higher efficiency.
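The dimension-reduction step can be sketched with an Achlioptas-style very sparse random matrix, one standard construction with the distance-preservation behaviour that compressed sensing relies on. PCANet feature extraction and the SVM classifier are omitted, and all dimensions below are illustrative assumptions:

```python
# Very sparse random projection: entries are +sqrt(s), -sqrt(s) each with
# probability 1/(2s) and 0 otherwise (s = 3). Scaled by 1/sqrt(m), the
# projection approximately preserves pairwise distances.
import math
import random

def sparse_projection_matrix(m, n, s=3, seed=0):
    rng = random.Random(seed)
    root_s = math.sqrt(s)
    def cell():
        r = rng.random()
        if r < 1 / (2 * s):
            return +root_s
        if r < 1 / s:
            return -root_s
        return 0.0
    return [[cell() for _ in range(n)] for _ in range(m)]

def project(mat, x):
    scale = math.sqrt(len(mat))
    return [sum(r * xv for r, xv in zip(row, x)) / scale for row in mat]

# Hypothetical 256-dim feature vectors reduced to 64 dims.
rng = random.Random(42)
a = [rng.gauss(0, 1) for _ in range(256)]
b = [rng.gauss(0, 1) for _ in range(256)]
P = sparse_projection_matrix(64, 256)
d_orig = math.dist(a, b)
d_proj = math.dist(project(P, a), project(P, b))
print(d_orig, d_proj)  # the two distances should be comparable
```

Because roughly 2/3 of the matrix entries are zero, the projection costs about a third of a dense random projection, which is the efficiency argument made in the abstract.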
Low complexity lossless compression of underwater sound recordings.
Johnson, Mark; Partan, Jim; Hurst, Tom
2013-03-01
Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., flac) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.
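A hedged miniature of this kind of low-complexity lossless codec (not the authors' exact algorithm): first-order prediction followed by Rice coding of the residuals, the same core used by codecs such as FLAC. Smooth signals yield small residuals and hence short codes, and decoding reverses both steps exactly.

```python
# Lossless compression sketch: predict each sample from the previous one,
# then Rice-code the residuals (unary quotient + k-bit remainder, with
# zigzag mapping for signed values).

def zigzag(v):
    return 2 * v if v >= 0 else -2 * v - 1

def unzigzag(u):
    return u // 2 if u % 2 == 0 else -(u + 1) // 2

def rice_encode(values, k):
    bits = []
    for v in values:
        u = zigzag(v)
        q, r = u >> k, u & ((1 << k) - 1)
        bits.extend([1] * q + [0])                              # unary quotient
        bits.extend((r >> i) & 1 for i in reversed(range(k)))   # remainder
    return bits

def rice_decode(bits, k, count):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:
            q += 1
            i += 1
        i += 1                      # skip the 0 that terminates the unary part
        r = 0
        for _ in range(k):
            r = (r << 1) | bits[i]
            i += 1
        out.append(unzigzag((q << k) | r))
    return out

samples = [100, 101, 103, 102, 104, 104, 103, 101]
# First-order prediction: first sample verbatim, then differences.
residuals = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
code = rice_encode(residuals, k=2)
decoded = rice_decode(code, k=2, count=len(residuals))
restored = []
acc = 0
for r in decoded:
    acc += r
    restored.append(acc)
print(restored == samples, len(code))  # exact recovery; fewer bits than 16/sample
```
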
Compression-RSA: New approach of encryption and decryption method
Hung, Chang Ee; Mandangan, Arif
2013-04-01
The Rivest-Shamir-Adleman (RSA) cryptosystem is a well-known asymmetric cryptosystem that has been applied in a very wide range of areas. Many studies with different approaches have been carried out to improve the security and performance of the RSA cryptosystem; enhancing its performance is our main interest here. In this paper, we propose a new method to increase the efficiency of RSA by shortening the plaintext before it undergoes encryption, without affecting its original content. The concept of simple continued fractions, and their special relationship with the Euclidean algorithm, is applied in this newly proposed method. By reducing the number of plaintext-ciphertext blocks, the encryption and decryption of a secret message can be accelerated.
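The relationship between the Euclidean algorithm and simple continued fractions that the method builds on can be sketched as follows; the RSA-specific plaintext-packing scheme itself is not reproduced:

```python
# The quotients produced while computing gcd(a, b) are exactly the partial
# quotients of the simple continued fraction of a/b, and the fraction can be
# reconstructed from them without loss.
from fractions import Fraction

def continued_fraction(a, b):
    """Partial quotients of a/b via the Euclidean algorithm."""
    quotients = []
    while b:
        quotients.append(a // b)
        a, b = b, a % b
    return quotients

def from_continued_fraction(quotients):
    value = Fraction(quotients[-1])
    for q in reversed(quotients[:-1]):
        value = q + 1 / value
    return value

cf = continued_fraction(649, 200)
print(cf, from_continued_fraction(cf))  # a short list encodes the fraction exactly
```
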
Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method
Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan
2018-04-01
Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can significantly enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise-suppression imaging technique will have important applications in remote sensing and security.
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences on large genomes. Significantly better compression results show that "DNABIT Compress" outperforms the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
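For context, a fixed-rate baseline of the bit-assignment idea is straightforward: packing each base into 2 bits. DNABIT Compress reportedly improves on this 2.0 bits/base bound (down to 1.58) by giving shorter codes to repeated fragments; that repeat-handling logic is not reproduced in this sketch.

```python
# Baseline 2-bits-per-base packing: A,C,G,T -> 00,01,10,11, four bases per
# byte. The sequence length must be stored alongside the packed bytes.
CODE = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
BASE = 'ACGT'

def pack(seq):
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        byte = 0
        for ch in group:
            byte = (byte << 2) | CODE[ch]
        byte <<= 2 * (4 - len(group))   # left-pad a short final group
        out.append(byte)
    return bytes(out)

def unpack(data, n):
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASE[(byte >> shift) & 3])
    return ''.join(seq[:n])

dna = "ACGTACGTTTGACA"
packed = pack(dna)
print(len(packed), unpack(packed, len(dna)) == dna)
```
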
CFOA-Based Lossless and Lossy Inductance Simulators
Directory of Open Access Journals (Sweden)
F. Kaçar
2011-09-01
The inductance simulator is a useful component in circuit synthesis theory, especially for analog signal processing applications such as filters, chaotic oscillator design, analog phase shifters and cancellation of parasitic elements. In this study, four new inductance simulator topologies employing a single current feedback operational amplifier (CFOA) are presented. The presented topologies require few passive components. The first topology is intended for negative inductance simulation, the second for lossy series inductance, the third for lossy parallel inductance, and the fourth for negative parallel (-R)(-L)(-C) simulation. The performance of the proposed CFOA-based inductance simulators is demonstrated on both a second-order low-pass filter and an inductance cancellation circuit. PSPICE simulations are given to verify the theoretical analysis.
Partially blind instantly decodable network codes for lossy feedback environment
Sorour, Sameh
2014-09-01
In this paper, we study the multicast completion and decoding delay minimization problems for instantly decodable network coding (IDNC) in the case of lossy feedback. When feedback loss events occur, the sender falls into uncertainties about packet reception at the different receivers, which forces it to perform partially blind selections of packet combinations in subsequent transmissions. To determine efficient selection policies that reduce the completion and decoding delays of IDNC in such an environment, we first extend the perfect feedback formulation in our previous works to the lossy feedback environment, by incorporating the uncertainties resulting from unheard feedback events in these formulations. For the completion delay problem, we use this formulation to identify the maximum likelihood state of the network in events of unheard feedback and employ it to design a partially blind graph update extension to the multicast IDNC algorithm in our earlier work. For the decoding delay problem, we derive an expression for the expected decoding delay increment for any arbitrary transmission. This expression is then used to find the optimal policy that reduces the decoding delay in such lossy feedback environment. Results show that our proposed solutions both outperform previously proposed approaches and achieve tolerable degradation even at relatively high feedback loss rates.
Retrofit device and method to improve humidity control of vapor compression cooling systems
Roth, Robert Paul; Hahn, David C.; Scaringe, Robert P.
2016-08-16
A method and device for improving moisture removal capacity of a vapor compression system is disclosed. The vapor compression system is started up with the evaporator blower initially set to a high speed. A relative humidity in a return air stream is measured with the evaporator blower operating at the high speed. If the measured humidity is above the predetermined high relative humidity value, the evaporator blower speed is reduced from the initially set high speed to the lowest possible speed. The device is a control board connected with the blower and uses a predetermined change in measured relative humidity to control the blower motor speed.
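The control rule described above reduces to a small piece of logic. In this sketch the speed values and the 60% humidity setpoint are assumptions, not taken from the patent:

```python
# Humidity-driven blower control sketch: start at high speed, and drop to the
# lowest speed when the measured return-air relative humidity exceeds a
# setpoint (slower airflow gives a colder coil and more condensate removal).
HIGH_SPEED, LOW_SPEED = 3, 1
RH_SETPOINT = 60.0   # percent relative humidity (assumed value)

def blower_speed(measured_rh, current_speed=HIGH_SPEED):
    """Return the blower speed to use given a return-air RH reading."""
    if measured_rh > RH_SETPOINT:
        return LOW_SPEED      # prioritize latent (moisture) removal
    return current_speed      # humidity acceptable: keep the present speed

print(blower_speed(72.5), blower_speed(45.0))
```
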
Fractal Image Compression Based on High Entropy Values Technique
Directory of Open Access Journals (Sweden)
Douaa Younis Abbaas
2018-04-01
Many attempts have been made to improve the time-consuming encoding stage of fractal image compression (FIC). These attempts reduce the size of the search pool for range-domain matching, but most of them degrade quality or lower the compression ratio of the reconstructed image. This paper presents a method to improve the performance of the full search algorithm by combining FIC (lossy compression) with a lossless technique (in this case, entropy coding). The entropy technique reduces the size of the domain pool (i.e., the number of domain blocks) based on the entropy values of the range and domain blocks. The full search algorithm and the proposed entropy-based algorithm are then compared to see which gives the better results, such as reduced encoding time with acceptable values of both compression quality parameters: CR (compression ratio) and PSNR (image quality). The experimental results show that the proposed entropy technique reduces the encoding time while keeping the compression ratio and reconstructed image quality as good as possible.
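The domain-pool pruning idea can be sketched as follows: compute the Shannon entropy of each candidate domain block and keep only those whose entropy is close to the range block's. The 0.5-bit tolerance is an assumption, and the block matching itself is omitted:

```python
# Entropy-based pruning of the fractal-compression domain pool: blocks whose
# pixel-histogram entropy differs too much from the range block's are unlikely
# matches and are dropped before the expensive search.
import math
from collections import Counter

def block_entropy(block):
    """Shannon entropy (bits) of the pixel-value histogram of a block."""
    counts = Counter(block)
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

range_block = [10, 10, 12, 12]          # a 2x2 block, flattened
domain_pool = [
    [10, 11, 12, 13],                   # four distinct values: high entropy
    [10, 10, 10, 10],                   # flat block: zero entropy
    [12, 12, 14, 14],                   # same two-value structure as the range
]
target = block_entropy(range_block)
pruned = [d for d in domain_pool if abs(block_entropy(d) - target) <= 0.5]
print(len(domain_pool), len(pruned))    # the pool shrinks before matching
```
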
International Nuclear Information System (INIS)
Kochemasov, G.G.
1992-01-01
Studies on the problem of laser fusion, which is mainly based on experiments conducted in the Iskra-4 device are reviewed. Different approaches to solution of the problem of DT-fuel ignition, methods of diagnostics of characteristics of laser radiation and plasma, occurring on microtarget heating and compression, are considered
The Effects of Different Curing Methods on the Compressive Strength of Terracrete
Directory of Open Access Journals (Sweden)
O. Alake
2009-01-01
This research evaluated the effects of different curing methods on the compressive strength of terracrete. Several tests, including sieve analysis, were carried out on the constituents of terracrete (granite and laterite) to determine their particle size distribution, along with performance criteria tests to determine the compressive strength of terracrete cubes over 7 to 35 days of curing. Sand, foam-soaked, tank and open methods of curing were used, and the study was carried out under controlled temperature. Sixty 100 × 100 × 100 mm cubes were cast using a mix ratio of 1 part cement, 1½ parts laterite and 3 parts coarse aggregate (granite), proportioned by weight, with a water-cement ratio of 0.62. The compressive strength results showed that, of the four curing methods, the open method was the best, because those cubes gained the highest average compressive strength of 10.3 N/mm² by the 35th day.
Li, Q; He, Y L; Wang, Y; Tao, W Q
2007-11-01
A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.
Yuan, B.; Yu, Q.L.; Brouwers, H.J.H.
2015-01-01
This study investigates the reaction kinetics, the reaction products and the compressive strength of slag activated by ternary activators, namely waterglass, sodium hydroxide and sodium carbonate. Nine mixtures are designed by the Taguchi method considering the factors of sodium carbonate content
Semi-implicit method for three-dimensional compressible MHD simulation
International Nuclear Information System (INIS)
Harned, D.S.; Kerner, W.
1984-03-01
A semi-implicit method for solving the full compressible MHD equations in three dimensions is presented. The method is unconditionally stable with respect to the fast compressional modes. The time step is instead limited by the slower shear Alfven motion. The computing time required for one time step is essentially the same as for explicit methods. Linear stability limits are derived and verified by three-dimensional tests on linear waves in slab geometry. (orig.)
About a method for compressing x-ray computed microtomography data
Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš
2018-04-01
The management of scientific data is of high importance, especially for experimental techniques that produce big data volumes. Such a technique is x-ray computed tomography (CT), and its community has introduced advanced data formats which allow for better management of experimental data. Rather than the organization of the data and the associated metadata, the main topic of this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy); images acquired from various types of samples are studied. This study covers parallel-beam geometry, but it could easily be extended to cone-beam geometry. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques and does so by applying it to experimental data. Beyond the methodological framework, this study presents and examines the use of JPEG-XR in combination with the HDF5 and TIFF formats, providing insights and strategies on data compression and image quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears to be a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.
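Since JPEG-XR codecs are not generally available in scripting standard libraries, the measurement the study performs can be illustrated with zlib (lossless DEFLATE) as a stand-in on a synthetic 16-bit "projection"; the data below loosely mimics a smooth tomographic projection with mild noise and is not from the Elettra beamline:

```python
# Compress raw 16-bit projection data losslessly and report the size ratio,
# the basic measurement behind any projection-compression study. zlib is a
# stand-in here; the paper itself evaluates JPEG-XR.
import math
import random
import struct
import zlib

random.seed(0)
W = H = 64
pixels = []
for y in range(H):
    for x in range(W):
        value = 20000 + int(8000 * math.sin(x / 9.0) * math.cos(y / 11.0))
        pixels.append(value + random.randint(-5, 5))   # mild detector noise

raw = struct.pack(f"<{W * H}H", *pixels)               # uint16, little-endian
compressed = zlib.compress(raw, level=9)
ratio = len(raw) / len(compressed)
print(len(raw), len(compressed), round(ratio, 2))
```

Lossless round-tripping (`zlib.decompress(compressed) == raw`) is what distinguishes this mode from the lossy settings a format like JPEG-XR also offers.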
Branderhorst, Woutjan; de Groot, Jerry E.; van Lier, Monique G. J. T. B.; Highnam, Ralph P.; den Heeten, Gerard J.; Grimbergen, Cornelis A.
2017-01-01
Purpose: To assess the accuracy of two methods of determining the contact area between the compression paddle and the breast in mammography. An accurate method to determine the contact area is essential to accurately calculate the average compression pressure applied by the paddle. Methods: For a
Hasar, U C
2009-05-01
A microcontroller-based noncontact and nondestructive microwave free-space measurement system for real-time and dynamic determination of the complex permittivity of lossy liquid materials has been proposed. The system comprises two main sections, microwave and electronic. While the microwave section measures only the amplitudes of reflection coefficients, the electronic section processes these data and determines the complex permittivity using a general-purpose microcontroller. The proposed method eliminates elaborate liquid sample holder preparation and only requires microwave components to perform reflection measurements from one side of the holder. In addition, it explicitly determines the permittivity of lossy liquid samples from reflection measurements at different frequencies without any knowledge of the sample thickness. In order to reduce systematic errors in the system, we propose a simple calibration technique which employs simple and readily available standards. The measurement system can be a good candidate for industrial applications.
An ROI multi-resolution compression method for 3D-HEVC
Ti, Chunli; Guan, Yudong; Xu, Guodong; Teng, Yidan; Miao, Xinyuan
2017-09-01
3D High Efficiency Video Coding (3D-HEVC) offers significant potential for increasing the compression ratio of multi-view RGB-D videos. However, the bit rate still rises dramatically as video resolution improves, which challenges the transmission network, especially mobile networks. This paper proposes an ROI multi-resolution compression method for 3D-HEVC to better preserve the information in the region of interest (ROI) under limited bandwidth. This is realized primarily through ROI extraction and compression of multi-resolution preprocessed video as alternative data according to the network conditions. First, semantic contours are detected by modified structured forests to suppress the color textures inside objects. The ROI is then determined from the contour neighborhood together with the face region and foreground area of the scene. Second, the RGB-D videos are divided into slices and compressed via 3D-HEVC at different resolutions, for selection by audiences and applications. Afterwards, the reconstructed low-resolution videos from the 3D-HEVC encoder are up-sampled via Laplace transformation and used to replace the non-ROI areas of the high-resolution videos. Finally, the ROI multi-resolution compressed slices are obtained by compressing the ROI-preprocessed videos with 3D-HEVC. The temporal and spatial details of non-ROI areas are reduced in the low-resolution videos, so the ROI is better preserved by the encoder automatically. Experiments indicate that the proposed method can keep the key high-frequency information with subjective significance while reducing the bit rate.
A novel method for estimating soil precompression stress from uniaxial confined compression tests
DEFF Research Database (Denmark)
Lamandé, Mathieu; Schjønning, Per; Labouriau, Rodrigo
2017-01-01
Stress-strain curves were obtained by performing uniaxial, confined compression tests on undisturbed soil cores for three soil types at three soil water potentials. The new method performed better than the Gompertz fitting method in estimating precompression stress. The values of precompression stress...... obtained from the new method were linearly related to the maximum stress experienced by the soil samples prior to the uniaxial, confined compression test at each soil condition with a slope close to 1. Precompression stress determined with the new method was not related to soil type or dry bulk density......The concept of precompression stress is used for estimating soil strength of relevance to field traffic. It represents the maximum stress experienced by the soil. The most recently developed fitting method to estimate precompression stress (Gompertz) is based on the assumption of an S-shape stress...
Jinghai, Zhou; Tianbei, Kang; Fengchi, Wang; Xindong, Wang
2017-11-01
Eight frame joints with fewer stirrups in the core area are simulated with the ABAQUS finite-element software. The composite strengthening method combines carbon fiber with an enlarged column section; the axial compression ratios of the strengthened specimens are 0.3, 0.45 and 0.6. Analysis of the load-displacement curves, ductility and stiffness shows that the axial compression ratio has a great influence on the bearing capacity of the enlarged-column-section strengthening method, but little influence on the carbon-fiber method. All strengthening schemes improve the ultimate bearing capacity and ductility of the frame joints to some extent: the composite strengthening method gives the most significant improvement, followed by the enlarged column section, with carbon-fiber strengthening of the joints giving the smallest.
A blended pressure/density based method for the computation of incompressible and compressible flows
International Nuclear Information System (INIS)
Rossow, C.-C.
2003-01-01
An alternative method to low speed preconditioning for the computation of nearly incompressible flows with compressible methods is developed. For this approach the leading terms of the flux difference splitting (FDS) approximate Riemann solver are analyzed in the incompressible limit. In combination with the requirement of the velocity field to be divergence-free, an elliptic equation to solve for a pressure correction to enforce the divergence-free velocity field on the discrete level is derived. The pressure correction equation established is shown to be equivalent to classical methods for incompressible flows. In order to allow the computation of flows at all speeds, a blending technique for the transition from the incompressible, pressure based formulation to the compressible, density based formulation is established. It is found necessary to use preconditioning with this blending technique to account for a remaining 'compressible' contribution in the incompressible limit, and a suitable matrix directly applicable to conservative residuals is derived. Thus, a coherent framework is established to cover the discretization of both incompressible and compressible flows. Compared with standard preconditioning techniques, the blended pressure/density based approach showed improved robustness for high lift flows close to separation
Development of the town Viljandi in light of the studies at Lossi street / Eero Heinloo
Heinloo, Eero
2015-01-01
Excavations showed that the earliest deposit indicating permanent settlement extended from the intersection of Lossi and Kauba streets to the area between the buildings at Lossi 3 and 4. At present, the beginning of permanent settlement cannot be dated earlier than the mid-13th century. The formation of Lossi street can be dated to the 1270s or 1280s. By the second quarter of the 14th century at the latest, the street's wooden pavement had been replaced with cobblestone pavement.
Fernández Pantoja, M.; Yarovoy, A. G.; Rubio Bretones, A.; González García, S.
2009-12-01
This paper presents a procedure to extend the method of moments in the time domain for the transient analysis of thin-wire antennas to those cases where the antennas are located over a lossy half-space. The extended technique is based on the reflection coefficient (RC) approach, which approximates the fields incident on the ground interface as plane waves and calculates the time-domain RC using the inverse Fourier transform of the Fresnel equations. The implementation presented in this paper uses general expressions for the RC which extend its range of applicability to lossy grounds, and is proven to be accurate and fast for antennas located not too close to the ground. The resulting general-purpose procedure, able to treat arbitrarily oriented thin-wire antennas, is appropriate for all kinds of half-spaces, including lossy cases, and it has turned out to be as computationally fast when solving the problem of an arbitrary ground as when dealing with a perfect electric conductor ground plane. Results show a numerical validation of the method for different half-spaces, paying special attention to the influence of the antenna-to-ground distance on the accuracy of the results.
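The RC approach can be illustrated with a minimal sketch: sample a TE Fresnel reflection coefficient of a lossy half-space (complex-permittivity model) on a frequency grid and transform it to the time domain with an inverse FFT. The incidence angle, ground parameters, and frequency grid are assumptions; the DC sample is simply omitted.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def fresnel_te(omega, theta, eps_r, sigma):
    """Frequency-domain TE Fresnel reflection coefficient of a lossy
    half-space, using the complex relative permittivity eps_r - j*sigma/(w*eps0)."""
    eps_c = eps_r - 1j * sigma / (omega * EPS0)
    root = np.sqrt(eps_c - np.sin(theta) ** 2)
    return (np.cos(theta) - root) / (np.cos(theta) + root)

# sample the coefficient over a band and transform to the time domain
f = np.linspace(1e6, 1e9, 1024)                 # grid avoids omega = 0
gamma = fresnel_te(2 * np.pi * f, np.deg2rad(30.0), 10.0, 0.01)
rc_t = np.fft.irfft(gamma)                      # time-domain reflection coefficient
```

For a passive lossy ground the magnitude of the coefficient never exceeds one, and the inverse real FFT yields a real time-domain kernel that can be convolved with the incident field.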
Numerical simulation of compressible two-phase flow using a diffuse interface method
International Nuclear Information System (INIS)
Ansari, M.R.; Daramizadeh, A.
2013-01-01
Highlights: ► Compressible two-phase gas–gas and gas–liquid flow simulations are conducted. ► Interface conditions contain shock waves and cavitation. ► A high-resolution diffuse interface method is investigated. ► The numerical results exhibit very good agreement with experimental results. -- Abstract: In this article, a high-resolution diffuse interface method is investigated for the simulation of compressible two-phase gas–gas and gas–liquid flows, both in the presence of shock waves and in flows with strong rarefaction waves similar to cavitation. A Godunov method and HLLC Riemann solver are used for the discretization of the Kapila five-equation model, and a modified Schmidt equation of state (EOS) is used to simulate the cavitation regions. This method is applied successfully to several one- and two-dimensional compressible two-phase flows with interface conditions that contain shock waves and cavitation. The numerical results exhibit very good agreement with experimental results, as well as with previous numerical results presented by other researchers based on other numerical methods. In particular, the algorithm can capture the complex flow features of transient shocks, such as material discontinuities and interfacial instabilities, without any oscillation or additional diffusion. Numerical examples show that the results of the method presented here compare well with those of other sophisticated modeling methods, such as adaptive mesh refinement (AMR) and local mesh refinement (LMR), for one- and two-dimensional problems.
Applicability of finite element method to collapse analysis of steel connection under compression
International Nuclear Information System (INIS)
Zhou, Zhiguang; Nishida, Akemi; Kuwamura, Hitoshi
2010-01-01
It is often necessary to study the collapse behavior of steel connections. In this study, the limit load of a steel pyramid-to-tube socket connection subjected to uniform compression was investigated by means of FEM and experiment. The steel connection was modeled using 4-node shell elements. Three kinds of analysis were conducted: linear buckling, nonlinear buckling and modified Riks method analysis. For the linear buckling analysis, a linear eigenvalue analysis was performed. For the nonlinear buckling analysis, the eigenvalue analysis was performed for the buckling load in a nonlinear manner based on the incremental stiffness matrices, with nonlinear material properties and large displacements taken into account. For the modified Riks method analysis, the compressive load was applied using the modified Riks method, again considering nonlinear material properties and large displacements. The results of the FEM analyses were compared with the experimental results. The comparison shows that the nonlinear buckling and modified Riks method analyses are more accurate than linear buckling analysis because they employ nonlinear, large-deflection analysis to estimate buckling loads. Moreover, the limit loads calculated from the nonlinear buckling and modified Riks method analyses are close to each other. It can be concluded that the modified Riks method analysis is more effective for collapse analysis of steel connections under compression. Finally, the modified Riks method analysis is used for parametric studies of the thickness of the pyramid. (author)
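The linear-buckling eigenvalue step can be illustrated on a much simpler structure: the sketch below solves the generalized eigenproblem K·φ = P·Kg·φ for a pinned-pinned Euler column by finite differences. This is a one-dimensional toy stand-in for the shell-element analysis, not the paper's model.

```python
import numpy as np

def euler_buckling_load(EI=1.0, L=1.0, n=200):
    """Smallest buckling load of a pinned-pinned column from the
    generalized eigenproblem K*phi = P*Kg*phi (finite differences).
    The analytic Euler load is pi^2 * EI / L^2."""
    h = L / (n + 1)
    d2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2   # second-difference matrix
    K = EI * (d2 @ d2)    # bending stiffness, EI * d^4/dx^4
    Kg = -d2              # geometric stiffness, -d^2/dx^2 (positive definite)
    lam = np.linalg.eigvals(np.linalg.solve(Kg, K))
    return float(np.min(lam.real))

P_cr = euler_buckling_load()   # should approach pi**2 for EI = L = 1
```

The smallest eigenvalue is the critical load factor; a shell model replaces the toy matrices with the assembled elastic and geometric stiffness matrices but the eigenproblem has the same shape.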
Edge-Based Image Compression with Homogeneous Diffusion
Mainberger, Markus; Weickert, Joachim
It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
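The homogeneous-diffusion decoding step described above amounts to solving the Laplace equation with the stored edge values held fixed. A minimal Jacobi-iteration sketch on a toy grid (not the paper's solver) looks like this:

```python
import numpy as np

def laplace_inpaint(known, mask, iters=2000):
    """Fill the unknown pixels (mask == False) with the steady state of
    homogeneous diffusion, i.e. the solution of the Laplace equation
    with the known pixels held fixed (simple Jacobi iteration)."""
    u = known.astype(float).copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[~mask] = avg[~mask]   # only unknown pixels are updated
    return u

# toy example: left column fixed at 0, right column fixed at 100;
# the interior relaxes to a linear ramp between them
img = np.zeros((8, 8))
img[:, -1] = 100.0
fixed = np.zeros((8, 8), dtype=bool)
fixed[:, 0] = fixed[:, -1] = True
rec = laplace_inpaint(img, fixed)
```

In the actual codec the fixed pixels are the quantised grey or colour values on both sides of each stored edge, and everything between edges is recovered by exactly this kind of relaxation.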
Li, Kewei; Ogden, Ray W; Holzapfel, Gerhard A
2018-01-01
Recently, micro-sphere-based methods derived from the angular integration approach have been used for excluding fibres under compression in the modelling of soft biological tissues. However, recent studies have revealed that many of the widely used numerical integration schemes over the unit sphere are inaccurate for large deformation problems even without excluding fibres under compression. Thus, in this study, we propose a discrete fibre dispersion model based on a systematic method for discretizing a unit hemisphere into a finite number of elementary areas, such as spherical triangles. Over each elementary area, we define a representative fibre direction and a discrete fibre density. Then, the strain energy of all the fibres distributed over each elementary area is approximated based on the deformation of the representative fibre direction weighted by the corresponding discrete fibre density. A summation of fibre contributions over all elementary areas then yields the resultant fibre strain energy. This treatment allows us to exclude fibres under compression in a discrete manner by evaluating the tension-compression status of the representative fibre directions only. We have implemented this model in a finite-element programme and illustrate it with three representative examples, including simple tension and simple shear of a unit cube, and non-homogeneous uniaxial extension of a rectangular strip. The results of all three examples are consistent and accurate compared with the previously developed continuous fibre dispersion model, and that is achieved with a substantial reduction of computational cost. © 2018 The Author(s).
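The compression-exclusion rule reduces to checking the squared stretch I4 = m·Cm of each representative fibre direction. A minimal numpy sketch with an assumed quadratic fibre law (the stiffness k and the toy two-direction layout are illustrative, not the paper's dispersion data):

```python
import numpy as np

def fibre_energy(F, dirs, rho, k=1.0):
    """Sum the strain energy of discrete fibre bundles, skipping any
    representative direction under compression (I4 <= 1).
    psi_f = k/2 * (I4 - 1)^2 is an assumed toy fibre law."""
    C = F.T @ F                                      # right Cauchy-Green tensor
    I4 = np.einsum('ni,ij,nj->n', dirs, C, dirs)     # squared fibre stretches
    tension = I4 > 1.0                               # exclude compressed fibres
    return float(np.sum(rho[tension] * 0.5 * k * (I4[tension] - 1.0) ** 2))

# isochoric simple tension along x: x-fibres load, y-fibres are excluded
F = np.diag([1.2, 1.0 / np.sqrt(1.2), 1.0 / np.sqrt(1.2)])
dirs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # representative directions
rho = np.array([0.5, 0.5])                           # discrete fibre densities
W = fibre_energy(F, dirs, rho)
```

Because only the representative directions are tested, the tension-compression switch is evaluated a fixed, small number of times per integration point, which is where the reported cost reduction comes from.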
An Improved Ghost-cell Immersed Boundary Method for Compressible Inviscid Flow Simulations
Chi, Cheng
2015-05-01
This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to the commonly used approaches, in the present work ghost cells are mirrored through the boundary described using a level-set method to farther image points, incorporating a higher-order extra/interpolation scheme for the ghost cell values. In addition, a shock sensor is introduced to deal with image points near the discontinuities in the flow field. Adaptive mesh refinement (AMR) is used to improve the representation of the geometry efficiently. The improved ghost-cell method is validated against five test cases: (a) double Mach reflections on a ramp, (b) supersonic flows in a wind tunnel with a forward-facing step, (c) supersonic flows over a circular cylinder, (d) smooth Prandtl-Meyer expansion flows, and (e) steady shock-induced combustion over a wedge. It is demonstrated that the improved ghost-cell method can reach the accuracy of second order in L1 norm and higher than first order in L∞ norm. Direct comparisons against the cut-cell method demonstrate that the improved ghost-cell method is almost equally accurate with better efficiency for boundary representation in high-fidelity compressible flow simulations. Implementation of the improved ghost-cell method in reacting Euler flows further validates its general applicability for compressible flow simulations.
Effects on MR images compression in tissue classification quality
International Nuclear Information System (INIS)
Santalla, H; Meschino, G; Ballarin, V
2007-01-01
It is known that image compression is required to optimize storage; moreover, transmission speed can be significantly improved. Lossless compression is used without controversy in medicine, though its benefits are limited. If we compress images lossily, the image cannot be totally recovered; we can only recover an approximation. At this point the definition of 'quality' is essential. What do we understand by 'quality'? How can we evaluate a compressed image? Quality in images is an attribute with several definitions and interpretations, which ultimately depend on the posterior use we want to give them. This work proposes a quantitative analysis of quality for lossy compressed Magnetic Resonance (MR) images, and of its influence on automatic tissue classification performed with these images.
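As one example of a quantitative quality criterion of the kind discussed, the peak signal-to-noise ratio can be computed directly from the compression error; it is a common, if imperfect, stand-in for perceived quality.

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between an original image and
    its lossy-compressed reconstruction."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# toy example: a uniform error of 5 grey levels over the whole image
a = np.full((16, 16), 100.0)
b = a + 5.0
q = psnr(a, b)   # 10*log10(255^2 / 25), about 34.15 dB
```

Task-oriented metrics such as segmentation similarity complement this kind of pixel-wise measure, since two images with the same PSNR can differ greatly in classification outcome.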
Application of content-based image compression to telepathology
Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace
2002-05-01
Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
Analysis of tractable distortion metrics for EEG compression applications
International Nuclear Information System (INIS)
Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando; Cárdenas-Barrera, Julián
2012-01-01
Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio. (paper)
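The two criteria compared above are easy to state in code; note how the RMSE stays in signal units (microvolts) while the PRD is relative to signal energy, so the same absolute coding error yields a different PRD for different signal amplitudes.

```python
import numpy as np

def prd(x, x_rec):
    """Percentage root-mean-square difference: relative, unitless."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def rmse(x, x_rec):
    """Root-mean-square error: absolute distortion in signal units (uV)."""
    return np.sqrt(np.mean((x - x_rec) ** 2))

t = np.linspace(0.0, 1.0, 500)
eeg = 50.0 * np.sin(2 * np.pi * 10 * t)   # toy 10 Hz, 50 uV signal
rec = eeg + 2.0                            # constant 2 uV coding error
r = rmse(eeg, rec)   # ~2.0 uV regardless of signal amplitude
p = prd(eeg, rec)    # relative: halves if the signal amplitude doubles
```

This is exactly the paper's point: a clinician can compare 2 uV against an allowable-noise guideline directly, whereas the PRD value alone says nothing about clinical acceptability.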
International Nuclear Information System (INIS)
Shimoda, Chiaki; Matsuyama, Kanae; Okabe, Hirofumi; Kaneko, Masaaki; Miyamoto, Shinya
2017-01-01
Geopolymer solidification is a good method for managing waste because it is inexpensive compared with vitrification and has a reduced risk of hydrogen generation. In general, when geopolymers are made, water is added to the geopolymer raw materials, and the slurry is then mixed, poured into a mold, and cured. However, it is difficult to control the reaction because, depending on the types of materials, the viscosity can increase immediately after mixing. Geopolymer slurries easily attach to the agitating blade of the mixer and easily clog the plumbing during transportation. Moreover, during long-term storage of solidified wastes containing concentrated radionuclides in a sealed container without vents, the hydrogen concentration in the container increases over time. Therefore, a simple method using as little water as possible is needed. In this work, geopolymer solidification by compression molding was studied. Compared with the usual methods, it provides a simple and stable way of preparing waste for long-term storage. Investigations performed before and after solidification by compression molding showed that the crystal structure changed; from this result, it was concluded that the geopolymer reaction proceeded during compression molding. This method (1) reduces the energy needed for drying, (2) has good workability, (3) reduces the overall volume, and (4) reduces hydrogen generation. (author)
Cloud solution for histopathological image analysis using region of interest based compression.
Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana
2017-07-01
Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole-slide image contains many multi-resolution images stored in a pyramidal structure, with the highest-resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest-resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge, and compression is a very useful and effective technique to reduce their size. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression to this region and lossy compression to the empty regions is proposed in this paper. The resulting compression ratio, together with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the cloud.
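The proposed split, lossless coding of the tissue region and lossy coding elsewhere, can be sketched with a binary mask. Here zlib and a coarse quantizer are stand-ins for whichever lossless and lossy coders the pipeline actually uses; the image and mask are synthetic.

```python
import zlib
import numpy as np

def hybrid_compress(img, tissue_mask, coarse_step=16):
    """Losslessly deflate the tissue pixels; keep only a coarsely
    quantized (lossy) version of the empty background region."""
    tissue = img[tissue_mask].tobytes()                       # exact bytes
    background = (img[~tissue_mask] // coarse_step).tobytes()  # quantized
    return zlib.compress(tissue, 9), zlib.compress(background, 9)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True          # assumed tissue region
t_blob, b_blob = hybrid_compress(img, mask)
```

The diagnostic region round-trips bit-exactly, while the background carries only the few bits per pixel the quantizer leaves behind.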
Methods for determining the carrying capacity of eccentrically compressed concrete elements
Directory of Open Access Journals (Sweden)
Starishko Ivan Nikolaevich
2014-04-01
The author presents the results of calculations of eccentrically compressed elements in the ultimate limit state of bearing capacity, taking into account all possible stresses in the longitudinal reinforcement from the R to the R, caused by different values of the eccentricity of the longitudinal force. The method of calculation is based on the simultaneous solution of the equilibrium equations of the longitudinal and internal forces together with the equilibrium equations of bending moments in the ultimate limit state of the normal sections. Simultaneous solution of these equations, along with additional equations reflecting the stress-strain limit state of the elements, leads to a cubic equation with respect to the height of the uncracked concrete, or with respect to the carrying capacity. According to the author, this is a significant advantage over the existing methods, in which the equilibrium equations of longitudinal forces yield one value of the height and the equilibrium equations of bending moments another. The author's theoretical studies and the worked examples in this article show that, as the eccentricity of the longitudinal force in the limiting state of eccentrically compressed concrete elements decreases, the height of the uncracked concrete increases, the stress in the longitudinal reinforcement of the tension zone gradually (not abruptly) passes from tension to compression, and the load-bearing capacity of the elements increases, which is also confirmed by the experimental results. The developed calculations of eccentrically compressed elements cover 4 cases of eccentric compression, instead of the 2 set out in the regulations, and thus span the entire spectrum of possible stress-strain limit states of the elements, in compliance with the European standards for reinforced concrete, in particular Eurocode 2 (2003).
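The cubic equation mentioned above can be solved numerically for the physically admissible root. The coefficients below are illustrative placeholders, not the equilibrium-derived ones from the paper.

```python
import numpy as np

def uncracked_height(a3, a2, a1, a0):
    """Solve a3*x^3 + a2*x^2 + a1*x + a0 = 0 and return the smallest
    real positive root, the candidate for the physically admissible
    height of the uncracked concrete zone (illustrative only)."""
    roots = np.roots([a3, a2, a1, a0])
    real = roots[np.abs(roots.imag) < 1e-9].real
    positive = real[real > 0]
    return float(positive.min())

# placeholder cubic with roots 1, 2 and 3: x^3 - 6x^2 + 11x - 6 = 0
x = uncracked_height(1.0, -6.0, 11.0, -6.0)
```

In practice the admissible root must additionally satisfy the section geometry bounds (0 < x < section depth), which the equilibrium equations enforce.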
Directory of Open Access Journals (Sweden)
Ling Yongfa
2016-01-01
The paper proposes a mobile sink node data collection method for wireless sensor networks based on compressive sensing. The method, following a regular track, selects the optimal data collection points in the monitoring area via the disc method, calculates the shortest path using a quantum genetic algorithm, and hence determines the data collection route. Simulation results show that this method achieves higher network throughput and better energy efficiency, and is capable of collecting a large amount of data with balanced energy consumption in the network.
Development of Compressive Failure Strength for Composite Laminate Using Regression Analysis Method
Energy Technology Data Exchange (ETDEWEB)
Lee, Myoung Keon [Agency for Defense Development, Daejeon (Korea, Republic of); Lee, Jeong Won; Yoon, Dong Hyun; Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)
2016-10-15
This paper provides compressive failure strength values for composite laminates developed by using a regression analysis method. The composite material in this document is a Carbon/Epoxy unidirectional (UD) tape prepreg (Cycom G40-800/5276-1) cured at 350°F (177°C). The operating temperature range is −60°F to +200°F (−55°C to +95°C). A total of 56 compression tests were conducted on specimens from eight (8) distinct laminates that were laid up with standard angle layers (0°, +45°, −45° and 90°). The ASTM D 6484 standard was used as the test method. The regression analysis was performed with the response variable being the laminate ultimate fracture strength and the regressor variables being the two ply orientations (0° and ±45°).
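The regression step can be sketched as an ordinary least-squares fit of strength against the two ply-orientation regressors. The layup percentages and strengths below are synthetic numbers for illustration, not the paper's test data.

```python
import numpy as np

# assumed percentages of 0-deg and +/-45-deg plies for 8 layups
p0 = np.array([50, 40, 30, 25, 20, 10, 60, 35], dtype=float)
p45 = np.array([40, 40, 50, 50, 60, 80, 20, 40], dtype=float)

# synthetic response (MPa) generated from a known linear model so the
# fit can be checked: strength = 200 + 4*p0 + 1.5*p45
strength = 200.0 + 4.0 * p0 + 1.5 * p45

# least-squares fit: strength ~ b0 + b1*p0 + b2*p45
X = np.column_stack([np.ones_like(p0), p0, p45])
beta, *_ = np.linalg.lstsq(X, strength, rcond=None)
```

With real test data the residuals would not vanish, and the fitted coefficients would come with confidence intervals; the design matrix, however, has exactly this shape.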
Key technical issues associated with a method of pulse compression. Final technical report
International Nuclear Information System (INIS)
Hunter, R.O. Jr.
1980-06-01
Key technical issues for angular multiplexing as a method of pulse compression in a 100 kJ KrF laser have been studied. Environmental issues studied include seismic vibrations, man-made vibrations, air propagation, turbulence, and thermal gradient-induced density fluctuations. These studies have been incorporated in the design of mirror mounts and an alignment system, both of which are reported. A design study and performance analysis of the final amplifier have been undertaken. The pulse compression optical train has been designed and its performance assessed. Individual components are described, and analytical relationships between optical component size, surface quality, damage threshold and final focus properties are derived. The optical train's primary aberrations are obtained and a method for aberration minimization is presented. Cost algorithms for the mirrors, mounts, and electrical hardware are integrated into a cost model to determine system costs as a function of pulse length, aperture size, and spot size.
International Nuclear Information System (INIS)
Costa, Gustavo Koury
2004-11-01
Although incompressible fluid flows can be regarded as a particular case of a more general problem, the numerical methods and mathematical formulations aimed at solving compressible and incompressible flows have their own peculiarities, such that it is generally not possible to cover both regimes with a single approach. In this work, we start from a typically compressible formulation, slightly modified to make use of pressure variables, and, by augmenting the stabilising parameters, we arrive at a simplified model able to deal with a wide range of flow regimes, from supersonic to low-speed gas flows. The resulting methodology is flexible enough to allow the simulation of liquid flows as well. Examples using conservative and pressure variables are shown, and the results are compared to those published in the literature in order to validate the method. (author)
High-altitude electromagnetic pulse environment over the lossy ground
International Nuclear Information System (INIS)
Xie Yanzhao; Wang Zanji
2003-01-01
The electromagnetic field above ground produced by an incident high-altitude electromagnetic pulse plane wave striking the ground plane is described in this paper in terms of the Fresnel reflection coefficients and a numerical FFT. The pulse reflected from the ground plane always cancels part of the incident field for the horizontal field component, whereas the reflected field adds to the incident field for the vertical component. Results for several cases of varying observation height, angle of incidence and lossy-ground electrical parameters are also presented, showing the different e-field components above the earth.
Theory and Circuit Model for Lossy Coaxial Transmission Line
Energy Technology Data Exchange (ETDEWEB)
Genoni, T. C.; Anderson, C. N.; Clark, R. E.; Gansz-Torres, J.; Rose, D. V.; Welch, Dale Robert
2017-04-01
The theory of signal propagation in lossy coaxial transmission lines is revisited, and new approximate analytic formulas for the line impedance and attenuation are derived. The accuracy of these formulas from DC to 100 GHz is demonstrated by comparison to numerical solutions of the exact field equations. Based on this analysis, a new circuit model is described which accurately reproduces the line response over the entire frequency range. Circuit model calculations are in excellent agreement with the numerical and analytic results, and with finite-difference time-domain simulations which resolve the skin depths of the conducting walls.
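For orientation, the classical high-frequency skin-effect formulas (which the paper's refined expressions improve upon) give the familiar attenuation estimate alpha = R/(2*Z0). The geometry and conductivity below are generic assumed values for a 50-ohm copper line.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def coax_attenuation(f, a, b, sigma, Z0):
    """Textbook high-frequency conductor-loss attenuation of a coax:
    per-length resistance from the skin depth, alpha = R / (2*Z0).
    This is the classical approximation, not the paper's refined model."""
    delta = 1.0 / np.sqrt(np.pi * f * MU0 * sigma)   # skin depth, m
    Rs = 1.0 / (sigma * delta)                       # surface resistance, ohm
    R = Rs / (2 * np.pi) * (1.0 / a + 1.0 / b)       # ohm per metre
    return R / (2.0 * Z0)                            # neper per metre

# assumed copper line: inner radius 0.5 mm, outer 1.75 mm, 50 ohm
alpha = coax_attenuation(1e9, 0.5e-3, 1.75e-3, 5.8e7, 50.0)
```

Because the resistance grows with the square root of frequency, quadrupling the frequency exactly doubles this estimate; deviations from that scaling at low frequency are one of the effects the exact field solution captures.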
Modelling the acoustical response of lossy lamella-crystals
DEFF Research Database (Denmark)
Christensen, Johan; Mortensen, N. Asger; Willatzen, Morten
2014-01-01
The sound propagation properties of lossy lamella-crystals are analysed theoretically utilizing a rigorous wave expansion formalism and an effective medium approach. We investigate both supported and free-standing crystal slab structures and predict high absorption for a broad range of frequencies. A detailed derivation of the formalism is presented, and we show how the results obtained in the subwavelength and superwavelength regimes can qualitatively be reproduced by homogenizing the lamella-crystals. We come to the conclusion that treating this structure within the metamaterial limit only makes sense if the crystal filling fraction is sufficiently large to satisfy an effective medium approach.
Electrical properties of spherical dipole antennas with lossy material cores
DEFF Research Database (Denmark)
Hansen, Troels Vejle; Kim, Oleksiy S.; Breinbjerg, Olav
2012-01-01
A spherical magnetic dipole antenna with a linear, isotropic, homogeneous, passive, and lossy material core is modeled analytically, and closed-form expressions are given for the internally stored magnetic and electric energies, the radiation efficiency, and the radiation quality factor. This model and all the provided expressions are exact and valid for arbitrary core sizes, permeability, permittivity, and electric and magnetic loss tangents. Arbitrary dispersion models for both permeability and permittivity can be applied. In addition, we present an investigation for an antenna of fixed electrical…
Expansion and compression shock wave calculation in pipes with the C.V.M. numerical method
International Nuclear Information System (INIS)
Raymond, P.; Caumette, P.; Le Coq, G.; Libmann, M.
1983-03-01
The Control Variables Method for fluid transient computations has been used to compute the propagation of expansion and compression shock waves. In this paper, analytical solutions for shock wave and rarefaction wave propagation are first detailed. Then, after a brief description of the C.V.M. technique and its stability and monotonicity properties, we present results for the standard shock tube problem and the reflection of a shock wave; finally, a comparison between experimental results obtained on the ELF facility and calculations is given.
Directory of Open Access Journals (Sweden)
R. Krishnamoorthy
2012-05-01
In this paper, a new lossy-to-lossless image coding scheme combining an Orthogonal Polynomials Transform and an Integer Wavelet Transform is proposed. The Lifting Scheme based Integer Wavelet Transform (LS-IWT) is first applied to the image in order to reduce blocking artifacts and memory demand. The Embedded Zerotree Wavelet (EZW) subband coding algorithm is used in this work for progressive image coding, which achieves efficient bit rate reduction. The computational complexity of the lower subband coding of the EZW algorithm is reduced in this work with a new integer-based Orthogonal Polynomials transform coding. Normalization and mapping are performed on the subbands of the image to exploit subjective redundancy, and the zerotree structure is obtained for EZW coding, so the computational complexity is greatly reduced. The experimental results of the proposed technique also show that efficient bit rate reduction is achieved for both lossy and lossless compression when compared with existing techniques.
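The lifting scheme that makes an integer wavelet transform perfectly invertible (and hence usable for the lossless mode) can be shown in a few lines with the Haar case; the paper's LS-IWT uses different lifting filters, but the invertibility argument is identical.

```python
import numpy as np

def haar_lifting_fwd(x):
    """One level of the integer (lossless) Haar transform via lifting:
    detail d = odd - even, approximation s = even + floor(d/2)."""
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = odd - even
    s = even + (d >> 1)   # arithmetic shift = floor division, stays integer
    return s, d

def haar_lifting_inv(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    even = s - (d >> 1)
    odd = d + even
    out = np.empty(s.size + d.size, dtype=s.dtype)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([12, 14, 7, 9, 255, 0, 3, 3])
s, d = haar_lifting_fwd(x)
assert np.array_equal(haar_lifting_inv(s, d), x)   # perfectly invertible
```

Each lifting step only adds a rounded function of the other polyphase channel, so it can always be subtracted back off, no matter what rounding is used.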
Optimization of wavelet decomposition for image compression and feature preservation.
Lo, Shih-Chung B; Li, Huai; Freedman, Matthew T
2003-09-01
A neural-network-based framework has been developed to search for an optimal wavelet kernel that can be used for a specific image processing task. In this paper, a linear convolution neural network was employed to seek a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern such as microcalcifications in mammograms and bone in computed tomography (CT) head images. We have used this method to evaluate the performance of tap-4 wavelets on mammograms, CTs, magnetic resonance images, and Lena images. We found that the Daubechies wavelet or those wavelets with similar filtering characteristics can produce the highest compression efficiency with the smallest mean-square-error for many image patterns including general image textures as well as microcalcifications in digital mammograms. However, the Haar wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet whose low-pass filter coefficients are (0.32252136, 0.85258927, 1.38458542, and -0.14548269) produces the best preservation outcomes in all tested microcalcification features including the peak signal-to-noise ratio, the contrast and the figure of merit in the wavelet lossy compression scheme. Having analyzed the spectrum of the wavelet filters, we can find the compression outcomes and feature preservation characteristics as a function of wavelets. This newly developed optimization approach can be generalized to other image analysis applications where a wavelet decomposition is employed.
Parallel spectral methods and applications to simulations of compressible mixing layers
Male, Jean-Michel; Fezoui, Loula
1993-01-01
Solving the Navier-Stokes equations with spectral methods for compressible flows can be quite demanding in computation time. We therefore study the parallelization of such an algorithm and its implementation on a massively parallel machine, the Connection Machine CM-2. The spectral method adapts well to the requirements of massive parallelism, but one of the basic tools of this method, the fast Fourier transform (when it must be applied over the two dime...
Energy Technology Data Exchange (ETDEWEB)
Loose, R. [Klinikum Nuernberg-Nord (Germany). Inst. fuer Diagnostische und Interventionelle Radiologie; Braunschweig, R. [BG Kliniken Bergmannstrost, Halle/Saale (Germany). Klinik fuer Bildgebende Diagnostik und Interventionsradiologie; Kotter, E. [Universitaetsklinikum Freiburg (Germany). Abt. Roentgendiagnostik; Mildenberger, P. [Mainz Univ. (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie; Simmler, R.; Wucherer, M. [Klinikum Nuernberg (Germany). Inst. fuer Medizinische Physik
2009-01-15
Purpose: Recommendations for lossy compression of digital radiological DICOM images in Germany by means of a consensus conference. The compression of digital radiological images has been evaluated in many studies. Even though the results demonstrate full diagnostic image quality for modality-dependent compression between 1:5 and 1:200, there are only a few clinical applications. Materials and Methods: A consensus conference with approx. 80 interested participants (radiology, industry, physics, and agencies), without individual invitation, was organized by the working groups AGIT and APT of the German Roentgen Society DRG to determine compression factors without loss of diagnostic image quality for different anatomical regions for CT, CR/DR, MR and RF/XA examinations. The consensus threshold was specified as at least 66%. Results: For the individual modalities the following compression factors were recommended: CT (brain) 1:5, CT (all other applications) 1:8, CR/DR (all applications except mammography) 1:10, CR/DR (mammography) 1:15, MR (all applications) 1:7, RF/XA (fluoroscopy, DSA, cardiac angiography) 1:6. The recommended compression ratios are valid for JPEG and JPEG 2000/wavelet compression. Conclusion: The results may be understood as recommendations and indicate limits of compression factors with no expected reduction of diagnostic image quality. They are similar to the current national recommendations for Canada and England. (orig.)
Solving for the capacity of a noisy lossy bosonic channel via the master equation
International Nuclear Information System (INIS)
Qin Tao; Zhao Meisheng; Zhang Yongde
2006-01-01
We discuss the noisy lossy bosonic channel by exploiting master equations. The capacity of the noisy lossy bosonic channel and the criterion for the optimal capacities are derived. Consequently, we verify that master equations can be a tool to study bosonic channels.
Directory of Open Access Journals (Sweden)
Cristina Costa
2004-09-01
The paper presents an analysis of the effects of lossy compression algorithms applied to images affected by geometrical distortion. It is shown that the encoding-decoding process results in a nonhomogeneous image degradation in the geometrically corrected image, due to the different amount of information associated with each pixel. A distortion measure named the quadtree distortion map (QDM), able to quantify this aspect, is proposed. Furthermore, the QDM is exploited to achieve adaptive compression of geometrically distorted pictures, in order to ensure uniform quality in the final image. Tests are performed using the JPEG and JPEG 2000 coding standards in order to quantitatively and qualitatively assess the performance of the proposed method.
Compressed sensing of ECG signal for wireless system with new fast iterative method.
Tawfic, Israa; Kayhan, Sema
2015-12-01
Recent experiments in wireless body area networks (WBAN) show that compressive sensing (CS) is a promising tool for compressing the electrocardiogram (ECG) signal. The performance of CS depends on the algorithms used to reconstruct the original signal exactly or approximately. In this paper, we present two methods that work in the absence and presence of noise: Least Support Orthogonal Matching Pursuit (LS-OMP) and Least Support Denoising-Orthogonal Matching Pursuit (LSD-OMP). The algorithms achieve correct support recovery without requiring knowledge of the sparsity level. We derive improved restricted isometry property (RIP) based conditions over the best known results. The basic procedures are carried out through observational and analytical study of different ECG signals downloaded from the PhysioBank ATM. Experimental results show that significant gains in reconstruction quality and compression rate can be obtained with these two new proposed algorithms, which can help the specialist gather the necessary information from the patient in less time, whether in Magnetic Resonance Imaging (MRI) applications or when reconstructing patient data after transmission through the network. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
A compressed sensing based method with support refinement for impulse noise cancelation in DSL
Quadeer, Ahmed Abdul
2013-06-01
This paper presents a compressed sensing based method to suppress impulse noise in digital subscriber line (DSL). The proposed algorithm exploits the sparse nature of the impulse noise and utilizes the carriers, already available in all practical DSL systems, for its estimation and cancelation. Specifically, compressed sensing is used for a coarse estimate of the impulse position, an a priori information based maximum a posteriori probability (MAP) metric for its refinement, followed by least squares (LS) or minimum mean square error (MMSE) estimation of the impulse amplitudes. Simulation results show that the proposed scheme achieves higher rates compared to other known sparse estimation algorithms in the literature. The paper also demonstrates the superior performance of the proposed scheme compared to the ITU-T G992.3 standard, which utilizes RS-coding for impulse noise refinement in DSL signals. © 2013 IEEE.
International Nuclear Information System (INIS)
Johnsen, Eric; Larsson, Johan; Bhagatwala, Ankit V.; Cabot, William H.; Moin, Parviz; Olson, Britton J.; Rawat, Pradeep S.; Shankar, Santhosh K.; Sjoegreen, Bjoern; Yee, H.C.; Zhong Xiaolin; Lele, Sanjiva K.
2010-01-01
Flows in which shock waves and turbulence are present and interact dynamically occur in a wide range of applications, including inertial confinement fusion, supernovae explosion, and scramjet propulsion. Accurate simulations of such problems are challenging because of the contradictory requirements of numerical methods used to simulate turbulence, which must minimize any numerical dissipation that would otherwise overwhelm the small scales, and shock-capturing schemes, which introduce numerical dissipation to stabilize the solution. The objective of the present work is to evaluate the performance of several numerical methods capable of simultaneously handling turbulence and shock waves. A comprehensive range of high-resolution methods (WENO, hybrid WENO/central difference, artificial diffusivity, adaptive characteristic-based filter, and shock fitting) and suite of test cases (Taylor-Green vortex, Shu-Osher problem, shock-vorticity/entropy wave interaction, Noh problem, compressible isotropic turbulence) relevant to problems with shocks and turbulence are considered. The results indicate that the WENO methods provide sharp shock profiles, but overwhelm the physical dissipation. The hybrid method is minimally dissipative and leads to sharp shocks and well-resolved broadband turbulence, but relies on an appropriate shock sensor. Artificial diffusivity methods in which the artificial bulk viscosity is based on the magnitude of the strain-rate tensor resolve vortical structures well but damp dilatational modes in compressible turbulence; dilatation-based artificial bulk viscosity methods significantly improve this behavior. For well-defined shocks, the shock fitting approach yields good results.
Performance evaluation of emerging JPEGXR compression standard for medical images
International Nuclear Information System (INIS)
Basit, M.A.
2012-01-01
Medical images require lossless compression, as a small error due to lossy compression may be considered a diagnostic error. JPEG XR is the latest image compression standard, designed for a variety of applications, with support for both lossy and lossless modes. This paper provides an in-depth performance evaluation of the latest JPEG XR against existing image coding standards for medical images using lossless compression. Various medical images are used for evaluation, with ten images of each organ tested. The performance of JPEG XR is compared with JPEG 2000 and JPEG-LS using mean square error, peak signal-to-noise ratio, mean absolute error and the structural similarity index. JPEG XR shows improvements of 20.73 dB and 5.98 dB over JPEG-LS and JPEG 2000, respectively, for the various test images used in the experiments. (author)
Three dimensional range geometry and texture data compression with space-filling curves.
Chen, Xia; Zhang, Song
2017-10-16
This paper presents a novel method to effectively store three-dimensional (3D) data and 2D texture data into a regular 24-bit image. The proposed method uses the Hilbert space-filling curve to map the normalized unwrapped phase map to two 8-bit color channels, and saves the third color channel for 2D texture storage. By further leveraging existing 2D image and video compression techniques, the proposed method can achieve high compression ratios while effectively preserving data quality. Since the encoding and decoding processes can be applied to most of the current 2D media platforms, this proposed compression method can make 3D data storage and transmission available for many electrical devices without requiring special hardware changes. Experiments demonstrate that if a lossless 2D image/video format is used, both original 3D geometry and 2D color texture can be accurately recovered; if lossy image/video compression is used, only black-and-white or grayscale texture can be properly recovered, but much higher compression ratios (e.g., 1543:1 against the ASCII OBJ format) are achieved with slight loss of 3D geometry quality.
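The locality-preserving traversal behind the method above is the Hilbert space-filling curve. The snippet below sketches the classic bit-manipulation construction of the 2D curve's index-to-coordinate mapping (grid order must be a power of two); the paper's mapping of phase values onto color channels is not reproduced here.

```python
# Sketch: classic Hilbert-curve index mapping for an n x n grid (n a power
# of two). Consecutive curve indices land on neighbouring pixels, which is
# what makes the traversal useful for packing 2D data into a 1D stream.
def _rot(s, x, y, rx, ry):
    """Rotate/flip a quadrant so the curve orientation stays consistent."""
    if ry == 0:
        if rx == 1:
            x, y = s - 1 - x, s - 1 - y
        x, y = y, x
    return x, y

def hilbert_d2xy(n, d):
    """Convert distance d along the Hilbert curve of an n x n grid to (x, y)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        x, y = _rot(s, x, y, rx, ry)
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

Walking d from 0 to n*n - 1 visits every pixel exactly once, with each step moving to a 4-neighbour; that locality is why image codecs compress a Hilbert-ordered scan well.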
International Nuclear Information System (INIS)
Xu, Yun-Chao; Chen, Qun
2013-01-01
Vapor-compression refrigeration systems are among the essential energy conversion systems for humankind and consume huge amounts of energy nowadays. Many effective methods exist for promoting the energy efficiency of these systems, but they rely mainly on engineering experience and computer simulations rather than theoretical analysis, owing to the complex and vague physical essence of the processes involved. We attempt to propose a theoretical global optimization method based on in-depth physical analysis of those processes: heat transfer analysis for the condenser and evaporator, introducing the entransy theory, and thermodynamic analysis for the compressor and expansion valve. The integration of heat transfer and thermodynamic analyses forms the overall physical optimization model for the system, describing the relation between all the unknown parameters and the known conditions, which makes theoretical global optimization possible. With the aid of mathematical conditional-extremum solutions, an optimization equation group and the optimal configuration of all the unknown parameters are obtained analytically. Finally, via the optimization of a typical vapor-compression refrigeration system under various working conditions to minimize the total heat transfer area of the heat exchangers, the validity and superiority of the newly proposed optimization method are proved. - Highlights: • A global optimization method for vapor-compression systems is proposed. • Integrating heat transfer and thermodynamic analyses forms the optimization model. • A mathematical relation between design parameters and requirements is derived. • Entransy dissipation is introduced into heat transfer analysis. • The validity of the method is proved via optimization of practical cases
AN ENCODING METHOD FOR COMPRESSING GEOGRAPHICAL COORDINATES IN 3D SPACE
Directory of Open Access Journals (Sweden)
C. Qian
2017-09-01
This paper proposes an encoding method for compressing geographical coordinates in 3D space. By reducing the length of geographical coordinates, it helps lessen the storage size of geometry information. In addition, the encoding algorithm subdivides the whole space according to octree rules, which enables progressive transmission and loading. Three main steps are included in this method: (1) subdividing the whole 3D geographic space based on an octree structure, (2) resampling all the vertices in the 3D models, (3) encoding the coordinates of the vertices with a combination of a Cube Index Code (CIC) and a Geometry Code. A series of geographical 3D models were used to evaluate the encoding method. The results showed that this method reduced the storage size of most test data by 90% or more at a considerable encoding and decoding speed. In conclusion, this method achieves a remarkable compression rate in vertex bit size with a steerable precision loss, and is of practical value for web 3D map storage and transmission.
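One common way to index cells of the octree subdivision described in step (1) is bit interleaving (a Morton code), sketched below. This is a stand-in illustration: the paper's actual Cube Index Code / Geometry Code split is not reproduced, and the quantization depth is an assumption.

```python
# Sketch: index octree cells by interleaving the bits of the quantised
# (ix, iy, iz) coordinates - one standard realisation of an octree cell code.
def morton_encode(ix, iy, iz, depth):
    """Interleave the bits of quantised (ix, iy, iz) into one cell index."""
    code = 0
    for level in range(depth):
        code |= ((ix >> level) & 1) << (3 * level)
        code |= ((iy >> level) & 1) << (3 * level + 1)
        code |= ((iz >> level) & 1) << (3 * level + 2)
    return code

def morton_decode(code, depth):
    """Recover the quantised coordinates from an interleaved cell index."""
    ix = iy = iz = 0
    for level in range(depth):
        ix |= ((code >> (3 * level)) & 1) << level
        iy |= ((code >> (3 * level + 1)) & 1) << level
        iz |= ((code >> (3 * level + 2)) & 1) << level
    return ix, iy, iz
```

Because each group of three bits names one child of an octree node, truncating the code from the top yields coarser cells, which is what enables progressive transmission.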
An improved ghost-cell immersed boundary method for compressible flow simulations
Chi, Cheng
2016-05-20
This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to the commonly used approaches, in the present work ghost cells are mirrored through the boundary, described using a level-set method, to farther image points, incorporating a higher-order extrapolation/interpolation scheme for the ghost-cell values. A sensor is introduced to deal with image points near discontinuities in the flow field. Adaptive mesh refinement (AMR) is used to improve the representation of the geometry efficiently in the Cartesian grid system. The improved ghost-cell method is validated against four test cases: (a) double Mach reflections on a ramp, (b) smooth Prandtl-Meyer expansion flows, (c) supersonic flows in a wind tunnel with a forward-facing step, and (d) supersonic flows over a circular cylinder. It is demonstrated that the improved ghost-cell method can reach second-order accuracy in the L1 norm and higher than first-order accuracy in the L∞ norm. Direct comparisons against the cut-cell method demonstrate that the improved ghost-cell method is almost equally accurate, with better efficiency for boundary representation in high-fidelity compressible flow simulations. Copyright © 2016 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Caruso, A.; Mechitoua, N.; Duplex, J.
1995-01-01
The R and D thermal hydraulic codes, notably the finite difference codes Melodie (2D) and ESTET (3D) and the 2D and 3D versions of the finite element code N3S, were initially developed for incompressible, possibly dilatable, turbulent flows, i.e. those where density is not pressure-dependent. Subsequent minor modifications to these finite difference code algorithms enabled extension of their scope to subsonic compressible flows. The first applications in both single-phase and two-phase flow contexts have now been completed. This paper presents the techniques used to adapt these algorithms for the processing of compressible flows in an N3S-type finite element code, whereby complex geometries normally difficult to model in finite difference meshes can be successfully dealt with. The development of version 3.0 of the N3S code led to dilatable flow calculations at lower cost. On this basis, a 2D prototype version of N3S was programmed, tested and validated, drawing maximum benefit from Cray vectorization possibilities and from physical, numerical and data-processing experience with other fluid dynamics codes, such as Melodie, ESTET or TELEMAC. The algorithms are the same as those used in finite difference codes, but their formulation is variational. The first part of the paper deals with the fundamental equations involved, expressed in basic form, together with the associated numerical method. The modifications to the k-epsilon turbulence model extended to compressible flows are also described. The second part presents the algorithm used, indicating the additional terms required by the extension. The third part presents the equations in integral form and the associated matrix systems, and indicates the solutions adopted for calculation of the compressibility-related terms. Finally, a few representative applications and test cases are discussed, including subsonic as well as transonic and supersonic cases, showing the shock responses of the numerical method.
Yang, Xiaoquan; Cheng, Jian; Liu, Tiegang; Luo, Hong
2015-11-01
The direct discontinuous Galerkin (DDG) method based on a traditional discontinuous Galerkin (DG) formulation is extended and implemented for solving the compressible Navier-Stokes equations on arbitrary grids. Compared to the widely used second Bassi-Rebay (BR2) scheme for the discretization of diffusive fluxes, the DDG method has two attractive features: first, it is simple to implement as it is directly based on the weak form, and therefore there is no need for any local or global lifting operator; second, it can deliver results comparable to, if not better than, the BR2 scheme in a more efficient way, with much less CPU time. Two approaches to forming the DDG flux for the Navier-Stokes equations are presented in this work: one based on conservative variables, the other on primitive variables. In the implementation of the DDG method for arbitrary grids, the definition of mesh size plays a critical role, as the formation of the viscous flux explicitly depends on the geometry. A variety of test cases are presented to demonstrate the accuracy and efficiency of the DDG method for discretizing the viscous fluxes in the compressible Navier-Stokes equations on arbitrary grids.
Institute of Scientific and Technical Information of China (English)
Song Hui; Wang Zhongmin
2017-01-01
The diversity of phone placements in different mobile users' daily life increases the difficulty of recognizing human activities from mobile phone accelerometer data. To solve this problem, a compressed sensing method for recognizing human activities, based on compressed sensing theory and utilizing both raw mobile phone accelerometer data and phone placement information, is proposed. First, an over-complete dictionary matrix is constructed from sufficient raw tri-axis acceleration data labeled with phone placement information. Then, the sparse coefficient is evaluated for the samples to be tested by solving an L1 minimization. Finally, residual values are calculated and the minimum is selected as the indicator to obtain the recognition result. Experimental results show that this method achieves a recognition accuracy of 89.86%, which is higher than that of a recognition method that does not use phone placement information. The recognition performance of the proposed method is thus effective and satisfactory.
A comparative analysis of the cryo-compression and cryo-adsorption hydrogen storage methods
Energy Technology Data Exchange (ETDEWEB)
Petitpas, G [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Benard, P [Universite du Quebec a Trois-Rivieres (Canada); Klebanoff, L E [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Xiao, J [Universite du Quebec a Trois-Rivieres (Canada); Aceves, S M [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2014-07-01
While conventional low-pressure LH₂ dewars have existed for decades, advanced methods of cryogenic hydrogen storage have recently been developed. These advanced methods are cryo-compression and cryo-adsorption hydrogen storage, which operate best in the temperature range 30–100 K. We present a comparative analysis of both approaches for cryogenic hydrogen storage, examining how pressure and/or sorbent materials are used to effectively increase onboard H₂ density and dormancy. We start by reviewing some basic aspects of LH₂ properties and conventional means of storing it. From there we describe the cryo-compression and cryo-adsorption hydrogen storage methods, and then explore the relationship between them, clarifying the materials science and physics of the two approaches in trying to solve the same hydrogen storage task (~5–8 kg H₂, typical of light duty vehicles). Assuming that the balance of plant and the available volume for the storage system in the vehicle are identical for both approaches, the comparison focuses on how the respective storage capacities, vessel weight and dormancy vary as a function of temperature, pressure and type of cryo-adsorption material (especially, powder MOF-5 and MIL-101). By performing a comparative analysis, we clarify the science of each approach individually, identify the regimes where the attributes of each can be maximized, elucidate the properties of these systems during refueling, and probe the possible benefits of a combined “hybrid” system with both cryo-adsorption and cryo-compression phenomena operating at the same time. In addition the relationships found between onboard H₂ capacity, pressure vessel and/or sorbent mass and dormancy as a function of rated pressure, type of sorbent material and fueling conditions are useful as general designing guidelines in future engineering efforts using these two hydrogen storage approaches.
MULTISTAGE BITRATE REDUCTION IN ABSOLUTE MOMENT BLOCK TRUNCATION CODING FOR IMAGE COMPRESSION
Directory of Open Access Journals (Sweden)
S. Vimala
2012-05-01
Absolute Moment Block Truncation Coding (AMBTC) is one of the lossy image compression techniques. The computational complexity involved is low and the quality of the reconstructed images is appreciable. The normal AMBTC method requires 2 bits per pixel (bpp). In this paper, two novel ideas have been incorporated into the AMBTC method to improve the coding efficiency. Generally, quality degrades with a reduction in bit rate, but in the proposed method the quality of the reconstructed image increases as the bit rate decreases. The proposed method has been tested with standard images like Lena, Barbara, Bridge, Boats and Cameraman. The results obtained are better than those of the existing AMBTC method in terms of bit rate and the quality of the reconstructed images.
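The baseline that the multistage bit-rate reduction improves on can be sketched directly: each block is replaced by a bit plane plus two means, which is where the 2 bpp figure for 4x4 blocks comes from. The code below is the standard AMBTC step, not the paper's improved variants.

```python
# Sketch of baseline AMBTC on one block: keep a bitmap (pixel above/below the
# block mean) plus the mean of each side. For 4x4 blocks of 8-bit pixels this
# costs 16 bitmap bits + two 8-bit means = 32 bits, i.e. 2 bpp.
def ambtc_block(block):
    """Encode a flat list of pixel values; return (bitmap, low_mean, high_mean)."""
    mean = sum(block) / len(block)
    bitmap = [1 if p >= mean else 0 for p in block]
    highs = [p for p, b in zip(block, bitmap) if b]
    lows = [p for p, b in zip(block, bitmap) if not b]
    high_mean = sum(highs) / len(highs) if highs else mean
    low_mean = sum(lows) / len(lows) if lows else mean
    return bitmap, low_mean, high_mean

def ambtc_decode(bitmap, low_mean, high_mean):
    """Reconstruct the block from its bitmap and the two quantisation levels."""
    return [high_mean if b else low_mean for b in bitmap]
```

Note that a pure two-level block survives the round trip exactly; the distortion all comes from blocks with more than two gray levels.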
Fujita, Megumi; Himi, Satoshi; Iwata, Motokazu
2010-03-01
SX-3228, 6-benzyl-3-(5-methoxy-1,3,4-oxadiazol-2-yl)-5,6,7,8-tetrahydro-1,6-naphthyridin-2(1H)-one, is a newly-synthesized benzodiazepine receptor agonist intended to be developed as a tablet preparation. This compound, however, becomes chemically unstable due to decreased crystallinity when it undergoes mechanical treatments such as grinding and compression. A wet-granule tableting method, where wet granules are compressed before being dried, was therefore investigated as it has the advantage of producing tablets of sufficient hardness at quite low compression pressures. The results of the stability testing showed that the drug substance was chemically considerably more stable in wet-granule compression tablets compared to conventional tablets. Furthermore, the drug substance was found to be relatively chemically stable in wet-granule compression tablets even when high compression pressure was used and the effect of this pressure was small. After investigating the reason for this excellent stability, it became evident that near-isotropic pressure was exerted on the crystals of the drug substance because almost all the empty spaces in the tablets were occupied with water during the wet-granule compression process. Decreases in crystallinity of the drug substance were thus small, making the drug substance chemically stable in the wet-granule compression tablets. We believe that this novel approach could be useful for many other compounds that are destabilized by mechanical treatments.
Del Pino, S.; Labourasse, E.; Morel, G.
2018-06-01
We present a multidimensional asymptotic preserving scheme for the approximation of a mixture of compressible flows. Fluids are modelled by two Euler systems of equations coupled with a friction term. The asymptotic preserving property is mandatory for this kind of model, to derive a scheme that behaves well in all regimes (i.e. whatever the friction parameter value is). The method we propose is defined in ALE coordinates, using a Lagrange plus remap approach. This imposes a multidimensional definition and analysis of the scheme.
Sizing of Compression Coil Springs Gas Regulators Using Modern Methods CAD and CAE
Directory of Open Access Journals (Sweden)
Adelin Ionel Tuţă
2010-10-01
This paper presents a method for sizing the compression coil springs in gas regulators using CAD (Computer Aided Design) and CAE (Computer Aided Engineering) techniques. The sizing optimizes the functioning of the regulators under dynamic industrial and household operating conditions. A gas regulator is a device that automatically and continuously adjusts to maintain the output gas pressure within pre-set limits at varying flow and input pressure. The performance of pressure regulators, as automatic systems, depends on their behaviour under dynamic operation. Optimizing the time constant of the pneumatic actuators that drive gas regulators leads to better functioning under dynamic conditions.
Directory of Open Access Journals (Sweden)
Ben-Ami Lipetz
1969-12-01
F. H. Ruecking's word-compression algorithm for retrieval of bibliographic data from computer stores was tested for its performance in matching user-supplied, unedited bibliographic data to the bibliographic data contained in a library catalog. The algorithm was tested by manual simulation, using data derived from 126 case studies of successful manual searches of the card catalog at Sterling Memorial Library, Yale University. The algorithm achieved 70% recall in comparison to conventional searching. Its acceptability as a substitute for conventional catalog searching methods is questioned unless recall performance can be improved, either by use of the algorithm alone or in combination with other algorithms.
Resolution limits of migration and linearized waveform inversion images in a lossy medium
Schuster, Gerard T.; Dutta, Gaurav; Li, Jing
2017-01-01
The horizontal- and vertical-resolution limits Δx_lossy and Δz_lossy of post-stack migration and linearized waveform inversion images are derived for lossy data in the far-field approximation. Unlike the horizontal resolution limit Δx ∝ λz/L in a lossless medium, which worsens linearly in depth z, Δx_lossy ∝ z²/(QL) worsens quadratically with depth for a medium with small Q values. Here, Q is the quality factor, λ is the effective wavelength, L is the recording aperture, and loss is accounted for in the resolution formulae by replacing λ with z/Q. In contrast, the lossy vertical-resolution limit Δz_lossy only worsens linearly in depth, compared to Δz ∝ λ for a lossless medium. For both the causal and acausal Q models, the resolution limits are linearly proportional to 1/Q for small Q. These theoretical predictions are validated with migration images computed from lossy data.
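A small numeric sketch makes the depth scaling concrete: in a lossless medium the horizontal limit grows like lambda*z/L, while in a lossy medium lambda is replaced by z/Q, giving z**2/(Q*L). Constants of proportionality are dropped and the numbers below are illustrative assumptions, not values from the paper.

```python
# Sketch: compare how the horizontal resolution limit scales with depth z
# in lossless vs lossy media (proportionalities only, constants dropped).
def dx_lossless(lam, z, L):
    """Horizontal resolution limit in a lossless medium: lambda * z / L."""
    return lam * z / L

def dx_lossy(z, Q, L):
    """Lossy limit: effective wavelength becomes z/Q, so z**2 / (Q * L)."""
    return z ** 2 / (Q * L)

# doubling depth doubles the lossless limit but quadruples the lossy one
ratio_lossless = dx_lossless(30.0, 2000.0, 1000.0) / dx_lossless(30.0, 1000.0, 1000.0)
ratio_lossy = dx_lossy(2000.0, 20.0, 1000.0) / dx_lossy(1000.0, 20.0, 1000.0)
```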
International Nuclear Information System (INIS)
Basov, N.G.; Kologrivov, A.A.; Krokhin, O.N.; Rupasov, A.A.; Shikanov, A.S.
1979-01-01
Three methods are described for high-speed diagnostics of the compression dynamics of shell targets spherically heated by laser on the ''Kal'mar'' installation. The first method is based on direct investigation of the space-time evolution of the critical-density region for Nd-laser emission (N_e ≈ 10^21 cm^-3) by means of streak photography of the plasma image in second-harmonic light. The second method involves investigation of the time evolution of the second-harmonic spectral distribution by means of a spectrograph coupled with a streak camera. The third method is based on irradiating the shell targets with a special laser pulse having two time-separated intensity maxima and analysing the resulting X-ray pinhole pictures. (author)
Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery
Directory of Open Access Journals (Sweden)
Lingjun Liu
2017-01-01
This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. With increasing iterations, IST usually yields a smoothing of the solution and runs into prematurity. To add back more details, the BAIST method backtracks to the previous noisy image using L2 norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous ones. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. Also, BAIST employs a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.
Application specific compression : final report.
Energy Technology Data Exchange (ETDEWEB)
Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.
2008-12-01
With the continuing development of more capable data gathering sensors, comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high frequency, low amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
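The transform-threshold-reconstruct idea described above can be sketched in a few lines. A single-level orthonormal Haar transform stands in for the report's (unspecified) wavelet, and the threshold value is an illustrative assumption; the point is that zeroing small detail coefficients removes noise-like content while the round trip without thresholding is exact.

```python
# Sketch of wavelet-thresholding compression: split the signal into
# approximation and detail coefficients, zero the small details, reconstruct.
import math

def haar_forward(x):
    """Split x (even length) into approximation and detail coefficients."""
    s = math.sqrt(2.0)
    approx = [(a + b) / s for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / s for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Exactly invert haar_forward."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

def compress(x, threshold):
    """Zero detail coefficients below threshold, then reconstruct."""
    approx, detail = haar_forward(x)
    detail = [d if abs(d) >= threshold else 0.0 for d in detail]
    return haar_inverse(approx, detail)
```

The surviving coefficients are mostly zeros, which is exactly the low-entropy structure that makes the subsequent lossless entropy-coding stage effective.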
Method for data compression by associating complex numbers with files of data values
Feo, John Thomas; Hanks, David Carlton; Kraay, Thomas Arthur
1998-02-10
A method for compressing data for storage or transmission. Given a complex polynomial and a value assigned to each root, a root generated data file (RGDF) is created, one entry at a time. Each entry is mapped to a point in a complex plane. An iterative root finding technique is used to map the coordinates of the point to the coordinates of one of the roots of the polynomial. The value associated with that root is assigned to the entry. An equational data compression (EDC) method reverses this procedure. Given a target data file, the EDC method uses a search algorithm to calculate a set of m complex numbers and a value map that will generate the target data file. The error between a simple target data file and generated data file is typically less than 10%. Data files can be transmitted or stored without loss by transmitting the m complex numbers, their associated values, and an error file whose size is at most one-tenth of the size of the input data file.
A microcontroller-based interface circuit for lossy capacitive sensors
International Nuclear Information System (INIS)
Reverter, Ferran; Casas, Òscar
2010-01-01
This paper introduces and analyses a low-cost microcontroller-based interface circuit for lossy capacitive sensors, i.e. sensors whose parasitic conductance (G x ) is not negligible. The circuit builds on a previous circuit also proposed by the authors, in which the sensor is directly connected to a microcontroller without either a signal conditioner or an analogue-to-digital converter in the signal path. The novel circuit uses the same hardware, but it performs an additional measurement and executes a new calibration technique. As a result, the sensitivity of the circuit to G x decreases significantly (by a factor of more than ten), though not completely, owing to the input capacitances of the port pins of the microcontroller. Experimental results show a relative error in the capacitance measurement below 1% over the tested range of G x , which shows the effectiveness of the circuit
Directory of Open Access Journals (Sweden)
F. Sh. Aliev
2015-01-01
Research objective: to experimentally demonstrate the feasibility of forming compression colonic anastomoses using nickel-titanium devices, in comparison with traditional methods of anastomosis. Materials and methods: in experimental studies, the quality of compression anastomoses of the colon was compared with sutured and stapled anastomoses. Three experimental groups of mongrel dogs were formed: in the 1st series (n = 30), compression anastomoses were formed with nickel-titanium implants; in the 2nd (n = 25), circular stapled anastomoses; in the 3rd (n = 25), ligature anastomoses by the Mateshuk–Lambert technique. The physical durability, elasticity, biological tightness and morphogenesis of the colonic anastomoses were studied in the experiment. Results: the optimal sizes of the compression devices are 32 × 18 and 28 × 15 mm with a wire diameter of 2.2 mm; the compression force was 740 ± 180 g/mm2. The compression suture has higher physical durability than stapled (W = –33.0; p < 0.05) and sutured (W = –28.0; p < 0.05) anastomoses, higher elasticity (p < 0.05) at all test intervals, and biological tightness from day 3 after surgery (p < 0.001). Four periods of regeneration of the intestinal suture were distinguished in the morphogenesis of the colonic anastomoses. Conclusion: the experimental data obtained on compression anastomosis of the colon with nickel-titanium devices are convincing arguments for their clinical application.
A Schur complement method for compressible two-phase flow models
International Nuclear Information System (INIS)
Dao, Thu-Huyen; Ndjinga, Michael; Magoules, Frederic
2014-01-01
In this paper, we will report our recent efforts to apply a Schur complement method for nonlinear hyperbolic problems. We use the finite volume method and an implicit version of the Roe approximate Riemann solver. With the interface variable introduced in [4] in the context of single phase flows, we are able to simulate two-fluid models ([12]) with various schemes such as upwind, centered or Rusanov. Moreover, we introduce a scaling strategy to improve the condition number of both the interface system and the local systems. Numerical results for the isentropic two-fluid model and the compressible Navier-Stokes equations in various 2D and 3D configurations and various schemes show that our method is robust and efficient. The scaling strategy considerably reduces the number of GMRES iterations in both interface system and local system resolutions. Comparisons of performances with classical distributed computing with up to 218 processors are also reported. (authors)
International Nuclear Information System (INIS)
Grohs, J.G.; Krepler, P.
2004-01-01
Minimally invasive stabilization represents a new alternative for the treatment of osteoporotic compression fractures. Vertebroplasty and balloon kyphoplasty are two methods of enhancing the strength of osteoporotic vertebral bodies by means of cement application. Vertebroplasty is the older and technically easier method. Balloon kyphoplasty is the newer and more expensive method, which not only improves pain but also restores the sagittal profile of the spine. By balloon kyphoplasty the height of 101 fractured vertebral bodies could be increased by up to 90% and the wedge angle decreased from 12 to 7 degrees. Pain was reduced from 7.2 to 2.5 points. The Oswestry disability index decreased from 60 to 26 points. These effects persisted over a period of two years. Cement leakage occurred in only 2% of vertebral bodies. Fractures of adjacent vertebral bodies were found in 11%. Good preinterventional diagnostics and intraoperative imaging are necessary to make balloon kyphoplasty a successful application. (orig.) [de
Linearly decoupled energy-stable numerical methods for multi-component two-phase compressible flow
Kou, Jisheng
2017-12-06
In this paper, for the first time we propose two linear, decoupled, energy-stable numerical schemes for multi-component two-phase compressible flow with a realistic equation of state (e.g. Peng-Robinson equation of state). The methods are constructed based on the scalar auxiliary variable (SAV) approaches for Helmholtz free energy and the intermediate velocities that are designed to decouple the tight relationship between velocity and molar densities. The intermediate velocities are also involved in the discrete momentum equation to ensure a consistency relationship with the mass balance equations. Moreover, we propose a component-wise SAV approach for a multi-component fluid, which requires solving a sequence of linear, separate mass balance equations. We prove that the methods have the unconditional energy-dissipation feature. Numerical results are presented to verify the effectiveness of the proposed methods.
Directory of Open Access Journals (Sweden)
Huai-Shuai Shang
2012-01-01
An experimental study of C20, C25, C30, C40, and C50 big mobility concrete cubes from both the laboratory and a construction site was completed. Nondestructive testing (NDT) was carried out using impact rebound hammer (IRH) techniques to establish a correlation between the compressive strength and the rebound number. A local strength-measurement curve was established by the regression method and its superiority demonstrated. The rebound method presented is simple, quick, and reliable, and covers a wide range of concrete strengths. It can be easily applied to concrete specimens as well as to existing concrete structures. The final results were compared with previous ones from the literature and also with actual results obtained from samples extracted from existing structures.
A Space-Frequency Data Compression Method for Spatially Dense Laser Doppler Vibrometer Measurements
Directory of Open Access Journals (Sweden)
José Roberto de França Arruda
1996-01-01
When spatially dense mobility shapes are measured with scanning laser Doppler vibrometers, it is often impractical to use phase-separation modal parameter estimation methods due to the excessive number of highly coupled modes and the prohibitive computational cost of processing huge amounts of data. To deal with this problem, a data compression method using Chebyshev polynomial approximation in the frequency domain and two-dimensional discrete Fourier series approximation in the spatial domain is proposed in this article. The proposed space-frequency regressive approach was implemented and verified using a numerical simulation of a free-free-free-free suspended rectangular aluminum plate. To make the simulation more realistic, the mobility shapes were synthesized by modal superposition using mode shapes obtained experimentally with a scanning laser Doppler vibrometer. A reduced and smoothed model, which takes advantage of the sinusoidal spatial pattern and the polynomial frequency-domain pattern of the mobility shapes, is obtained. From the reduced model, smoothed curves with any desired frequency and spatial resolution can be produced whenever necessary. The procedure can be used either to generate nonmodal models or to compress the measured data prior to modal parameter extraction.
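The spatial half of this idea, that nearly sinusoidal shapes survive aggressive truncation of a discrete Fourier series, can be sketched in one dimension (the paper uses a two-dimensional series plus Chebyshev polynomials in frequency; the sizes and test signal here are illustrative):

```python
import cmath

# Keep only the few largest-magnitude DFT coefficients of a sampled
# shape and reconstruct from them. A 1-D O(n^2) DFT stands in for the
# paper's 2-D discrete Fourier series; grid size is illustrative.

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def compress_shape(x, keep):
    """Zero all but the `keep` largest-magnitude DFT coefficients."""
    X = dft(x)
    order = sorted(range(len(X)), key=lambda k: -abs(X[k]))
    kept = set(order[:keep])
    return idft([X[k] if k in kept else 0.0 for k in range(len(X))])
```

A pure sine sampled on 16 points is reproduced exactly (up to rounding) from just its two conjugate coefficients, a 8:1 reduction in stored values.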
Compressing climate model simulations: reducing storage burden while preserving information
Hammerling, Dorit; Baker, Allison; Xu, Haiying; Clyne, John; Li, Samuel
2017-04-01
Climate models, which are run at high spatial and temporal resolutions, generate massive quantities of data. As our computing capabilities continue to increase, storing all of the generated data is becoming a bottleneck, which negatively affects scientific progress. It is thus important to develop methods for representing the full datasets by smaller compressed versions, which still preserve all the critical information and, as an added benefit, allow for faster read and write operations during analysis work. Traditional lossy compression algorithms, as for example used for image files, are not necessarily ideally suited for climate data. While visual appearance is relevant, climate data has additional critical features such as the preservation of extreme values and spatial and temporal gradients. Developing alternative metrics to quantify information loss in a manner that is meaningful to climate scientists is an ongoing process still in its early stages. We will provide an overview of current efforts to develop such metrics to assess existing algorithms and to guide the development of tailored compression algorithms to address this pressing challenge.
Schlieren method diagnostics of plasma compression in front of coaxial gun
International Nuclear Information System (INIS)
Kravarik, J.; Kubes, P.; Hruska, J.; Bacilek, J.
1983-01-01
The schlieren method employing a movable knife edge placed in the focal plane of a laser beam was used for the diagnostics of plasma produced by a coaxial plasma gun. When compared with the interferometric method reported earlier, spatial resolution was improved by more than one order of magnitude. In the determination of electron density near the gun orifice, spherical symmetry of the current sheath inhomogeneities and cylindrical symmetry of the compression maximum were assumed. Radial variation of electron density could be reconstructed from the photometric measurements of the transversal variation of schlieren light intensity. Due to small plasma dimensions, electron density was determined directly from the knife edge shift necessary for shadowing the corresponding part of the picture. (J.U.)
An Embedded Ghost-Fluid Method for Compressible Flow in Complex Geometry
Almarouf, Mohamad Abdulilah Alhusain Alali; Samtaney, Ravi
2016-01-01
We present an embedded ghost-fluid method for numerical solutions of the compressible Navier Stokes (CNS) equations in arbitrary complex domains. The PDE multidimensional extrapolation approach of Aslam [1] is used to reconstruct the solution in the ghost-fluid regions and impose boundary conditions at the fluid-solid interface. The CNS equations are numerically solved by the second order multidimensional upwind method of Colella [2] and Saltzman [3]. Block-structured adaptive mesh refinement implemented under the Chombo framework is utilized to reduce the computational cost while keeping high-resolution mesh around the embedded boundary and regions of high gradient solutions. Numerical examples with different Reynolds numbers for low and high Mach number flow will be presented. We compare our simulation results with other reported experimental and computational results. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well. © 2016 Trans Tech Publications.
An Embedded Ghost-Fluid Method for Compressible Flow in Complex Geometry
Almarouf, Mohamad Abdulilah Alhusain Alali
2016-06-03
We present an embedded ghost-fluid method for numerical solutions of the compressible Navier Stokes (CNS) equations in arbitrary complex domains. The PDE multidimensional extrapolation approach of Aslam [1] is used to reconstruct the solution in the ghost-fluid regions and impose boundary conditions at the fluid-solid interface. The CNS equations are numerically solved by the second order multidimensional upwind method of Colella [2] and Saltzman [3]. Block-structured adaptive mesh refinement implemented under the Chombo framework is utilized to reduce the computational cost while keeping high-resolution mesh around the embedded boundary and regions of high gradient solutions. Numerical examples with different Reynolds numbers for low and high Mach number flow will be presented. We compare our simulation results with other reported experimental and computational results. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well. © 2016 Trans Tech Publications.
Applicability of higher-order TVD method to low mach number compressible flows
International Nuclear Information System (INIS)
Akamatsu, Mikio
1995-01-01
Steep gradients of fluid density are the influential factor behind spurious oscillation in numerical solutions of low Mach number (M<<1) compressible flows. The total variation diminishing (TVD) scheme is a promising remedy to overcome this problem and obtain accurate solutions. TVD schemes for high-speed flows are, however, not compatible with the methods commonly used for low Mach number flows with pressure-based formulations. In the present study a higher-order TVD scheme is constructed on a modified form of each individual scalar equation of primitive variables. It is thus clarified that the concept of TVD is applicable to low Mach number flows within the framework of the existing numerical method. Results for test problems of the moving interface of two-component gases with a density ratio ≥ 4 demonstrate the accurate and robust (wiggle-free) profile of the scheme. (author)
A multiscale method for compressible liquid-vapor flow with surface tension*
Directory of Open Access Journals (Sweden)
Jaegle Felix
2013-01-01
Discontinuous Galerkin methods have become a powerful tool for approximating the solution of compressible flow problems. Their direct use for two-phase flow problems with phase transformation is not straightforward, because this type of flow requires detailed tracking of the phase front. In this contribution we treat the fronts as sharp interfaces and propose a novel multiscale approach. It combines an efficient high-order Discontinuous Galerkin solver for the computation in the bulk phases on the macro-scale with the use of a generalized Riemann solver on the micro-scale. The Riemann solver takes into account the effects of moderate surface tension, via the curvature of the sharp interface, as well as phase transformation. First numerical experiments in three space dimensions underline the overall performance of the method.
Energy Technology Data Exchange (ETDEWEB)
Rahman, M.; Rautaheimo, P.; Siikonen, T.
1997-12-31
A numerical investigation is carried out to predict the turbulent fluid flow and heat transfer characteristics of two-dimensional single and three impinging slot jets. Two low-Reynolds-number κ-ε models, namely the classical model of Chien and the explicit algebraic stress model of Gatski and Speziale, are considered in the simulation. A cell-centered finite-volume scheme combined with an artificial compressibility approach is employed to solve the flow equations, using a diagonally dominant alternating direction implicit (DDADI) time integration method. A fully upwinded second-order spatial differencing is adopted to approximate the convective terms. Roe's damping term is used to calculate the flux on the cell face. A multigrid method is utilized for the acceleration of convergence. On average, the heat transfer coefficients predicted by both models show good agreement with the experimental results. (orig.) 17 refs.
Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias
2012-06-01
A Wireless Visual Sensor Network (WVSN) is an emerging platform that combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which require higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. Because of the wireless nature of the application, the energy budget in these networks is limited to the batteries, so the processing at Visual Sensor Nodes (VSNs) and the communication from VSN to server should consume as little energy as possible. Transmitting raw images wirelessly consumes a great deal of energy and requires high communication bandwidth. Data compression methods reduce data efficiently and hence are effective in reducing communication cost in a WVSN. In this paper, we compare the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine which compression algorithms can efficiently compress bi-level images at a computational complexity suitable for the computational platforms used in WVSNs. These results can be used as a road map for the selection of compression methods under different sets of constraints in a WVSN.
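A minimal run-length encoder illustrates why bi-level images compress so well: long runs of identical pixels collapse into (value, length) pairs. This generic sketch is not one of the six methods compared in the paper:

```python
# Run-length coding of one row of a bi-level (1-bit) image.
# Each maximal run of equal pixels becomes a (value, length) pair.

def rle_encode(row):
    runs = []
    for px in row:
        if runs and runs[-1][0] == px:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([px, 1])      # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out
```

An 18-pixel row with three runs is stored as three pairs and decodes back losslessly; real bi-level coders (e.g. those in the fax standards) build on this idea with entropy coding of the run lengths.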
A practical discrete-adjoint method for high-fidelity compressible turbulence simulations
International Nuclear Information System (INIS)
Vishnampet, Ramanathan; Bodony, Daniel J.; Freund, Jonathan B.
2015-01-01
Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvements. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs, though this is predicated on the availability of a sufficiently accurate solution of the forward and adjoint systems. These are challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. Here, we analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space–time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge–Kutta-like scheme, though it would be just first-order accurate if used outside the adjoint formulation for time integration, with finite-difference spatial operators for the adjoint system. Its computational cost only modestly exceeds that of the flow equations. We confirm that
Bresch, D.; Fernández-Nieto, E. D.; Ionescu, I. R.; Vigneaux, P.
In this paper we propose a well-balanced finite volume/augmented Lagrangian method for compressible visco-plastic models focusing on a compressible Bingham type system with applications to dense avalanches. For the sake of completeness we also present a method showing that such a system may be derived for a shallow flow of a rigid-viscoplastic incompressible fluid, namely for incompressible Bingham type fluid with free surface. When the fluid is relatively shallow and spreads slowly, lubrication-style asymptotic approximations can be used to build reduced models for the spreading dynamics, see for instance [N.J. Balmforth et al., J. Fluid Mech (2002)]. When the motion is a little bit quicker, shallow water theory for non-Newtonian flows may be applied, for instance assuming a Navier type boundary condition at the bottom. We start from the variational inequality for an incompressible Bingham fluid and derive a shallow water type system. In the case where Bingham number and viscosity are set to zero we obtain the classical Shallow Water or Saint-Venant equations obtained for instance in [J.F. Gerbeau, B. Perthame, DCDS (2001)]. For numerical purposes, we focus on the one-dimensional in space model: We study associated static solutions with sufficient conditions that relate the slope of the bottom with the Bingham number and domain dimensions. We also propose a well-balanced finite volume/augmented Lagrangian method. It combines well-balanced finite volume schemes for spatial discretization with the augmented Lagrangian method to treat the associated optimization problem. Finally, we present various numerical tests.
Methods for compressible fluid simulation on GPUs using high-order finite differences
Pekkilä, Johannes; Väisälä, Miikka S.; Käpylä, Maarit J.; Käpylä, Petri J.; Anjum, Omer
2017-08-01
We focus on implementing and optimizing a sixth-order finite-difference solver for simulating compressible fluids on a GPU using third-order Runge-Kutta integration. Since graphics processing units perform well in data-parallel tasks, this makes them an attractive platform for fluid simulation. However, high-order stencil computation is memory-intensive with respect to both main memory and the caches of the GPU. We present two approaches for simulating compressible fluids using 55-point and 19-point stencils. We seek to reduce the requirements for memory bandwidth and cache size in our methods by using cache blocking and decomposing a latency-bound kernel into several bandwidth-bound kernels. Our fastest implementation is bandwidth-bound and integrates 343 million grid points per second on a Tesla K40t GPU, achieving a 3.6× speedup over a comparable hydrodynamics solver benchmarked on two Intel Xeon E5-2690v3 processors. Our alternative GPU implementation is latency-bound and achieves the rate of 168 million updates per second.
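The stencil at the heart of such a solver can be shown in plain Python (a stand-in for the GPU kernels; the seven-point sixth-order central-difference weights are standard, while the grid and sample function are illustrative):

```python
# Sixth-order central finite difference for the first derivative,
# applied pointwise to a sampled 1-D field. Each interior point needs
# three neighbors on each side (ghost points at the domain edges).

C = (-1/60, 3/20, -3/4, 0.0, 3/4, -3/20, 1/60)  # 7-point, 6th-order weights

def derivative(f, i, h):
    """df/dx at grid index i of samples f with spacing h (3 <= i <= len(f)-4)."""
    return sum(c * f[i + k] for k, c in zip(range(-3, 4), C)) / h
```

Applied to samples of sin(x), the stencil reproduces cos(x) with an error of order h^6, which is why such schemes can run near the resolution limit of the grid.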
Scaling a network with positive gains to a lossy or gainy network
Koene, J.
1979-01-01
Necessary and sufficient conditions are presented under which it is possible to scale a network with positive gains to a lossy or a gainy network. A procedure to perform such a scaling operation is given.
How useful is slow light in enhancing nonlinear interactions in lossy periodic nanostructures?
DEFF Research Database (Denmark)
Saravi, Sina; Quintero-Bermudez, Rafael; Setzpfandt, Frank
2016-01-01
We investigate analytically, and with nonlinear simulations, the extent to which slow light is useful for enhancing the efficiency of second-harmonic generation in lossy nanostructures, and find that slower is not always better.
Of a castle's legend, an oak's journey and a painting's mystery [Lossi legendist, tamme teekonnast ja maali mõistatusest] / Alar Läänelaid
Läänelaid, Alar, 1951-
2006-01-01
On dendrochronological dating (tree-ring dating), which has clarified the dating of the ceiling decoration of the dining hall of Alatskivi castle, and the determination of the date of completion and the authenticity of Hans van Essen's painting "Still Life with Lobster" and Clara Peeters' "Still Life with Game"
Diffraction of an inhomogeneous plane wave by an impedance wedge in a lossy medium
CSIR Research Space (South Africa)
Manara, G
1998-11-01
The diffraction of an inhomogeneous plane wave by an impedance wedge embedded in a lossy medium is analyzed. The rigorous integral representation for the field is asymptotically evaluated in the context of the uniform geometrical theory...
A method for predicting the impact velocity of a projectile fired from a compressed air gun facility
International Nuclear Information System (INIS)
Attwood, G.J.
1988-03-01
This report describes the development and use of a method for calculating the velocity at impact of a projectile fired from a compressed air gun. The method is based on a simple but effective approach which has been incorporated into a computer program. The method was developed principally for use with the Horizontal Impact Facility at AEE Winfrith but has been adapted so that it can be applied to any compressed air gun of a similar design. The method has been verified by comparison of predicted velocities with test data and the program is currently being used in a predictive manner to specify test conditions for the Horizontal Impact Facility at Winfrith. (author)
Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.
2016-01-01
An immersed boundary method for the compressible Navier-Stokes equation and the additional infrastructure that is needed to solve moving boundary problems and fully coupled fluid-structure interaction is described. All the methods described in this paper were implemented in NASA's LAVA solver framework. The underlying immersed boundary method is based on the locally stabilized immersed boundary method that was previously introduced by the authors. In the present paper this method is extended to account for all aspects that are involved for fluid structure interaction simulations, such as fast geometry queries and stencil computations, the treatment of freshly cleared cells, and the coupling of the computational fluid dynamics solver with a linear structural finite element method. The current approach is validated for moving boundary problems with prescribed body motion and fully coupled fluid structure interaction problems in 2D and 3D. As part of the validation procedure, results from the second AIAA aeroelastic prediction workshop are also presented. The current paper is regarded as a proof of concept study, while more advanced methods for fluid structure interaction are currently being investigated, such as geometric and material nonlinearities, and advanced coupling approaches.
Subramanian, Ramanathan Vishnampet Ganapathi
Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvement. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs. Such methods have enabled sensitivity analysis and active control of turbulence at engineering flow conditions by providing gradient information at computational cost comparable to that of simulating the flow. They accelerate convergence of numerical design optimization algorithms, though this is predicated on the availability of an accurate gradient of the discretized flow equations. This is challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. We analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space–time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge–Kutta-like scheme
Macron Formed Liner Compression as a Practical Method for Enabling Magneto-Inertial Fusion
Energy Technology Data Exchange (ETDEWEB)
Slough, John
2011-12-10
The entry of fusion as a viable, competitive source of power has been stymied by the challenge of finding an economical way to provide for the confinement and heating of the plasma fuel. The main impediment for current nuclear fusion concepts is the complexity and large mass associated with the confinement systems. To take advantage of the smaller scale, higher density regime of magnetic fusion, an efficient method for achieving the compressional heating required to reach fusion gain conditions must be found. The very compact, high energy density plasmoid commonly referred to as a Field Reversed Configuration (FRC) provides for an ideal target for this purpose. To make fusion with the FRC practical, an efficient method for repetitively compressing the FRC to fusion gain conditions is required. A novel approach to be explored in this endeavor is to remotely launch a converging array of small macro-particles (macrons) that merge and form a more massive liner inside the reactor which then radially compresses and heats the FRC plasmoid to fusion conditions. The closed magnetic field in the target FRC plasmoid suppresses the thermal transport to the confining liner significantly lowering the imploding power needed to compress the target. With the momentum flux being delivered by an assemblage of low mass, but high velocity macrons, many of the difficulties encountered with the liner implosion power technology are eliminated. The undertaking to be described in this proposal is to evaluate the feasibility achieving fusion conditions from this simple and low cost approach to fusion. During phase I the design and testing of the key components for the creation of the macron formed liner have been successfully carried out. Detailed numerical calculations of the merging, formation and radial implosion of the Macron Formed Liner (MFL) were also performed. The phase II effort will focus on an experimental demonstration of the macron launcher at full power, and the demonstration
Standard test method for compressive (crushing) strength of fired whiteware materials
American Society for Testing and Materials. Philadelphia
2006-01-01
1.1 This test method covers two test procedures (A and B) for the determination of the compressive strength of fired whiteware materials. 1.2 Procedure A is generally applicable to whiteware products of low- to moderately high-strength levels (up to 150 000 psi or 1030 MPa). 1.3 Procedure B is specifically devised for testing of high-strength ceramics (over 100 000 psi or 690 MPa). 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
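The two procedure thresholds above are quoted in both psi and rounded MPa; the conversion is a one-line calculation. A minimal sketch (the function name is ours, not part of the standard):

```python
PSI_TO_MPA = 6.894757e-3  # 1 psi = 6894.757 Pa = 0.006894757 MPa

def psi_to_mpa(psi: float) -> float:
    """Convert pounds-force per square inch to megapascals."""
    return psi * PSI_TO_MPA

# Procedure A limit: 150 000 psi -> ~1034 MPa (the standard rounds to 1030)
# Procedure B limit: 100 000 psi -> ~689 MPa (rounded to 690)
```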
Douglas, David R; Tennant, Christopher
2015-11-10
A modulated-bending recirculating system that avoids CSR-driven breakdown in emittance compensation by redistributing the bending along the beamline. The modulated-bending recirculating system includes a) larger angles of bending in the initial FODO cells, thereby enhancing the impact of CSR early in the beamline while the bunch is long, and b) a decreased bending angle in the final FODO cells, reducing the effect of CSR while the bunch is short. The invention describes a method for controlling the effects of CSR during recirculation and bunch compression including a) correcting chromatic aberrations, b) correcting lattice- and CSR-induced curvature in the longitudinal phase space by compensating T_566, and c) using lattice perturbations to compensate obvious linear correlations x-dp/p and x'-dp/p.
A New Algorithm for the On-Board Compression of Hyperspectral Images
Directory of Open Access Journals (Sweden)
Raúl Guerra
2018-03-01
Full Text Available Hyperspectral sensors are able to provide information that is useful for many different applications. However, the huge amounts of data collected by these sensors are not exempt from drawbacks, especially in remote sensing environments where the hyperspectral images are collected on board satellites and need to be transferred to the earth's surface. In this situation, an efficient compression of the hyperspectral images is mandatory in order to save bandwidth and storage space. Lossless compression algorithms have been traditionally preferred in order to preserve all the information present in the hyperspectral cube for scientific purposes, despite their limited compression ratio. Nevertheless, the increase in the data rate of new-generation sensors is making the need for higher compression ratios more critical, making it necessary to use lossy compression techniques. A new transform-based lossy compression algorithm, namely the Lossy Compression Algorithm for Hyperspectral Image Systems (HyperLCA), is proposed in this manuscript. This compressor has been developed to achieve high compression ratios with good compression performance at a reasonable computational burden. An extensive set of experiments has been performed in order to evaluate the goodness of the proposed HyperLCA compressor using different calibrated and uncalibrated hyperspectral images from the AVIRIS and Hyperion sensors. The results provided by the proposed HyperLCA compressor have been evaluated and compared against those produced by the most relevant state-of-the-art compression solutions. The theoretical and experimental evidence indicates that the proposed algorithm represents an excellent option for the lossy compression of hyperspectral images, especially for applications where the available computational resources are limited, such as on-board scenarios.
Kandemir, Ekrem; Borekci, Selim; Cetin, Numan S.
2018-04-01
Photovoltaic (PV) power generation has been widely used in recent years, and techniques for increasing power efficiency are among the most important issues. The available maximum power of a PV panel depends on environmental conditions such as solar irradiance and temperature. To extract the maximum available power from a PV panel, various maximum-power-point tracking (MPPT) methods are used. In this work, two different MPPT methods were implemented for a 150-W PV panel. The first method, known as incremental conductance (Inc. Cond.) MPPT, determines the maximum power by measuring the derivative of the PV voltage and current. The other method is based on reduced-rule compressed fuzzy logic control (RR-FLC), with which it is relatively easier to determine the maximum power because a single input variable is used to reduce computing loads. In this study, a 150-W PV panel system model was realized using these MPPT methods in MATLAB and the results were compared. According to the simulation results, the proposed RR-FLC-based MPPT could increase the response rate and tracking accuracy by 4.66% under standard test conditions.
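The incremental-conductance rule mentioned above compares dI/dV with -I/V: the two are equal at the maximum power point, since dP/dV = I + V·dI/dV = 0 there. A minimal sketch of one tracking step follows (not the authors' MATLAB model; the step size and the quadratic panel curve used for testing are made-up stand-ins):

```python
def inc_cond_step(v, i, v_prev, i_prev, v_ref, step=0.5):
    """One update of the incremental-conductance MPPT rule.

    At the maximum power point dP/dV = 0, i.e. dI/dV = -I/V.
    Left of the MPP dI/dV > -I/V, so the voltage reference is raised;
    right of it the reference is lowered.
    """
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:            # irradiance rose at constant voltage
            v_ref += step
        elif di < 0:
            v_ref -= step
    else:
        g = di / dv           # incremental conductance dI/dV
        if g > -i / v:        # operating point left of the MPP
            v_ref += step
        elif g < -i / v:      # operating point right of the MPP
            v_ref -= step
    return v_ref
```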
Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.
2018-01-01
Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a rolling element bearing. Of these studies, methods based on compressed sensing (CS) have received some attention recently due to their ability to allow one to sample below the Nyquist sampling rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS-based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse-autoencoder-based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on a sparse autoencoder is used for learning over-complete sparse representations of these compressed datasets. Finally, the fault classification is achieved in two stages: pre-training classification based on a stacked autoencoder and a softmax regression layer forms the deep-network stage (the first stage), and re-training classification based on the backpropagation (BP) algorithm forms the fine-tuning stage (the second stage). The experimental results show that the proposed method is able to achieve high levels of accuracy even with extremely compressed measurements compared with the existing techniques.
Energy Technology Data Exchange (ETDEWEB)
Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Benton, Nathanael [Nexant, Inc., San Francisco, CA (United States); Burns, Patrick [Nexant, Inc., San Francisco, CA (United States)
2017-10-18
Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: High-efficiency/variable speed drive (VSD) compressor replacing modulating, load/unload, or constant-speed compressor; and Compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.
International Nuclear Information System (INIS)
Petitpas, Fabien; Franquet, Erwin; Saurel, Richard; Le Metayer, Olivier
2007-01-01
The relaxation-projection method developed in Saurel et al. [R. Saurel, E. Franquet, E. Daniel, O. Le Metayer, A relaxation-projection method for compressible flows. Part I: The numerical equation of state for the Euler equations, J. Comput. Phys. (2007) 822-845] is extended to the non-conservative hyperbolic multiphase flow model of Kapila et al. [A.K. Kapila, Menikoff, J.B. Bdzil, S.F. Son, D.S. Stewart, Two-phase modeling of deflagration to detonation transition in granular materials: reduced equations, Physics of Fluids 13(10) (2001) 3002-3024]. This model has the ability to treat multi-temperature mixtures evolving with a single pressure and velocity and is particularly interesting for the computation of interface problems with compressible materials as well as wave propagation in heterogeneous mixtures. The non-conservative character of this model, however, poses computational challenges in the presence of shocks. The first issue is related to the Riemann problem resolution, which necessitates shock jump conditions. Thanks to the Rankine-Hugoniot relations proposed and validated in Saurel et al. [R. Saurel, O. Le Metayer, J. Massoni, S. Gavrilyuk, Shock jump conditions for multiphase mixtures with stiff mechanical relaxation, Shock Waves 16 (3) (2007) 209-232], exact and approximate two-shock Riemann solvers are derived. However, the Riemann solver is only a part of a numerical scheme, and non-conservative variables pose extra difficulties for the projection or cell average of the solution. It is shown that conventional Godunov schemes are unable to converge to the exact solution for strong multiphase shocks. This is due to the incorrect partition of the energies or entropies in the cell-averaged mixture. To circumvent this difficulty a specific Lagrangian scheme is developed. The correct partition of the energies is achieved by using an artificial heat exchange in the shock layer. With the help of an asymptotic analysis this heat exchange takes a similar form as
Method for Multiple Targets Tracking in Cognitive Radar Based on Compressed Sensing
Directory of Open Access Journals (Sweden)
Yang Jun
2016-02-01
Full Text Available A multiple-target cognitive radar tracking method based on Compressed Sensing (CS) is proposed. In this method, the theory of CS is introduced to the cognitive radar tracking process in a multiple-target scenario. The echo signal is sparsely represented, the sparsifying and measurement matrices are designed accordingly, and the measurement signal is then reconstructed under the down-sampling condition. On the receiving end, considering that the traditional particle filter suffers from degeneracy and requires a large number of particles, a particle swarm optimization particle filter is used to track the targets. On the transmitting end, the Posterior Cramér-Rao Bound (PCRB) of the tracking accuracy is deduced, and the radar waveform parameters are further cognitively designed using the PCRB. Simulation results show that the proposed method not only reduces the data quantity, but also provides better tracking performance compared with the traditional method.
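The sparse recovery step that CS methods such as the one above rely on can be illustrated with orthogonal matching pursuit (OMP), one standard reconstruction algorithm. This is a generic textbook sketch, not the authors' radar-specific design, and the matrix sizes in the test are arbitrary:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x.

    Greedily picks the column of A most correlated with the residual,
    then re-fits y on all selected columns by least squares.
    """
    residual, support = y.astype(float), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```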
Taylor, Ellen Meredith
Weighted essentially non-oscillatory (WENO) methods have been developed to simultaneously provide robust shock-capturing in compressible fluid flow and avoid excessive damping of fine-scale flow features such as turbulence. This is accomplished by constructing multiple candidate numerical stencils that adaptively combine so as to provide high order of accuracy and high bandwidth-resolving efficiency in continuous flow regions while averting instability-provoking interpolation across discontinuities. Under certain conditions in compressible turbulence, however, numerical dissipation remains unacceptably high even after optimization of the linear optimal stencil combination that dominates in smooth regions. The remaining nonlinear error arises from two primary sources: (i) the smoothness measurement that governs the application of adaptation away from the optimal stencil and (ii) the numerical properties of individual candidate stencils that govern numerical accuracy when adaptation engages. In this work, both of these sources are investigated, and corrective modifications to the WENO methodology are proposed and evaluated. Excessive nonlinear error due to the first source is alleviated through two separately considered procedures appended to the standard smoothness measurement technique that are designated the "relative smoothness limiter" and the "relative total variation limiter." In theory, appropriate values of their associated parameters should be insensitive to flow configuration, thereby sidestepping the prospect of costly parameter tuning; and this expectation of broad effectiveness is assessed in direct numerical simulations (DNS) of one-dimensional inviscid test problems, three-dimensional compressible isotropic turbulence of varying Reynolds and turbulent Mach numbers, and shock/isotropic-turbulence interaction (SITI). In the process, tools for efficiently comparing WENO adaptation behavior in smooth versus shock-containing regions are developed. The
IPTV multicast with peer-assisted lossy error control
Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd
2010-07-01
Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noise in DSL links. In existing systems, the retransmission function is provided by Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how packet repairs can be delivered in a timely, reliable and decentralized manner using a combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves resistance to impulse noise.
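The FEC side of such a system can be illustrated at its simplest: one XOR parity packet per block lets a receiver rebuild any single lost packet without a retransmission. A toy sketch (production IPTV FEC uses stronger codes such as Reed-Solomon or Pro-MPEG COP3; the packet contents here are made up):

```python
def xor_parity(packets):
    """Compute a single XOR parity packet over equal-length payloads,
    as in the simplest (n, n-1) block FEC used for multicast repair."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover(received, parity):
    """Rebuild the one missing packet (the None entry) by XOR-ing the
    parity with every packet that did arrive."""
    missing = received.index(None)
    rec = parity
    for idx, p in enumerate(received):
        if idx != missing:
            rec = bytes(a ^ b for a, b in zip(rec, p))
    return rec
```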
Qin, W.; Yin, J.; Yao, H.
2013-12-01
On May 24th, 2013, a Mw 8.3 normal-faulting earthquake occurred at a depth of approximately 600 km beneath the Sea of Okhotsk, Russia. It is one of the few mega-earthquakes ever to have occurred at such a great depth. We use the time-domain iterative backprojection (IBP) method [1] and the frequency-domain compressive sensing (CS) technique [2] to investigate the rupture process and energy radiation of this mega-earthquake. We currently use teleseismic P-wave data from about 350 stations of USArray. IBP is an improved version of the traditional backprojection method, which more accurately locates subevents (energy bursts) during earthquake rupture and determines the rupture speeds. The total rupture duration of this earthquake is about 35 s with a nearly N-S rupture direction. We find that the rupture is bilateral in the first 15 seconds with slow rupture speeds: about 2.5 km/s for the northward rupture and about 2 km/s for the southward rupture. After that, the northward rupture stopped while the rupture towards the south continued. The average southward rupture speed between 20-35 s is approximately 5 km/s, lower than the shear wave speed (about 5.5 km/s) at the hypocenter depth. The total rupture length is about 140 km, in a nearly N-S direction, with a southward rupture length of about 100 km and a northward rupture length of about 40 km. We also use the CS method, a sparse source inversion technique, to study the frequency-dependent seismic radiation of this mega-earthquake. We observe clear along-strike frequency dependence of the spatial and temporal distribution of seismic radiation and rupture process. The results from both methods are generally similar. In the next step, we will use data from dense arrays in southwest China and also global stations for further analysis in order to more comprehensively study the rupture process of this deep mega-earthquake. Reference [1] Yao H, Shearer P M, Gerstoft P. Subevent location and rupture imaging using iterative backprojection for
A versatile embedded boundary adaptive mesh method for compressible flow in complex geometry
Almarouf, Mohamad Abdulilah Alhusain Alali; Samtaney, Ravi
2017-02-25
We present an embedded ghost-fluid method for numerical solutions of the compressible Navier-Stokes (CNS) equations in arbitrary complex domains. A PDE multidimensional extrapolation approach is used to reconstruct the solution in the ghost-fluid regions and to impose boundary conditions on the fluid-solid interface, coupled with a multi-dimensional algebraic interpolation for freshly cleared cells. The CNS equations are numerically solved by the second-order multidimensional upwind method. Block-structured adaptive mesh refinement, implemented with the Chombo framework, is utilized to reduce the computational cost while keeping a high-resolution mesh around the embedded boundary and in regions of high-gradient solutions. The versatility of the method is demonstrated via several numerical examples, in both static and moving geometry, ranging from low Mach number nearly incompressible flows to supersonic flows. Our simulation results are extensively verified against other numerical results and validated against available experimental results where applicable. The significance and advantages of our implementation, which revolve around balancing between the solution accuracy and implementation difficulties, are briefly discussed as well.
Scagliarini, Andrea; Biferale, L.; Sbragaglia, M.; Sugiyama, K.; Toschi, F.
2010-01-01
We compute the continuum thermohydrodynamical limit of a new formulation of lattice kinetic equations for thermal compressible flows, recently proposed by Sbragaglia et al. [J. Fluid Mech. 628, 299 (2009)] . We show that the hydrodynamical manifold is given by the correct compressible
Directory of Open Access Journals (Sweden)
Jidong Wang
2016-01-01
Full Text Available The event-triggered energy-to-peak filtering for polytopic discrete-time linear systems is studied with consideration of a lossy network and quantization error. Because of the communication imperfections from the packet dropout of the lossy link, the event-triggered condition used to determine the data release instant at the event generator (EG) cannot be directly applied to update the filter input at the zero-order holder (ZOH) when performing filter performance analysis and synthesis. In order to balance such nonuniform time series between the triggered instant of the EG and the updated instant of the ZOH, two event-triggered conditions are defined, respectively, after which a worst-case bound on the number of consecutive packet losses of the transmitted data from the EG is given, which marginally guarantees the effectiveness of the filter that will be designed based on the event-triggered updating condition of the ZOH. Then, the filter performance analysis conditions are obtained under the assumption that the maximum number of packet losses allowable is the worst-case bound. In what follows, a two-stage LMI-based alternative optimization approach is proposed to separately design the filter, which reduces the conservatism of the traditional linearization method of filter analysis conditions. Subsequently, a codesign algorithm is developed to determine the communication and filter parameters simultaneously. Finally, an illustrative example is provided to verify the validity of the obtained results.
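The event-triggered release rule at the EG can be sketched generically: a sample is transmitted only when its deviation from the last released sample exceeds a fraction of its own energy. This is a common quadratic trigger used for illustration here; the threshold form and the σ value are our assumptions, not the paper's exact condition:

```python
def event_triggered(y, y_last, sigma=0.2):
    """Quadratic event trigger at the event generator (EG):
    release y only when ||y - y_last||^2 > sigma * ||y||^2."""
    err = sum((a - b) ** 2 for a, b in zip(y, y_last))
    norm = sum(a ** 2 for a in y)
    return err > sigma * norm

def run_eg(samples, sigma=0.2):
    """Filter a measurement stream, returning only the released samples."""
    y_last = samples[0]
    released = [samples[0]]           # first sample is always sent
    for y in samples[1:]:
        if event_triggered(y, y_last, sigma):
            released.append(y)
            y_last = y                # ZOH input updates on release
    return released
```

Small deviations are suppressed, which is how the scheme saves network bandwidth at the cost of the analysis complications the abstract describes.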
Energy Technology Data Exchange (ETDEWEB)
Mower, T.E.; Higgins, J.D. [Colorado School of Mines, Golden, CO (USA). Dept. of Geology and Geological Engineering; Yang, I.C. [Geological Survey, Denver, CO (USA). Water Resources Div.
1989-12-31
To support the study of the hydrologic system in the unsaturated zone at Yucca Mountain, Nevada, two extraction methods were examined to obtain representative, uncontaminated pore-water samples from unsaturated tuff. Results indicate that triaxial compression, which uses a standard cell, can remove pore water from nonwelded tuff that has an initial moisture content greater than 11% by weight; uniaxial compression, which uses a specially fabricated cell, can extract pore water from nonwelded tuff that has an initial moisture content greater than 8% and from welded tuff that has an initial moisture content greater than 6.5%. For the ambient moisture conditions of Yucca Mountain tuffs, uniaxial compression is the most efficient method of pore-water extraction. 12 refs., 7 figs., 2 tabs.
Lossless medical image compression with a hybrid coder
Way, Jing-Dar; Cheng, Po-Yuen
1998-10-01
The volume of medical image data is expected to increase dramatically in the next decade due to the widespread use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, medical images must be recorded and transmitted losslessly before they reach the users to avoid misdiagnosis due to lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretical bound and operate in near real-time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed images is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than run-length entropy coders such as arithmetic, Huffman and Lempel-Ziv coders.
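The two-stage idea (a lossy coder plus a losslessly coded residual is lossless overall) can be sketched with stand-ins: a uniform quantizer replaces the embedded wavelet coder, and a simple run-length coder compresses the residual. This illustrates the principle only, not the paper's coder:

```python
import itertools

def rle_encode(data):
    """Run-length encode a sequence as (value, count) pairs."""
    return [(v, len(list(g))) for v, g in itertools.groupby(data)]

def rle_decode(pairs):
    return [v for v, n in pairs for _ in range(n)]

def hybrid_compress(pixels, q=8):
    """Toy two-stage coder: quantization is the lossy stage, and the
    residual (exact minus lossy) is run-length coded, so the pair of
    outputs reconstructs the input exactly."""
    lossy = [q * round(p / q) for p in pixels]        # lossy approximation
    residual = [p - l for p, l in zip(pixels, lossy)]  # small-valued residual
    return lossy, rle_encode(residual)

def hybrid_decompress(lossy, residual_rle):
    return [l + r for l, r in zip(lossy, rle_decode(residual_rle))]
```

The residual values are small and clustered around zero, which is exactly what makes the second-stage entropy or run-length coder effective.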
On the Separation of Quantum Noise for Cardiac X-Ray Image Compression
de Bruijn, F.J.; Slump, Cornelis H.
1996-01-01
In lossy medical image compression, the requirements for the preservation of diagnostic integrity cannot be easily formulated in terms of a perceptual model, especially since human visual perception depends on numerous factors such as the viewing conditions and psycho-visual factors.
A ghost fluid method for sharp interface simulations of compressible multiphase flows
International Nuclear Information System (INIS)
Majidi, Sahand; Afshari, Asghar
2016-01-01
A ghost-fluid-based computational tool is developed to study a wide range of compressible multiphase flows involving strong shocks and contact discontinuities while accounting for surface tension, viscous stresses and gravitational forces. The solver utilizes a constrained reinitialization method to predict the interface configuration at each time step. The surface tension effect is handled via an exact interface Riemann problem solver. Interfacial viscous stresses are approximated by considering continuous velocity and viscous stress across the interface. To assess the performance of the solver several benchmark problems are considered: a one-dimensional gas-water shock tube problem, shock-bubble interaction, air cavity collapse in water, underwater explosion, Rayleigh-Taylor instability, and ellipsoidal drop oscillations. Results obtained from the numerical simulations indicate that the numerical methodology performs reasonably well in predicting flow features and exhibits very good agreement with prior experimental and numerical observations. To further examine the accuracy of the developed ghost fluid solver, the obtained results are compared to those of a conventional diffuse-interface solver. The comparison shows the capability of our ghost fluid method in reproducing the experimentally observed flow characteristics while revealing more details regarding topological changes of the interface.
Soni, V.; Hadjadj, A.; Roussel, O.
2017-12-01
In this paper, a fully adaptive multiresolution (MR) finite difference scheme with a time-varying tolerance is developed to study compressible fluid flows containing shock waves in interaction with solid obstacles. To ensure adequate resolution near rigid bodies, the MR algorithm is combined with an immersed boundary method based on a direct-forcing approach in which the solid object is represented by a continuous solid-volume fraction. The resulting algorithm forms an efficient tool capable of solving linear and nonlinear waves on arbitrary geometries. Through a one-dimensional scalar wave equation, the accuracy of the MR computation is, as expected, seen to decrease in time when using a constant MR tolerance, owing to the accumulation of error. To overcome this problem, a variable tolerance formulation is proposed and assessed through a new quality criterion, to ensure a time-converged solution of suitable resolution quality. The newly developed algorithm, coupled with high-resolution spatial and temporal approximations, is successfully applied to shock-bluff body and shock-diffraction problems solving the Euler and Navier-Stokes equations. Results show excellent agreement with the available numerical and experimental data, thereby demonstrating the efficiency and performance of the proposed method.
Diffuse-Interface Capturing Methods for Compressible Two-Phase Flows
Saurel, Richard; Pantano, Carlos
2018-01-01
Simulation of compressible flows became a routine activity with the appearance of shock-/contact-capturing methods. These methods can determine all waves, particularly discontinuous ones. However, additional difficulties may appear in two-phase and multimaterial flows due to the abrupt variation of thermodynamic properties across the interfacial region, with discontinuous thermodynamical representations at the interfaces. To overcome this difficulty, researchers have developed augmented systems of governing equations to extend the capturing strategy. These extended systems, reviewed here, are termed diffuse-interface models, because they are designed to compute flow variables correctly in numerically diffused zones surrounding interfaces. In particular, they facilitate coupling the dynamics on both sides of the (diffuse) interfaces and tend to the proper pure fluid-governing equations far from the interfaces. This strategy has become efficient for contact interfaces separating fluids that are governed by different equations of state, in the presence or absence of capillary effects, and with phase change. More sophisticated materials than fluids (e.g., elastic-plastic materials) have been considered as well.
Research of Block-Based Motion Estimation Methods for Video Compression
Directory of Open Access Journals (Sweden)
Tropchenko Andrey
2016-08-01
This work is a review of the block-based algorithms used for motion estimation in video compression. It surveys different types of block-based algorithms, ranging from the simplest, Full Search, to fast adaptive algorithms such as Hierarchical Search. The algorithms evaluated in this paper are widely accepted in the video compression community and have been used in implementing various standards, such as MPEG-4 Visual and H.264. The work also presents a very brief introduction to the overall flow of video compression.
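As a hedged illustration of the Full Search baseline described in this review (the block size, search radius and frame layout below are arbitrary choices, not taken from the paper), a minimal sum-of-absolute-differences block matcher can be sketched as:

```python
# Minimal Full Search block matching: for one block of the current frame,
# every candidate offset within a search window of the reference frame is
# scored with the sum of absolute differences (SAD); the offset with the
# lowest SAD becomes the motion vector.

def sad(cur, ref, bx, by, dx, dy, n):
    """SAD between the n x n block of `cur` at (bx, by) and the block of
    `ref` displaced by (dx, dy)."""
    total = 0
    for y in range(n):
        for x in range(n):
            total += abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
    return total

def full_search(cur, ref, bx, by, n=4, radius=2):
    """Exhaustively test every offset in [-radius, radius]^2 that keeps the
    displaced block inside the frame; return (best_dx, best_dy, best_sad)."""
    h, w = len(ref), len(ref[0])
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if 0 <= by + dy and by + dy + n <= h and 0 <= bx + dx and bx + dx + n <= w:
                score = sad(cur, ref, bx, by, dx, dy, n)
                if best is None or score < best[2]:
                    best = (dx, dy, score)
    return best
```

Fast algorithms such as Three-Step or Hierarchical Search visit only a subset of these candidate offsets, trading a small loss in match quality for a large reduction in SAD evaluations.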
International Nuclear Information System (INIS)
Motte, R.; Braeunig, J.P.; Peybernes, M.
2012-01-01
As the simulation of compressible flows with several materials is essential for applications studied within the CEA-DAM, the authors propose an approach based on finite volumes with centred variables for the resolution of the compressible Euler equations. Moreover, they allow materials to slide with respect to each other, as is the case for water and air, for example. A conservation law is written for each material on a hybrid grid, and a condition of contact between materials is expressed in the form of fluxes. The method is illustrated by the case of an intense shock propagating in water and interacting with an air bubble, which is strongly deformed and compressed.
Lossy transmission line model of hydrofractured well dynamics
Energy Technology Data Exchange (ETDEWEB)
Patzek, T.W. [Department of Materials Science and Mineral Engineering, University of California at Berkeley, Berkeley, CA (United States); De, A. [Earth Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States)
2000-01-01
The real-time detection of hydrofracture growth is crucial to the successful operation of water, CO2 or steam injection wells in low-permeability reservoirs and to the prevention of subsidence and well failure. In this paper, we describe the propagation of very low frequency (1-10 to 100 Hz) Stoneley waves in a fluid-filled wellbore and their interactions with the fundamental wave mode in a vertical hydrofracture. We demonstrate that the Stoneley wave loses energy to the fracture and that the energy transfer from the wellbore to the fracture opening is most efficient in soft rocks. We conclude that placing the wave source and receivers beneath the injection packer provides the most efficient means of hydrofracture monitoring. We then present the lossy transmission line model of wellbore and fracture for the detection and characterization of fracture state and volume. We show that this model captures the wellbore and fracture geometry, the physical properties of the injected fluid and the dynamics of the wellbore-fracture system. The model is then compared with experimentally measured well responses. The simulated responses are in good agreement with published experimental data from several water injection wells with depths ranging from 1000 ft to 9000 ft. Hence, we conclude that the transmission line model of water injectors adequately captures wellbore and fracture dynamics. Using an extensive data set for the South Belridge Diatomite waterfloods, we demonstrate that even for very shallow wells the fracture size and state can be adequately recognized at the wellhead. Finally, we simulate the effects of hydrofracture extension on the transient response to a pulse signal generated at the wellhead. We show that hydrofracture extensions can indeed be detected by monitoring the wellhead pressure at sufficiently low frequencies.
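As a hedged sketch of the transmission-line viewpoint used above (the per-unit-length parameters below are illustrative placeholders, not the paper's wellbore-fracture parameterization), the standard telegrapher's-equation quantities for a lossy line follow from the series impedance R + jωL and shunt admittance G + jωC:

```python
import cmath

def lossy_line(R, L, G, C, f):
    """Propagation constant and characteristic impedance of a lossy
    transmission line from per-unit-length parameters at frequency f (Hz)."""
    w = 2 * cmath.pi * f
    series = R + 1j * w * L      # series impedance per unit length
    shunt = G + 1j * w * C       # shunt admittance per unit length
    gamma = cmath.sqrt(series * shunt)   # attenuation + phase constant
    z0 = cmath.sqrt(series / shunt)
    return gamma, z0

def input_impedance(z0, gamma, zl, length):
    """Impedance seen at the sending end of a line of the given length
    terminated in load impedance zl."""
    t = cmath.tanh(gamma * length)
    return z0 * (zl + z0 * t) / (z0 + zl * t)
```

For a matched termination (zl equal to z0) the input impedance reduces to z0 regardless of line length, which is a convenient sanity check on the implementation.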
A lossy graph model for delay reduction in generalized instantly decodable network coding
Douik, Ahmed S.
2014-06-01
The problem of minimizing the decoding delay in generalized instantly decodable network coding (G-IDNC) for both perfect and lossy feedback scenarios is formulated as a maximum weight clique problem over the G-IDNC graph. In this letter, we introduce a new lossy G-IDNC graph (LG-IDNC) model to further minimize the decoding delay in lossy feedback scenarios. Whereas the G-IDNC graph represents only doubtless combinable packets, the LG-IDNC graph also represents uncertain packet combinations, arising from lossy feedback events, when the expected decoding delay of XORing them among themselves or with other certain packets is lower than that expected when sending these packets separately. We compare the decoding delay performance of the LG-IDNC and G-IDNC graphs through extensive simulations. Numerical results show that our new LG-IDNC graph formulation outperforms the G-IDNC graph formulation in all lossy feedback situations and achieves significant improvement in the decoding delay, especially when the feedback erasure probability is higher than the packet erasure probability. © 2012 IEEE.
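The formulation above reduces delay minimization to a maximum weight clique search over the (L)G-IDNC graph. As a hedged sketch only (the letter relies on the exact clique formulation; the vertex names and weights here are invented for illustration), a simple greedy heuristic over such a graph looks like:

```python
def greedy_max_weight_clique(weights, adj):
    """Greedy heuristic for the maximum weight clique problem. `weights[v]`
    is the vertex weight and `adj[v]` the set of neighbors of v. Vertices
    are considered in decreasing weight order and kept only if adjacent to
    every vertex already chosen. This is a sketch, not the exact solver
    used in the G-IDNC literature."""
    order = sorted(weights, key=weights.get, reverse=True)
    clique = []
    for v in order:
        if all(v in adj[u] for u in clique):   # v adjacent to all chosen
            clique.append(v)
    return clique
```

Greedy selection is not optimal in general, but it illustrates how vertex weights (e.g., expected delay reductions) and adjacency (combinable packet pairs) drive the selection.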
DEFF Research Database (Denmark)
Larsen, Jon Steffen; Santos, Ilmar
2015-01-01
An efficient finite element scheme for solving the non-linear Reynolds equation for compressible fluid coupled to compliant structures is presented. The method is general and fast and can be used in the analysis of airfoil bearings with simplified or complex foil structure models. To illustrate...
Loh, L C; Eg, K P; Puspanathan, P; Tang, S P; Yip, K S; Vijayasingham, P; Thayaparan, T; Kumar, S
2004-03-01
Airway inflammation can be demonstrated by the modern method of sputum induction using an ultrasonic nebulizer and hypertonic saline. We studied whether a compressed-air nebulizer and isotonic saline, which are commonly available and cost less, are as effective in inducing sputum in normal adult subjects as the above-mentioned tools. Sixteen subjects underwent weekly sputum induction in the following manner: ultrasonic nebulizer (Medix Sonix 2000, Clement Clarke, UK) using hypertonic saline, ultrasonic nebulizer using isotonic saline, compressed-air nebulizer (BestNeb, Taiwan) using hypertonic saline, and compressed-air nebulizer using isotonic saline. Overall, the use of an ultrasonic nebulizer and hypertonic saline yielded significantly higher total sputum cell counts and a higher percentage of cell viability than compressed-air nebulizers and isotonic saline. With the latter, there was a trend towards squamous cell contamination. The proportion of various sputum cell types was not significantly different between the groups, and the reproducibility in sputum macrophages and neutrophils was high (intraclass correlation coefficient, r [95% CI]: 0.65 [0.30-0.91] and 0.58 [0.22-0.89]), including with compressed-air nebulizers and isotonic saline. We conclude that in normal subjects, although both nebulizer and saline types can induce sputum with a reproducible cellular profile, ultrasonic nebulizers and hypertonic saline are more effective but less well tolerated.
Molaeimanesh, G. R.; Nazemian, M.
2017-08-01
Proton exchange membrane (PEM) fuel cells, which have great potential for application in vehicle propulsion systems, face a promising future. However, to overcome the existing challenges to their wider commercialization, further fundamental research is indispensable. The effects of gas diffusion layer (GDL) compression on the performance of a PEM fuel cell are not well understood, especially via pore-scale simulation techniques that capture the fibrous microstructure of the GDL. In the current investigation, a stochastic microstructure reconstruction method is proposed which can capture GDL microstructure changes under compression. Afterwards, a lattice Boltzmann pore-scale simulation technique is adopted to simulate the reactive gas flow through 10 different cathode electrodes with dissimilar carbon paper GDLs produced from five different compression levels and two different carbon fiber diameters. The distributions of oxygen mole fraction, water vapor mole fraction and current density for the simulated cases are presented and analyzed. The simulation results demonstrate that when the fiber diameter is 9 μm, adding compression leads to a lower average current density, while when the fiber diameter is 7 μm, the compression effect is not monotonic.
Directory of Open Access Journals (Sweden)
Nan Ji Jin
2017-01-01
The compressive strength of vinyl ester polymer concrete is predicted using the maturity method. The compressive strength increased rapidly until the curing age of 24 h and thereafter increased slowly until the curing age of 72 h. As the MMA content increased, the compressive strength decreased. Furthermore, as the curing temperature decreased, the compressive strength decreased. For vinyl ester polymer concrete, the datum temperature, ranging from −22.5 to −24.6°C, decreased as the MMA content increased. The maturity index equation for cement concrete cannot be applied to polymer concrete, and the maturity of vinyl ester polymer concrete can only be estimated through control of the time interval Δt. Thus, this study introduced a suitable scaled-down factor (n) for determining the maturity of polymer concrete, and a factor of 0.3 was the most suitable. Also, among the dose-response models, the DR-HILL model was determined to be applicable for compressive strength prediction of vinyl ester polymer concrete. For the parameters of the prediction model, applying parameters obtained by combining all data from the three different MMA contents was deemed acceptable. The study results could be useful for the quality control of vinyl ester polymer concrete and nondestructive prediction of early-age strength.
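The maturity calculation described above follows the Nurse-Saul form M = Σ (T − T₀)Δt with a datum temperature T₀. A minimal sketch, assuming the scaled-down factor n is applied as a simple multiplier on the time interval (the study only states that Δt must be controlled through n, so this exact form is an assumption):

```python
def nurse_saul_maturity(temps_c, dt_hours, datum_c=-24.0, n=0.3):
    """Nurse-Saul maturity index M = sum((T - T0) * n * dt) in degC-hours.

    temps_c  : recorded curing temperature (degC) for each interval
    dt_hours : length of each interval (hours)
    datum_c  : datum temperature T0; the study reports -22.5 to -24.6 degC
               for vinyl ester polymer concrete depending on MMA content
    n        : scaled-down factor on the time interval (0.3 was found most
               suitable); applying it as a plain multiplier is an assumption
    """
    return sum((t - datum_c) * n * dt_hours for t in temps_c)
```

For example, 24 hourly readings at a constant 20 degC with T0 = -24 degC give M = (20 + 24) x 0.3 x 24 = 316.8 degC-hours; a dose-response model such as DR-HILL would then map such maturity values to predicted compressive strength.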
An Improved Ghost-cell Immersed Boundary Method for Compressible Inviscid Flow Simulations
Chi, Cheng
2015-01-01
This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to the commonly used approaches, in the present work ghost cells are mirrored through the boundary
An improved ghost-cell immersed boundary method for compressible flow simulations
Chi, Cheng; Lee, Bok Jik; Im, Hong G.
2016-01-01
This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to the commonly used approaches, in the present work ghost cells are mirrored through the boundary
Energy Technology Data Exchange (ETDEWEB)
Fechter, Stefan, E-mail: stefan.fechter@iag.uni-stuttgart.de [Institut für Aerodynamik und Gasdynamik, Universität Stuttgart, Pfaffenwaldring 21, 70569 Stuttgart (Germany); Munz, Claus-Dieter, E-mail: munz@iag.uni-stuttgart.de [Institut für Aerodynamik und Gasdynamik, Universität Stuttgart, Pfaffenwaldring 21, 70569 Stuttgart (Germany); Rohde, Christian, E-mail: Christian.Rohde@mathematik.uni-stuttgart.de [Institut für Angewandte Analysis und Numerische Simulation, Universität Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart (Germany); Zeiler, Christoph, E-mail: Christoph.Zeiler@mathematik.uni-stuttgart.de [Institut für Angewandte Analysis und Numerische Simulation, Universität Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart (Germany)
2017-05-01
The numerical approximation of non-isothermal liquid–vapor flow within the compressible regime is a difficult task because complex physical effects at the phase interfaces can govern the global flow behavior. We present a sharp interface approach which treats the interface as a shock-wave like discontinuity. Any mixing of fluid phases is avoided by using the flow solver in the bulk regions only, and a ghost-fluid approach close to the interface. The coupling states for the numerical solution in the bulk regions are determined by the solution of local two-phase Riemann problems across the interface. The Riemann solution accounts for the relevant physics by enforcing appropriate jump conditions at the phase boundary. A wide variety of interface effects can be handled in a thermodynamically consistent way. This includes surface tension or mass/energy transfer by phase transition. Moreover, the local normal speed of the interface, which is needed to calculate the time evolution of the interface, is given by the Riemann solution. The interface tracking itself is based on a level-set method. The focus in this paper is the description of the two-phase Riemann solver and its usage within the sharp interface approach. One-dimensional problems are selected to validate the approach. Finally, the three-dimensional simulation of a wobbling droplet and a shock droplet interaction in two dimensions are shown. In both problems phase transition and surface tension determine the global bulk behavior.
Prasanna Venkatesh, G. J.; Vivek, S. S.; Dhinakaran, G.
2017-07-01
In the majority of civil engineering applications, the basic building blocks are masonry units. These units are developed into a monolithic structure by plastering, with the help of binding agents, namely mud, lime, cement and their combinations. In recent advancements, the study of mortar plays an important role in crack repair, structural rehabilitation, retrofitting, pointing and plastering operations. The rheology of mortar includes flowability, passing ability and filling ability, which are analogous to the behaviour of self-compacting concrete. In self-compacting (SC) mortar cubes, cement was replaced by mineral admixtures, namely silica fume (SF) from 5% to 20% (in increments of 5%), metakaolin (MK) from 10% to 30% (in increments of 10%) and ground granulated blast furnace slag (GGBS) from 25% to 75% (in increments of 25%). The ratio between cement and fine aggregate was kept constant at 1:2 for all normal and self-compacting mortar mixes. Accelerated curing, namely electric oven curing at a differential temperature of 128°C for a period of 4 hours, was adopted. It was found that the compressive strength obtained from both the normal and electric oven methods of curing was higher for self-compacting mortar cubes than for normal mortar cubes. Cement replacement by 15% SF, 20% MK and 25% GGBS yielded higher strength under both curing conditions.
A spectral element-FCT method for the compressible Euler equations
International Nuclear Information System (INIS)
Giannakouros, J.; Karniadakis, G.E.
1994-01-01
A new algorithm based on spectral element discretizations and flux-corrected transport concepts is developed for the solution of the Euler equations of inviscid compressible fluid flow. A conservative formulation is proposed based on one- and two-dimensional cell-averaging and reconstruction procedures, which employ a staggered mesh of Gauss-Chebyshev and Gauss-Lobatto-Chebyshev collocation points. Particular emphasis is placed on the construction of robust boundary and interfacial conditions in one and two dimensions. It is demonstrated through shock-tube problems and two-dimensional simulations that the proposed algorithm leads to stable, non-oscillatory solutions of high accuracy. Of particular importance is the fact that dispersion errors are minimal, as shown through experiments. From the operational point of view, casting the method in a spectral element formulation provides flexibility in the discretization, since a variable number of macro-elements or collocation points per element can be employed to accommodate both accuracy and geometric requirements.
Hejranfar, Kazem; Parseh, Kaveh
2017-09-01
The preconditioned characteristic boundary conditions based on the artificial compressibility (AC) method are implemented at artificial boundaries for the solution of two- and three-dimensional incompressible viscous flows in the generalized curvilinear coordinates. The compatibility equations and the corresponding characteristic variables (or the Riemann invariants) are mathematically derived and then applied as suitable boundary conditions in a high-order accurate incompressible flow solver. The spatial discretization of the resulting system of equations is carried out by the fourth-order compact finite-difference (FD) scheme. In the preconditioning applied here, the value of AC parameter in the flow field and also at the far-field boundary is automatically calculated based on the local flow conditions to enhance the robustness and performance of the solution algorithm. The code is fully parallelized using the Concurrency Runtime standard and Parallel Patterns Library (PPL) and its performance on a multi-core CPU is analyzed. The incompressible viscous flows around a 2-D circular cylinder, a 2-D NACA0012 airfoil and also a 3-D wavy cylinder are simulated and the accuracy and performance of the preconditioned characteristic boundary conditions applied at the far-field boundaries are evaluated in comparison to the simplified boundary conditions and the non-preconditioned characteristic boundary conditions. It is indicated that the preconditioned characteristic boundary conditions considerably improve the convergence rate of the solution of incompressible flows compared to the other boundary conditions and the computational costs are significantly decreased.
Wang, Ziyin; Liu, Mandan; Cheng, Yicheng; Wang, Rubin
2017-06-01
In this paper, a dynamical recurrent artificial neural network (ANN) is proposed and studied. Inspired by recent research in neuroscience, we introduced nonsynaptic coupling to form a dynamical component of the network. We mathematically proved that, with adequate neurons provided, this dynamical ANN model is capable of approximating any continuous dynamic system with an arbitrarily small error in a limited time interval. Its extremely concise Jacobian matrix makes the local stability easy to control. We designed this ANN for fitting and forecasting dynamic data and obtained satisfactory results in simulation. The fitting performance is also compared with those of both the classic dynamic ANN and state-of-the-art models. Sufficient trials and statistical results indicated that our model is superior to those compared. Moreover, we proposed a robust approximation problem, which asks the ANN to approximate a cluster of input-output data pairs over large ranges and to forecast the output of the system under previously unseen input. Our model and the learning scheme proposed in this paper successfully solved this problem, and through this, the approximation becomes much more robust and adaptive to noise, perturbation and low-order harmonic waves. This approach is in effect an efficient method for compressing massive external data of a dynamic system into the weights of the ANN.
Boldingh, Anne Marthe; Jensen, Thomas Hagen; Bjørbekk, Ane Torvik; Solevåg, Anne Lee; Nakstad, Britt
2016-10-01
To assess the development of objective, subjective and indirect measures of fatigue during simulated infant cardiopulmonary resuscitation (CPR) with two different methods. Using a neonatal manikin, 17 subject-pairs were randomized in a crossover design to provide 5 min of CPR with a 3:1 chest compression (CC) to ventilation (C:V) ratio and with continuous CCs at a rate of 120 min(-1) with asynchronous ventilations (CCaV-120). We measured participants' changes in heart rate (HR) and mean arterial pressure (MAP); perceived level of fatigue on a validated Likert scale; and manikin CC measures. CCaV-120 compared with a 3:1 C:V ratio resulted in a change during 5 min of CPR in HR of 49 versus 40 bpm (p = 0.01), and in MAP of 1.7 versus -2.8 mmHg (p = 0.03); fatigue rated on a Likert scale was 12.9 versus 11.4 (p = 0.2); and there was a significant decay in CC depth after 90 s (p = 0.03). The results indicate a trend toward more fatigue during simulated CPR with CCaV-120 compared to the recommended 3:1 C:V CPR. These results support current guidelines.
Application of a finite element method to the calculation of compressible subsonic flows
International Nuclear Information System (INIS)
Montagne, J.L.
1980-01-01
Accidental transients in nuclear reactors require two-phase flow calculations in complicated geometries. In the present case, the study has been limited to a homogeneous two-dimensional flow model. One obtains equations analogous to those for a compressible gas. The two-phase nature leads to sudden variations of specific mass as a function of pressure and enthalpy. In practice, the flows are in a transport regime, which is why a stable discretization scheme for the hyperbolic system of Euler equations has been sought. In order to take the thermal phenomena into account, the natural variables (flow rate, pressure, enthalpy) were kept and the equations were used in their conservative form. A Galerkin method was used to solve the momentum conservation equation. The space to which the flow rates belong is subject to a matching condition: the normal component of these vectors is continuous at the boundary between elements. The pressure, enthalpy and specific mass, in contrast, are discontinuous between two elements. Correspondences must be established between these two types of discretization. The program put into operation uses a discretization of lowest order and has been conceived for processing time steps conditioned only by the flow speed. It has been tested in two cases where the thermal phenomena are important: cool liquid introduced into vapor, and heating along a plate [fr]
Contributions to HEVC Prediction for Medical Image Compression
Guarda, André Filipe Rodrigues
2016-01-01
Medical imaging technology and applications are continuously evolving, dealing with images of increasing spatial and temporal resolutions, which allow easier and more accurate medical diagnosis. However, this increase in resolution demands a growing amount of data to be stored and transmitted. Despite the high coding efficiency achieved by the most recent image and video coding standards in lossy compression, they are not well suited for quality-critical medical image compressi...
Directory of Open Access Journals (Sweden)
Ngamrayu Ngamdokmai
2017-08-01
A herbal compress used in Thai massage has been modified for use in cellulite treatment. Its main active ingredients are ginger, black pepper, java long pepper, tea and coffee. The objective of this study was to develop and validate an HPLC-QTOF-MS method for determining its active compounds, i.e., caffeine, 6-gingerol and piperine, in raw materials as well as in the formulation, together with the flavouring agent camphor. The four compounds were chromatographically separated. The analytical method was validated for selectivity, intra- and inter-day precision, accuracy and matrix effect. The results showed that the herbal compress contained caffeine (2.16 mg/g), camphor (106.15 mg/g), 6-gingerol (0.76 mg/g) and piperine (4.19 mg/g). The chemical stability study revealed that herbal compresses retained >80% of their active compounds after 1 month of storage at ambient conditions. Our method can be used for quality control of the herbal compress and its raw materials.
International Nuclear Information System (INIS)
Masoud Ziaei-Rad
2010-01-01
In this paper, a two-dimensional numerical scheme is presented for the simulation of turbulent, viscous, transient compressible flows in the simultaneously developing hydraulic and thermal boundary layer region. The numerical procedure is a finite-volume-based finite-element method applied to unstructured grids. This combination together with a new method applied for the boundary conditions allows for accurate computation of the variables in the entrance region and for a wide range of flow fields from subsonic to transonic. The Roe-Riemann solver is used for the convective terms, whereas the standard Galerkin technique is applied for the viscous terms. A modified κ-ε model with a two-layer equation for the near-wall region combined with a compressibility correction is used to predict the turbulent viscosity. Parallel processing is also employed to divide the computational domain among the different processors to reduce the computational time. The method is applied to some test cases in order to verify the numerical accuracy. The results show significant differences between incompressible and compressible flows in the friction coefficient, Nusselt number, shear stress and the ratio of the compressible turbulent viscosity to the molecular viscosity along the developing region. A transient flow generated after an accidental rupture in a pipeline was also studied as a test case. The results show that the present numerical scheme is stable, accurate and efficient enough to solve the problem of transient wall-bounded flow.
Development and validation of dissolution method for carvedilol compression-coated tablets
Directory of Open Access Journals (Sweden)
Ritesh Shah
2011-12-01
The present study describes the development and validation of a dissolution method for carvedilol compression-coated tablets. The dissolution test was performed using a TDT-06T dissolution apparatus. Based on the physiological conditions of the body, 0.1 N hydrochloric acid was used as the dissolution medium and release was monitored for 2 hours to verify the immediate-release pattern of the drug at acidic pH, followed by pH 6.8 citric-phosphate buffer for 22 hours, to simulate a sustained-release pattern in the intestine. The influences of rotation speed and surfactant concentration in the medium were evaluated. Samples were analysed by a validated UV-visible spectrophotometric method at 286 nm. 1% sodium lauryl sulphate (SLS) was found to be optimum for improving carvedilol solubility in pH 6.8 citric-phosphate buffer. Analysis of variance showed no significant difference between the results obtained at 50 and 100 rpm. A discriminating dissolution method was thus successfully developed for carvedilol compression-coated tablets. The conditions that allowed dissolution determination were: USP type I apparatus at 100 rpm, containing 1000 ml of 0.1 N HCl for 2 hours, followed by pH 6.8 citric-phosphate buffer with 1% SLS for 22 hours, at 37.0 ± 0.5 °C. Samples were analysed by the UV spectrophotometric method and validated as per ICH guidelines.
Genetic optimization of magneto-optic Kerr effect in lossy cavity-type magnetophotonic crystals
Energy Technology Data Exchange (ETDEWEB)
Ghanaatshoar, M., E-mail: m-ghanaat@cc.sbu.ac.i [Laser and Plasma Research Institute, Shahid Beheshti University, G.C., Evin 1983963113, Tehran (Iran, Islamic Republic of); Alisafaee, H. [Laser and Plasma Research Institute, Shahid Beheshti University, G.C., Evin 1983963113, Tehran (Iran, Islamic Republic of)
2011-07-15
We have demonstrated an optimization approach to obtain desired magnetophotonic crystals (MPCs) composed of a lossy magnetic layer (TbFeCo) placed within a multilayer structure. The approach is an amalgamation of a 4×4 transfer matrix method and a genetic algorithm. Our objective is to enhance the magneto-optic Kerr effect of TbFeCo at the short visible wavelength of 405 nm. Through the optimization approach, MPC structures are found that meet definite criteria on the amount of reflectivity and Kerr rotation. The resulting structures are fitted more than 99.9% to the optimization criteria. Computation of the internal electric field distribution shows energy localization in the vicinity of the magnetic layer, which is responsible for increased light-matter interaction and the consequent enhanced magneto-optic Kerr effect. The versatility of our approach is also exhibited by examining and optimizing several MPC structures. - Research highlights: Structures comprising a highly absorptive TbFeCo layer are designed to work for data storage applications at 405 nm. The optimization algorithm resulted in structures fitted 99.9% to the design criteria. More than 10 structures were found exhibiting a magneto-optical response of about 1° rotation and 20% reflection. The ratio of the Kerr rotation to the Kerr ellipticity is enhanced by a factor of 30.
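The paper couples a 4×4 transfer matrix method (needed for magneto-optic anisotropy) with a genetic algorithm. As a hedged, simplified illustration only, dropping the magneto-optic terms and using the standard 2×2 characteristic-matrix formalism for isotropic layers at normal incidence, the reflectivity part of such a fitness evaluation might look like:

```python
import cmath

def layer_matrix(n_complex, d_nm, wavelength_nm):
    """2x2 characteristic matrix of one homogeneous layer at normal
    incidence (isotropic simplification of the 4x4 magneto-optic case)."""
    phi = 2 * cmath.pi * n_complex * d_nm / wavelength_nm  # phase thickness
    return [[cmath.cos(phi), 1j * cmath.sin(phi) / n_complex],
            [1j * n_complex * cmath.sin(phi), cmath.cos(phi)]]

def stack_reflectance(layers, n_in=1.0, n_out=1.5, wavelength_nm=405.0):
    """Reflectance of a multilayer given as a list of (refractive index,
    thickness in nm): multiply the layer matrices front-to-back, then
    apply the standard admittance formula r = (n_in*B - C)/(n_in*B + C)."""
    m = [[1, 0], [0, 1]]
    for n, d in layers:
        lm = layer_matrix(n, d, wavelength_nm)
        m = [[m[0][0] * lm[0][0] + m[0][1] * lm[1][0],
              m[0][0] * lm[0][1] + m[0][1] * lm[1][1]],
             [m[1][0] * lm[0][0] + m[1][1] * lm[1][0],
              m[1][0] * lm[0][1] + m[1][1] * lm[1][1]]]
    b = m[0][0] + m[0][1] * n_out
    c = m[1][0] + m[1][1] * n_out
    r = (n_in * b - c) / (n_in * b + c)
    return abs(r) ** 2
```

A genetic algorithm would repeatedly call such an evaluation on candidate layer stacks and score them against the target reflectivity (the real fitness in the paper also includes the Kerr rotation, which requires the full 4×4 treatment of the absorbing TbFeCo layer).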
Coherent control of long-distance steady-state entanglement in lossy resonator arrays
Angelakis, D. G.; Dai, L.; Kwek, L. C.
2010-07-01
We show that coherent control of the steady-state long-distance entanglement between pairs of cavity-atom systems in an array of lossy and driven coupled resonators is possible. The cavities are doped with atoms and are connected through waveguides, other cavities or fibers, depending on the implementation. We find that the steady-state entanglement can be coherently controlled by tuning the phase difference between the driving fields. It can also be surprisingly high in spite of the pumps being classical fields. For implementations where the connecting element can be a fiber, long-distance steady-state quantum correlations can be established. Furthermore, the maximum entanglement for any pair is achieved when their direct coupling is much smaller than their individual couplings to the third party. This effect is reminiscent of the establishment of coherence between otherwise uncoupled atomic levels using classical coherent fields. We suggest a method to measure this entanglement by analyzing the correlations of the photons emitted from the array, and we also analyze the above results for a range of system parameter values, different network geometries and possible implementation technologies.
Thin Foil Acceleration Method for Measuring the Unloading Isentropes of Shock-Compressed Matter
International Nuclear Information System (INIS)
Asay, J.R.; Chhabildas, L.C.; Fortov, V.E.; Kanel, G.I.; Khishchenko, K.V.; Lomonosov, I.V.; Mehlhorn, T.; Razorenov, S.V.; Utkin, A.V.
1999-01-01
This work has been performed as part of the search for possible ways to utilize the capabilities of laser and particle beam techniques in shock wave and equation of state physics. The peculiarity of these techniques is that we have to deal with micron-thick targets and incident shock wave parameters that are not well reproducible, so all measurements should be of high resolution and be done in one shot. Besides the Hugoniots, the experimental basis for creating the equations of state includes isentropes corresponding to the unloading of shock-compressed matter. Experimental isentrope data are most important in the region of vaporization. With guns or explosive facilities, the unloading isentrope is recovered from a series of experiments in which the shock wave parameters in plates of standard low-impedance materials placed behind the sample are measured [1,2]. The specific internal energy and specific volume are calculated from the measured p(u) release curve, which corresponds to the Riemann integral. This approach is not quite suitable for experiments with beam techniques, where the incident shock waves are not well reproducible. The thick foil method [3] provides a few experimental points on the isentrope in one shot. When a higher shock impedance foil is placed on the surface of the material studied, the release phase occurs in steps, whose durations correspond to the time for the shock wave to go back and forth in the foil. The velocity during the different steps, combined with knowledge of the Hugoniot of the foil, allows us to determine a few points on the isentropic unloading curve. However, the method becomes insensitive when the low pressure range of vaporization is reached in the course of the unloading. The isentrope in this region can be measured by recording the smooth acceleration of a thin witness plate foil. With the mass of the foil known, measurements of the foil acceleration give the vapor pressure.
International Nuclear Information System (INIS)
Moravie, Philippe
1997-01-01
Today, in the digitized satellite image domain, the need for high dimensions is increasing considerably. To transmit or store such images (more than 6000 by 6000 pixels), we need to reduce their data volume, and so we have to use real-time image compression techniques. The large amount of computation required by image compression algorithms prohibits the use of common sequential processors, in favor of parallel computers. The study presented here deals with the parallelization of a very efficient image compression scheme based on three techniques: Wavelet Transform (WT), Vector Quantization (VQ) and Entropic Coding (EC). First, we studied and implemented the parallelism of each algorithm, in order to determine the architectural characteristics needed for real-time image compression. Then, we defined eight parallel architectures: 3 for the Mallat algorithm (WT), 3 for Tree-Structured Vector Quantization (VQ) and 2 for Huffman Coding (EC). As our system has to be multi-purpose, we chose 3 global architectures among all of the 3x3x2 systems available. Because, for technological reasons, real-time performance is not reached for every combination of compression parameters, we also defined and evaluated two algorithmic optimizations: fixed-point precision and merging the entropic coding into the vector quantization. As a result, we defined a new multi-purpose multi-SMIMD parallel machine, able to compress digitized satellite images in real time. The question of the best-suited architecture for real-time image compression was answered by presenting 3 parallel machines, among which one is multi-purpose, embedded, and might be used for other applications on board. (author) [fr]
Branderhorst, Woutjan; de Groot, Jerry E; van Lier, Monique G J T B; Highnam, Ralph P; den Heeten, Gerard J; Grimbergen, Cornelis A
2017-08-01
To assess the accuracy of two methods of determining the contact area between the compression paddle and the breast in mammography. An accurate method to determine the contact area is essential to accurately calculate the average compression pressure applied by the paddle. For a set of 300 breast compressions, we measured the contact areas between breast and paddle, both capacitively using a transparent foil with indium-tin-oxide (ITO) coating attached to the paddle, and retrospectively from the obtained mammograms using image processing software (Volpara Enterprise, algorithm version 1.5.2). A gold standard was obtained from video images of the compressed breast. During each compression, the breast was illuminated from the sides in order to create a dark shadow on the video image where the breast was in contact with the compression paddle. We manually segmented the shadows captured at the time of x-ray exposure and measured their areas. We found a strong correlation between the manual segmentations and the capacitive measurements [r = 0.989, 95% CI (0.987, 0.992)] and between the manual segmentations and the image processing software [r = 0.978, 95% CI (0.972, 0.982)]. Bland-Altman analysis showed a bias of -0.0038 dm² for the capacitive measurement (SD 0.0658, 95% limits of agreement [-0.1329, 0.1252]) and -0.0035 dm² for the image processing software [SD 0.0962, 95% limits of agreement (-0.1921, 0.1850)]. The size of the contact area between the paddle and the breast can be determined accurately and precisely, both in real-time using the capacitive method, and retrospectively using image processing software. This result is beneficial for scientific research, data analysis and quality control systems that depend on one of these two methods for determining the average pressure on the breast during mammographic compression. © 2017 Sigmascreening B.V. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
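The Bland-Altman bias and 95% limits of agreement quoted above follow the standard recipe: the mean of the paired differences, plus or minus 1.96 times their standard deviation. A minimal sketch, with invented area values standing in for the study's data:

```python
import statistics

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurement series."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# synthetic contact areas in dm^2 (hypothetical values, not the study's data)
manual     = [1.02, 1.35, 0.98, 1.20, 1.11]
capacitive = [1.00, 1.38, 0.97, 1.22, 1.09]
bias, (lo, hi) = bland_altman(manual, capacitive)
print(bias, lo, hi)   # mean difference and its 95% limits of agreement
```

With the study's figures (SD 0.0658), the same formula reproduces the quoted limits: -0.0038 - 1.96 * 0.0658 ≈ -0.1328.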
Directory of Open Access Journals (Sweden)
Yuyang Song
2018-06-01
Its high specific strength and stiffness at lower cost make discontinuous fiber-reinforced thermoplastic (FRT) materials an ideal choice for lightweight applications in the automotive industry. Compression molding is one of the preferred manufacturing processes for such materials, as it offers the opportunity to maintain a longer fiber length and higher-volume production. In the past, we have demonstrated that compression molding of FRT in bulk form can be simulated by treating melt flow as a continuum using the conservation of mass and momentum equations. However, the compression molding of such materials in sheet form using a similar approach does not work well. The assumption of melt flow as a continuum does not hold for such deformation processes. To address this challenge, we have developed a novel simulation approach. First, the draping of the sheet was simulated as a structural deformation using the explicit finite element approach. Next, the draped shape was compressed using fluid mechanics equations. The proposed method was verified by building a physical part and comparing the predicted fiber orientation and warpage with measurements performed on the physical parts. The developed method and tools are expected to help in expediting the development of FRT parts, which will help achieve lightweight targets in the automotive industry.
Energy Technology Data Exchange (ETDEWEB)
Yoon, T.S.; Kim, J.S. [Changwon National University, Changwon (Korea); Lim, Y.H. [Visionite Co., Ltd., Seoul (Korea); Yoo, S.K. [Yonsei University, Seoul (Korea)
2003-05-01
In an emergency telemedicine system such as the High-quality Multimedia based Real-time Emergency Telemedicine (HMRET) service, it is very important to examine the status of the patient continuously using multimedia data including the biological signals (ECG, BP, respiration, SpO2) of the patient. In order to transmit these data in real time through communication means of limited transmission capacity, it is also necessary to compress the biological data besides the other multimedia data. For this purpose, we investigate and compare ECG compression techniques in the time domain and in the wavelet transform domain, and present an effective lossless compression method for the biological signals using the JPEG Huffman table for an emergency telemedicine system. For the HMRET service, we developed the lossless compression and reconstruction program for the biological signals in MSVC++ 6.0 using the DPCM method and the JPEG Huffman table, and tested it in an Internet environment. (author). 15 refs., 17 figs., 7 tabs.
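The DPCM stage mentioned above replaces each sample by its difference from the previous one, concentrating values near zero so that the subsequent Huffman table compresses well. A minimal sketch of the differencing and its lossless inverse (the signal values are illustrative, not real ECG data):

```python
def dpcm_encode(samples):
    """Replace each sample with its difference from the previous one."""
    residuals, prev = [], 0
    for s in samples:
        residuals.append(s - prev)
        prev = s
    return residuals

def dpcm_decode(residuals):
    """Invert the differencing by a running sum."""
    samples, acc = [], 0
    for r in residuals:
        acc += r
        samples.append(acc)
    return samples

ecg = [512, 515, 521, 530, 528, 520, 514]   # illustrative 10-bit samples
res = dpcm_encode(ecg)
print(res)                                   # after the first sample, values cluster near zero
assert dpcm_decode(res) == ecg               # lossless round trip
```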
Directory of Open Access Journals (Sweden)
Konsti Juho
2012-03-01
Background: Digital whole-slide scanning of tissue specimens produces large images demanding increasing storage capacity. To reduce the need for extensive data storage systems, image files can be compressed and scaled down. The aim of this article is to study the effect of different levels of image compression and scaling on automated image analysis of immunohistochemical (IHC) stainings and automated tumor segmentation. Methods: Two tissue microarray (TMA) slides containing 800 samples of breast cancer tissue immunostained against Ki-67 protein and two TMA slides containing 144 samples of colorectal cancer immunostained against EGFR were digitized with a whole-slide scanner. The TMA images were JPEG2000 wavelet compressed with four compression ratios: lossless, and 1:12, 1:25 and 1:50 lossy compression. Each of the compressed breast cancer images was furthermore scaled down to 1:1, 1:2, 1:4, 1:8, 1:16, 1:32, 1:64 or 1:128. Breast cancer images were analyzed using an algorithm that quantitates the extent of staining in Ki-67 immunostained images, and EGFR immunostained colorectal cancer images were analyzed with an automated tumor segmentation algorithm. The automated tools were validated by comparing the results from losslessly compressed and non-scaled images with results from conventional visual assessments. Percentage agreement and kappa statistics were calculated between results from compressed and scaled images and results from lossless and non-scaled images. Results: Both of the studied image analysis methods showed good agreement between visual and automated results. In the automated IHC quantification, an agreement of over 98% and a kappa value of over 0.96 were observed between losslessly compressed and non-scaled images and combined compression ratios up to 1:50 and scaling down to 1:8. In automated tumor segmentation, an agreement of over 97% and a kappa value of over 0.93 were observed between losslessly compressed images and
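The percentage agreement and kappa statistics used above take only a few lines to compute. A minimal Cohen's kappa sketch over binary image-level decisions (the ratings are invented for illustration):

```python
def percent_agreement(a, b):
    """Fraction of paired ratings that match, as a percentage."""
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Observed agreement corrected for agreement expected by chance."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in set(a) | set(b))
    return (po - pe) / (1 - pe)

ref  = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # e.g. decisions on lossless images
test = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]   # e.g. decisions on compressed images
print(percent_agreement(ref, test), round(cohens_kappa(ref, test), 2))  # → 90.0 0.78
```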
DNABIT Compress – Genome compression algorithm
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-01
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences, based on a novel algorithm that assigns binary bits to smaller segments of DNA bases in order to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences, even for large genomes. Significantly better compression results show that the “DNABIT Compress” algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
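The idea of assigning short binary codes to DNA bases can be illustrated with a fixed 2-bit code. This is a simplified sketch of the principle only, not the DNABIT scheme itself, which additionally assigns unique bit codes to exact and reverse repeat fragments and is thereby able to go below 2 bits/base:

```python
# Fixed 2-bit coding of the four bases (an illustrative baseline, not DNABIT itself)
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq):
    """Pack a DNA string into bytes, 4 bases per byte (2 bits/base)."""
    out = bytearray()
    acc, nbits = 0, 0
    for base in seq:
        acc = (acc << 2) | CODE[base]
        nbits += 2
        if nbits == 8:
            out.append(acc)
            acc, nbits = 0, 0
    if nbits:                        # pad the final partial byte with zeros
        out.append(acc << (8 - nbits))
    return bytes(out)

packed = pack("ACGTACGTACGT")
print(len(packed) * 8 / 12)   # → 2.0 bits/base, versus 8 bits/base for ASCII
```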
Optimization of the segmented method for optical compression and multiplexing system
Al Falou, Ayman
2002-05-01
Because of the constant increasing demands of images exchange, and despite the ever increasing bandwidth of the networks, compression and multiplexing of images is becoming inseparable from their generation and display. For high resolution real time motion pictures, electronic performing of compression requires complex and time-consuming processing units. On the contrary, by its inherent bi-dimensional character, coherent optics is well fitted to perform such processes that are basically bi-dimensional data handling in the Fourier domain. Additionally, the main limiting factor that was the maximum frame rate is vanishing because of the recent improvement of spatial light modulator technology. The purpose of this communication is to benefit from recent optical correlation algorithms. The segmented filtering used to store multi-references in a given space bandwidth product optical filter can be applied to networks to compress and multiplex images in a given bandwidth channel.
Energy Technology Data Exchange (ETDEWEB)
York, A.R. II [Sandia National Labs., Albuquerque, NM (United States). Engineering and Process Dept.
1997-07-01
The material point method (MPM) is an evolution of the particle in cell method where Lagrangian particles or material points are used to discretize the volume of a material. The particles carry properties such as mass, velocity, stress, and strain and move through a Eulerian or spatial mesh. The momentum equation is solved on the Eulerian mesh. Modifications to the material point method are developed that allow the simulation of thin membranes, compressible fluids, and their dynamic interactions. A single layer of material points through the thickness is used to represent a membrane. The constitutive equation for the membrane is applied in the local coordinate system of each material point. Validation problems are presented and numerical convergence is demonstrated. Fluid simulation is achieved by implementing a constitutive equation for a compressible, viscous, Newtonian fluid and by solution of the energy equation. The fluid formulation is validated by simulating a traveling shock wave in a compressible fluid. Interactions of the fluid and membrane are handled naturally with the method. The fluid and membrane communicate through the Eulerian grid on which forces are calculated due to the fluid and membrane stress states. Validation problems include simulating a projectile impacting an inflated airbag. In some impact simulations with the MPM, bodies may tend to stick together when separating. Several algorithms are proposed and tested that allow bodies to separate from each other after impact. In addition, several methods are investigated to determine the local coordinate system of a membrane material point without relying upon connectivity data.
Method and device for the powerful compression of laser-produced plasmas for nuclear fusion
International Nuclear Information System (INIS)
Hora, H.
1975-01-01
According to the invention, more than 10% of the laser energy is converted into mechanical energy of compression, in that the compression is produced by non-linear excessive radiation pressure. The temporal and local spectral and intensity distribution of the laser pulse must be controlled. The focussed laser beams must increase to over 10^15 W/cm^2 in less than 10^-9 seconds, and the time variation of the intensities must be carried out so that the dynamic absorption of the outer plasma corona by rippling consumes less than 90% of the laser energy. (GG) [de]
MP3 compression of Doppler ultrasound signals.
Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W
2003-01-01
The effect of lossy MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: 1. phase quadrature and 2. stereo audio directional output. A total of eleven 10-s acquisitions of the Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in parentheses): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power and the ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression of digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology
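The quoted compression ratios follow directly from the bitrates: stereo PCM at 44.1 kHz and 16 bits per sample occupies about 1411 kbps (rounded to 1400 kbps in the abstract), so the MP3 grades correspond to roughly 11:1, 22:1 and 44:1. A quick arithmetic check:

```python
# uncompressed stereo PCM bitrate at the acquisition settings in the study
source_kbps = 44100 * 16 * 2 / 1000          # sample rate * bits/sample * channels
for mp3_kbps in (128, 64, 32):
    print(f"{mp3_kbps} kbps -> {round(source_kbps / mp3_kbps)}:1")
# → 128 kbps -> 11:1
#   64 kbps -> 22:1
#   32 kbps -> 44:1
```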
International Nuclear Information System (INIS)
Amanifard, N.; Haghighat Namini, V.
2012-01-01
In this study, a Modified Compressible Smoothed Particle Hydrodynamics method is introduced which is applicable to problems involving shock wave structures and elastic-plastic deformations of solids. The algorithm of the method is based on an approach that discretizes the momentum equation into three parts, solves each part separately, and calculates their effects on the velocity field and displacement of particles. The most distinctive feature of the method is that it exactly removes the artificial viscosity of the formulations and shows good compatibility with other reasonable numerical methods, without any rigorous numerical fractures or tensile instabilities, and without requiring any extra modifications. Two types of problems involving elastic-plastic deformations and shock waves are presented here to demonstrate the capability of Modified Compressible Smoothed Particle Hydrodynamics in simulating such problems and its ability to capture shocks. The problems proposed here are low- and high-velocity impacts between aluminum projectiles and semi-infinite aluminum beams. An elastic-perfectly plastic model is chosen as the constitutive model of the aluminum, and the results of the simulations are compared with other reasonable studies of these cases.
A stable penalty method for the compressible Navier-Stokes equations: I. Open boundary conditions
DEFF Research Database (Denmark)
Hesthaven, Jan; Gottlieb, D.
1996-01-01
The purpose of this paper is to present asymptotically stable open boundary conditions for the numerical approximation of the compressible Navier-Stokes equations in three spatial dimensions. The treatment uses the conservation form of the Navier-Stokes equations and utilizes linearization...
Compressed ECG biometric: a fast, secured and efficient method for identification of CVD patient.
Sufi, Fahim; Khalil, Ibrahim; Mahmood, Abdun
2011-12-01
Adoption of compression technology is often required for wireless cardiovascular monitoring, due to the enormous size of the electrocardiography (ECG) signal and the limited bandwidth of the Internet. However, with present research on ECG-based biometric techniques, compressed ECG must be decompressed before performing human identification. This additional decompression step creates a significant processing delay for the identification task. This becomes an obvious burden on a system if it must be done for trillions of compressed ECGs per hour by a hospital. Even though the hospital might be able to build an expensive infrastructure to handle such exuberant processing, for small intermediate nodes in a multihop network, identification preceded by decompression is challenging. In this paper, we report a technique by which a person can be identified directly from his or her compressed ECG. This technique completely obviates the decompression step and therefore makes biometric identification less demanding for the smaller nodes in a multihop network. The biometric template created by this new technique is smaller than existing ECG-based biometric templates as well as other forms of biometrics such as face, finger and retina (up to 8302 times smaller than a face template and 9 times smaller than an existing ECG-based biometric template). The smaller template substantially reduces the one-to-many matching time for biometric recognition, resulting in a faster biometric authentication mechanism.
A novel method for fabrication of biodegradable scaffolds with high compression moduli
DeGroot, JH; Kuijper, HW; Pennings, AJ
1997-01-01
It has been previously shown that, when used for meniscal reconstruction, porous copoly(L-lactide/epsilon-caprolactone) implants enhanced healing of meniscal lesions owing to their excellent adhesive properties. However, it appeared that the materials had an insufficient compression modulus to
Lewis, Michael
1994-01-01
Statistical encoding techniques enable the reduction of the number of bits required to encode a set of symbols, and are derived from their probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange Multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
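A minimal Huffman construction makes the prediction point concrete: residuals with a skewed distribution receive shorter codes on average than a fixed-length encoding would need. A sketch (the frequency table of prediction corrections is illustrative, not from the paper):

```python
import heapq

def huffman_lengths(freqs):
    """Code length per symbol for a {symbol: frequency} table."""
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)          # two least frequent subtrees
        fb, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}  # one level deeper
        heapq.heappush(heap, (fa + fb, tie, merged))
        tie += 1
    return heap[0][2]

# skewed residuals after prediction (illustrative frequencies per 100 samples)
freqs = {0: 60, 1: 20, -1: 15, 2: 5}
lengths = huffman_lengths(freqs)
avg_bits = sum(lengths[s] * f for s, f in freqs.items()) / sum(freqs.values())
print(avg_bits)   # → 1.6, well under the 2 bits a fixed-length code would need
```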
Lossless Compression of Digital Images
DEFF Research Database (Denmark)
Martins, Bo
Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly, a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. A number of general-purpose coders … version that is substantially faster than its precursors and brings it close to the multi-pass coders in compression performance. Handprinted characters are of unequal complexity; recent work by Singer and Tishby demonstrates that utilizing the physiological process of writing one can synthesize cursive … The feature vector of a bitmap initially constitutes a lossy representation of the contour(s) of the bitmap. The initial feature space is usually too large but can be reduced automatically by use of a predictive code length or predictive error criterion.
Tantsufestival Momentum 2008 : Alatskivi lossi tunded, mõtted, emotsioonid / Reet Kruup
Kruup, Reet
2008-01-01
On the dance festival Momentum 2008, held 25-27 April as part of the Alatskivi Month of Fine Arts. Alongside the dance groups active in Alatskivi municipality, the festival featured the dance theatre Tee Kuubis, the contact dance group Kontakt from Tartu, and students of the performing arts department of the University of Tartu Viljandi Culture Academy, who performed the improvisational dance piece "Alatskivi lossi tunded, mõtted, emotsioonid..." ("The feelings, thoughts and emotions of Alatskivi castle...").
A lossy graph model for delay reduction in generalized instantly decodable network coding
Douik, Ahmed S.; Sorour, Sameh; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim
2014-01-01
, arising from lossy feedback events, when the expected decoding delay of XORing them among themselves or with other certain packets is lower than that expected when sending these packets separately. We compare the decoding delay performance of LG-IDNC and G
Lossi hoovis mälestati veretöö ohvreid / Veljo Kuivjõgi
Kuivjõgi, Veljo, 1951-
2006-01-01
On the presentation of Endel Püüa's book "Punane terror Saaremaal 1941. aastal" ("Red Terror on Saaremaa in 1941"; Saaremaa: Saaremaa Muuseum, 2006) and on the memorial day dedicated to all the victims who were executed in the courtyard of Kuressaare castle in the summer of 1941
Directory of Open Access Journals (Sweden)
R. MEDEIROS
This study was conducted with the aim of evaluating the influence of different methods of end surface preparation on compressive strength test specimens. Four different methods were compared: a mechanical wear method through grinding using a diamond wheel, established by NBR 5738; a mechanical wear method using a diamond saw, established by NM 77; an unbonded system using neoprene pads in metal retainer rings, established by C1231; and a bonded capping method with sulfur mortar, established by NBR 5738 and NM 77. To develop this research, 4 concrete mixes were produced at different strength levels, 2 in group 1 and 2 in group 2 of the strength classes established by NBR 8953. Group 1 consists of classes C20 to C50, in 5 MPa increments, also known as normal-strength concrete. Group 2 comprises classes C55 and C60 to C100, in 10 MPa increments, also known as high-strength concrete. Compression tests were carried out at 7 and 28 days for the 4 surface preparation methods. The results of this study indicate that the method established by NBR 5738 is the most effective among the 4 strength levels considered, since it presents the lowest dispersion of test values, measured by the coefficient of variation, and, in almost all cases, the highest mean rupture strength. The method described by NBR 5738 achieved the expected strength level in all tests.
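The dispersion measure used above to rank the methods, the coefficient of variation, is simply the standard deviation expressed as a percentage of the mean. A sketch with invented strength results (not the study's data):

```python
import statistics

def coefficient_of_variation(values):
    """Sample standard deviation as a percentage of the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# hypothetical 28-day strengths (MPa) for one mix under two end preparations
ground_ends = [51.2, 50.8, 51.5, 50.9]   # grinding per NBR 5738
capped_ends = [49.0, 52.3, 48.1, 51.6]   # sulfur capping
print(coefficient_of_variation(ground_ends) < coefficient_of_variation(capped_ends))  # → True
```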
Directory of Open Access Journals (Sweden)
Chung-Liang Chang
2014-01-01
A compressive sensing based array processing method is proposed to lower the complexity and computational load of the array system and to maintain robust anti-jam performance in global navigation satellite system (GNSS) receivers. First, the spatial and temporal compression matrices are multiplied with the array signal, which results in a small-size array system. Second, the 2-dimensional (2D) minimum variance distortionless response (MVDR) beamformer is employed in the proposed system to mitigate narrowband and wideband interference simultaneously. An iterative process is performed to find the optimal spatial and temporal gain vectors by the MVDR approach, which enhances the steering gain at the direction of arrival (DOA) of interest. Meanwhile, a null is placed at the DOA of the interference. Finally, a simulated navigation signal is generated offline by a graphical user interface tool and employed in the proposed algorithm. The theoretical analysis of the proposed algorithm is verified against the simulated results.
AbuAlSaud, Moataz
2012-07-01
The purpose of this thesis is to solve the unsteady two-dimensional compressible Navier-Stokes equations on a moving mesh using an implicit-explicit (IMEX) Runge-Kutta scheme. The moving mesh is incorporated into the equations using the Arbitrary Lagrangian-Eulerian (ALE) formulation. The inviscid part of the equations is solved explicitly using a second-order Godunov method, whereas the viscous part is treated implicitly. We simulate subsonic compressible flow over a static NACA-0012 airfoil at different angles of attack. Finally, the moving mesh is examined by oscillating the airfoil harmonically between angles of attack of 0 and 20 degrees. It is observed that the numerical solution matches the experimental and numerical results in the literature to within 20%.
SU-E-J-18: Evaluation of the Effectiveness of Compression Methods in SBRT for Lung.
Liao, Y; Tolekids, G; Yao, R; Templeton, A; Sensakovic, W; Chu, J
2012-06-01
This study aims to evaluate the effectiveness of compression in immobilizing tumors during stereotactic body radiotherapy (SBRT) for lung cancer. Published data have demonstrated larger respiratory motion in the lower lobe than in the upper lobe during normal breathing. We hypothesize that 4DCT-based patient selection and abdominal compression immobilize lung tumor volumes effectively, regardless of their location. We retrospectively reviewed 12 SBRT lung cases treated with Trilogy® (Varian Medical Systems, Palo Alto, CA). Either a compression plate or Vac-Lok™ was used for abdominal compression in the SBRT immobilization system (Body Pro-Lok™, CIVCO) to restrict patients' breathing during CT simulation and treatment delivery. These cases were grouped into 2 categories, lower- and upper-lobe tumors, each with 6 cases. Records for 33 treatments were studied. On each treatment day, the patient was set up to the bony anatomy using a kV-kV match. A CBCT was performed to further set up the patient to the tumor based on soft tissue information. The shifts from the CBCT setup were analyzed as displacement vectors demonstrating the magnitude of the tumor motion relative to the bony anatomy. The mean magnitudes of the displacement vectors for the upper and lower lobes were 3.7±2.7 mm and 4.2±6.3 mm [1 S.D.], respectively. The Wilcoxon rank-sum test indicates that the difference in the displacement vector between the two groups is not statistically significant (p-value = 0.33). The magnitudes of the shifts from CBCT were small, with mean values <5 mm, in SBRT lung treatments. No statistically significant difference was observed in the displacement of tumors between lower and upper lobes. With the limited sample size, this suggests that our current 4DCT screening/abdominal compression approach is effective in restricting respiration-induced tumor motion regardless of its location within the lung. We plan to confirm this result in additional patients. © 2012 American Association of Physicists in Medicine.
Algorithm for Compressing Time-Series Data
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
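The net effect described above, approximating each data block by a truncated Chebyshev series, can be sketched in a few lines. The sampling at Chebyshev nodes and the exponential test signal are illustrative choices for a compact demonstration, not the flight algorithm:

```python
import math

def cheb_coeffs(samples, order):
    """Chebyshev series coefficients from samples taken at Chebyshev nodes."""
    n = len(samples)
    c = [2.0 / n * sum(samples[j] * math.cos(k * math.pi * (j + 0.5) / n)
                       for j in range(n)) for k in range(order + 1)]
    c[0] /= 2.0
    return c

def cheb_eval(c, x):
    """Evaluate sum c_k * T_k(x) on [-1, 1] via T_k(x) = cos(k * acos(x))."""
    return sum(ck * math.cos(k * math.acos(x)) for k, ck in enumerate(c))

n = 32                                               # block (fitting interval) length
nodes = [math.cos(math.pi * (j + 0.5) / n) for j in range(n)]
block = [math.exp(x) for x in nodes]                 # stand-in for instrument data
c = cheb_coeffs(block, 8)                            # keep 9 numbers instead of 32
err = max(abs(cheb_eval(c, x) - y) for x, y in zip(nodes, block))
print(err < 1e-6)   # → True: near-uniform residual, the "equal error property"
```

Truncating a smooth block to a handful of coefficients is exactly what buys compression factors well above two with bounded, nearly uniform error.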
Directory of Open Access Journals (Sweden)
J. Puskely
2010-06-01
The novel approach exploits the principle of conventional two-plane amplitude measurements for the reconstruction of the unknown electric field distribution on the antenna aperture. The method combines global optimization with a compression method. The global optimization method (GO) is used to minimize the functional, and the compression method is used to reduce the number of unknown variables. The algorithm employs the Real-Coded Genetic Algorithm (RCGA) as the global optimization approach. The Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are applied to reduce the number of unknown variables. The pros and cons of these methods for the solution of the problem are investigated and reported. In order to make the algorithm faster, exploitation of amplitudes from a single scanning plane is also discussed. First, the algorithm is used to obtain an initial estimate. Subsequently, the common Fourier iterative algorithm is used to reach the global minimum with sufficient accuracy. The method is examined by measuring a dish antenna.
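Using the DCT to shrink the number of unknowns amounts to optimizing only the first few transform coefficients of the (smooth) aperture field and zeroing the rest. A sketch with a plain DCT-II/DCT-III pair; the one-dimensional Gaussian "field" is invented for illustration:

```python
import math

def dct2(v):
    """DCT-II of a real sequence."""
    n = len(v)
    return [sum(v[j] * math.cos(math.pi * k * (j + 0.5) / n) for j in range(n))
            for k in range(n)]

def idct2(c):
    """Inverse of dct2 (a scaled DCT-III)."""
    n = len(c)
    return [2.0 / n * (c[0] / 2.0 +
            sum(c[k] * math.cos(math.pi * k * (j + 0.5) / n) for k in range(1, n)))
            for j in range(n)]

field = [math.exp(-((j - 16) / 6.0) ** 2) for j in range(32)]  # smooth aperture profile
coeffs = dct2(field)
kept = coeffs[:8] + [0.0] * 24          # GA would search 8 unknowns instead of 32
approx = idct2(kept)
err = max(abs(a - b) for a, b in zip(field, approx))
print(err < 1e-2)   # smooth fields survive aggressive truncation
```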
Directory of Open Access Journals (Sweden)
F. Saez de Adana
2009-01-01
This paper presents an efficient application of the Time-Domain Uniform Theory of Diffraction (TD-UTD) for the analysis of Ultra-Wideband (UWB) mobile communications in indoor environments. The classical TD-UTD formulation is modified to include the contribution of lossy materials and multiple-ray interactions with the environment. The electromagnetic analysis is combined with a ray-tracing acceleration technique to treat realistic and complex environments. The validity of this method is tested with measurements performed inside the Polytechnic building of the University of Alcala, which show good performance of the model for the analysis of UWB propagation.
Jridi, Maher; Alfalou, Ayman
2018-03-01
In this paper, enhancement of an existing optical simultaneous fusion, compression and encryption (SFCE) scheme in terms of real-time requirements, bandwidth occupation and encryption robustness is proposed. We have used an approximate form of the DCT to decrease the computational resources. Then, a novel chaos-based encryption algorithm is introduced in order to achieve the confusion and diffusion effects. In the confusion phase, the Henon map is used for row and column permutations, where the initial condition is related to the original image. Furthermore, the Skew Tent map is employed to generate another random matrix in order to carry out pixel scrambling. Finally, an adaptation of a classical diffusion process scheme is employed to strengthen the security of the cryptosystem against statistical, differential, and chosen-plaintext attacks. Analyses of the key space, histogram, adjacent pixel correlation, sensitivity, and encryption speed of the encryption scheme are provided, and compare favorably with those of the existing crypto-compression system. The proposed method has been found to be friendly to both digital and optical implementation, which facilitates the integration of the crypto-compression system in a very broad range of scenarios.
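The confusion phase described above, a Henon-map-driven row/column permutation, can be sketched as follows. The map parameters (a = 1.4, b = 0.3) and the argsort-based permutation rule are standard choices assumed for illustration, not necessarily the authors' exact construction:

```python
def henon_sequence(n, x, y, a=1.4, b=0.3):
    """Iterate the Henon map and collect the x-coordinates."""
    out = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x
        out.append(x)
    return out

def chaotic_permutation(n, x0, y0):
    """Permutation of range(n) obtained by argsorting a chaotic orbit."""
    orbit = henon_sequence(n, x0, y0)
    return sorted(range(n), key=lambda i: orbit[i])

# the key is the initial condition; in the paper it is derived from the image itself
perm = chaotic_permutation(8, 0.123, 0.3)
rows = list("ABCDEFGH")                   # stand-ins for image rows
scrambled = [rows[i] for i in perm]
inverse = sorted(range(len(perm)), key=lambda i: perm[i])   # invert with the same key
assert [scrambled[i] for i in inverse] == rows              # decryption recovers the rows
print(scrambled)
```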
Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding
Directory of Open Access Journals (Sweden)
Yongjian Nian
2013-01-01
Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined by the constraint of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced for the proposed algorithm to achieve low bit rates. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.
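Two of the building blocks above, scalar quantization with a near-lossless error bound and a multilinear-regression side-information predictor, can be sketched generically. The midtread quantizer and the least-squares fit are standard techniques; the paper's specific rate-distortion machinery and the exact regressor form are not reproduced here.

```python
import numpy as np

def quantize(band, step):
    """Midtread uniform scalar quantizer: integer indices and reconstruction.
    By construction the reconstruction error is at most step/2 (near-lossless)."""
    idx = np.round(band / step).astype(int)
    return idx, idx * step

def side_information(prev_bands, target):
    """Multilinear-regression side information: fit target ~ sum_i w_i*band_i + c
    by least squares from previously decoded bands, as a DSC decoder would."""
    X = np.column_stack([b.ravel() for b in prev_bands] + [np.ones(target.size)])
    w, *_ = np.linalg.lstsq(X, target.ravel(), rcond=None)
    return (X @ w).reshape(target.shape)
```

The quantization step would then be chosen as large as possible while keeping the side-information error within the Slepian-Wolf decoder's correction capability.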
Czech Academy of Sciences Publication Activity Database
Mishra, A. Deepak; Srigyan, M.; Basu, A.; Rokade, P. J.
2015-01-01
Roč. 80, December 2015 (2015), s. 418-424 ISSN 1365-1609 Institutional support: RVO:68145535 Keywords : uniaxial compressive strength * rock indices * fuzzy inference system * artificial neural network * adaptive neuro-fuzzy inference system Subject RIV: DH - Mining, incl. Coal Mining Impact factor: 2.010, year: 2015 http://ac.els-cdn.com/S1365160915300708/1-s2.0-S1365160915300708-main.pdf
Colera, Manuel; Pérez-Saborid, Miguel
2017-09-01
A finite-difference scheme is proposed in this work to compute in the time domain the compressible, subsonic, unsteady flow past an aerodynamic airfoil using linearized potential theory. It improves and extends the original method proposed in this journal by Hariharan, Ping and Scott [1] by considering: (i) a non-uniform mesh, (ii) an implicit time integration algorithm, (iii) a vectorized implementation and (iv) the coupled airfoil dynamics and fluid dynamic loads. First, we have formulated the method for cases in which the airfoil motion is given. The scheme has been tested on well-known problems in unsteady aerodynamics, such as the response to a sudden change of the angle of attack and to a harmonic motion of the airfoil, and has proved to be more accurate and efficient than other finite-difference and vortex-lattice methods found in the literature. Secondly, we have coupled our method to the equations governing the airfoil dynamics in order to numerically solve problems where the airfoil motion is unknown a priori, as happens, for example, in the cases of flutter and divergence of a typical wing section or of a flexible panel. To our knowledge, this is the first self-consistent and easy-to-implement numerical analysis in the time domain of the compressible, linearized coupled dynamics of the (generally flexible) airfoil-fluid system. The results for the particular case of a rigid airfoil show excellent agreement with those reported by other authors, whereas those obtained for the case of a cantilevered flexible airfoil in compressible flow appear to be original or, at least, not well known.
International Nuclear Information System (INIS)
Li Jin; Jin Long-Xu; Zhang Ran-Feng
2013-01-01
Multispectral time delay and integration charge coupled device (TDICCD) image compression requires a low-complexity encoder because it is usually performed on board, where energy and memory are limited. The Consultative Committee for Space Data Systems (CCSDS) has proposed an image data compression (CCSDS-IDC) algorithm which is so far the most widely implemented in hardware. However, it cannot reduce the spectral redundancy in multispectral images. In this paper, we propose a low-complexity improved CCSDS-IDC (ICCSDS-IDC)-based distributed source coding (DSC) scheme for multispectral TDICCD images consisting of a few bands. Our scheme is based on an ICCSDS-IDC approach that uses a bit-plane extractor to parse the differences between the original image and its wavelet-transformed coefficients. The output of the bit-plane extractor is encoded by a first-order entropy coder. A low-density parity-check-based Slepian-Wolf (SW) coder is adopted to implement the DSC strategy. Experimental results on space multispectral TDICCD images show that the proposed scheme significantly outperforms the CCSDS-IDC-based coder in each band.
Choo, Hyunwook; Nam, Hongyeop; Lee, Woojin
2017-12-01
The composition of naturally cemented deposits is very complicated; thus, estimating the maximum shear modulus (Gmax, or shear modulus at very small strains) of cemented sands using the previous empirical formulas is very difficult. The purpose of this experimental investigation is to evaluate the effects of particle size and cement type on the Gmax and unconfined compressive strength (qucs) of cemented sands, with the ultimate goal of estimating Gmax of cemented sands using qucs. Two sands were artificially cemented using Portland cement or gypsum under varying cement contents (2%-9%) and relative densities (30%-80%). Unconfined compression tests and bender element tests were performed, and the results from previous studies of two cemented sands were incorporated in this study. The results of this study demonstrate that the effect of particle size on the qucs and Gmax of four cemented sands is insignificant, and the variation of qucs and Gmax can be captured by the ratio between volume of void and volume of cement. qucs and Gmax of sand cemented with Portland cement are greater than those of sand cemented with gypsum. However, the relationship between qucs and Gmax of the cemented sand is not affected by the void ratio, cement type and cement content, revealing that Gmax of the complex naturally cemented soils with unknown in-situ void ratio, cement type and cement content can be estimated using qucs.
Guthier, C.; Aschenbrenner, K. P.; Buergy, D.; Ehmann, M.; Wenz, F.; Hesser, J. W.
2015-03-01
This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.
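The matching pursuit family of solvers referred to above works by greedily selecting, at each iteration, the dictionary atom most correlated with the current residual. A generic sketch follows; in the planning setting the columns of D would be per-seed dose kernels (computed per AAPM TG-43) and the sparse coefficients the selected seed weights, but those specifics are assumptions, and this is plain matching pursuit rather than the paper's modified variant.

```python
import numpy as np

def matching_pursuit(D, y, n_atoms):
    """Greedy matching pursuit: repeatedly select the dictionary column most
    correlated with the residual and peel off its contribution."""
    residual = np.array(y, dtype=float)
    coef = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))          # best-matching atom
        step = corr[k] / (D[:, k] @ D[:, k])      # least-squares step for that atom
        coef[k] += step
        residual = residual - step * D[:, k]
    return coef, residual
```

The sparsity of the returned coefficient vector is what translates into fewer needles and seeds in the clinical interpretation.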
International Nuclear Information System (INIS)
Guthier, C; Aschenbrenner, K P; Buergy, D; Ehmann, M; Wenz, F; Hesser, J W
2015-01-01
This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced. (paper)
Energy Technology Data Exchange (ETDEWEB)
Kawata, Ryo [Gifu Univ. (Japan). Faculty of Medicine
1982-07-01
1) The upper abdominal compression method was easily applicable for CT examination in practically all the patients. It caused no harm and considerably improved CT diagnosis. 2) The materials used for compression were foamed polystyrene, the Mix-Dp and a water bag. When CT examination was performed to diagnose such lesions as a circumscribed tumor, compression with the Mix-Dp was most useful, and when it was performed for screening examination of upper abdominal diseases, compression with a water bag was most effective. 3) Improvement in the contour-depicting ability of CT by the compression method was most marked at the body of the pancreas, followed by the head of the pancreas and the posterior surface of the left lobe of the liver. Slight improvement was also seen at the tail of the pancreas and the left adrenal gland. 4) Improvement in the organ-depicting ability of CT by the compression method was estimated by a 4-category classification method. The improvement was most marked at the body and the head of the pancreas. Considerable improvement was also observed at the left lobe of the liver and both adrenal glands. Little improvement was obtained at the spleen. When contrast enhancement was combined with the compression method, improvement was promoted at organs liable to be enhanced, such as the liver and the adrenal glands, while the organ-depicting ability was decreased at the pancreas. 5) By comparing the CT image under compression with that without compression, continuous infiltration of gastric cancer into the body and the tail of the pancreas in 2 cases and retroperitoneal infiltration of a pancreatic tumor in 1 case were diagnosed preoperatively.
Czech Academy of Sciences Publication Activity Database
Kosík, Adam; Feistauer, M.; Hadrava, Martin; Horáček, Jaromír
2015-01-01
Roč. 267, September (2015), s. 382-396 ISSN 0096-3003 R&D Projects: GA ČR(CZ) GAP101/11/0207 Institutional support: RVO:61388998 Keywords : discontinuous Galerkin method * nonlinear elasticity * compressible viscous flow * fluid–structure interaction Subject RIV: BI - Acoustics Impact factor: 1.345, year: 2015 http://www.sciencedirect.com/science/article/pii/S0096300315002453/pdfft?md5=02d46bc730e3a7fb8a5008aaab1da786&pid=1-s2.0-S0096300315002453-main.pdf
Investigation of Surface Pre-Treatment Methods for Wafer-Level Cu-Cu Thermo-Compression Bonding
Directory of Open Access Journals (Sweden)
Koki Tanaka
2016-12-01
Full Text Available To increase the yield of the wafer-level Cu-Cu thermo-compression bonding method, surface pre-treatment methods are studied that allow the Cu to be exposed to the atmosphere before bonding. To inhibit re-oxidation under atmospheric conditions, the reduced pure Cu surface is treated by H2/Ar plasma, NH3 plasma and thiol solution, respectively, and is covered by Cu hydride, Cu nitride and a self-assembled monolayer (SAM) accordingly. A pair of the treated wafers is then bonded by the thermo-compression bonding method and evaluated by the tensile test. Results show that the bond strengths of the wafers treated by NH3 plasma and SAM are not sufficient, owing to the surface protection layers (Cu nitride and SAMs) remaining from the pre-treatment. In contrast, the H2/Ar plasma-treated wafer showed the same strength as one given formic acid vapor treatment, even when exposed to the atmosphere for 30 min. In the thermal desorption spectroscopy (TDS) measurement of the H2/Ar plasma-treated Cu sample, the total amount of detected H2 was 3.1 times that of the citric acid-treated one. The TDS results indicate that the modified Cu surface is terminated by chemisorbed hydrogen atoms, which leads to high bonding strength.
Security of modified Ping-Pong protocol in noisy and lossy channel.
Han, Yun-Guang; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Wang, Shuang; Guo, Guang-Can; Han, Zheng-Fu
2014-05-12
The "Ping-Pong" (PP) protocol is a two-way quantum key protocol based on entanglement. In this protocol, Bob prepares one maximally entangled pair of qubits, and sends one qubit to Alice. Then, Alice performs some necessary operations on this qubit and sends it back to Bob. Although this protocol was proposed in 2002, its security in the noisy and lossy channel has not been proven. In this report, we add a simple and experimentally feasible modification to the original PP protocol, and prove the security of this modified PP protocol against collective attacks when the noisy and lossy channel is taken into account. Simulation results show that our protocol is practical.
Lossy/Lossless Floating/Grounded Inductance Simulation Using One DDCC
Directory of Open Access Journals (Sweden)
M. A. Ibrahim
2012-04-01
Full Text Available In this work, we present new topologies for realizing one lossless grounded inductor and two floating inductors, one lossless and one lossy, employing a single differential difference current conveyor (DDCC) and a minimum number of passive components: two resistors and one grounded capacitor. The floating inductors are based on an ordinary dual-output differential difference current conveyor (DO-DDCC), while the grounded lossless inductor is based on a modified dual-output differential difference current conveyor (MDO-DDCC). The proposed lossless floating inductor is obtained from the lossy one by employing a negative impedance converter (NIC). The non-ideality effects of the active element on the simulated inductors are investigated. To demonstrate the performance of the proposed grounded inductance simulator, as an example it is used to construct a parallel resonant circuit. SPICE simulation results are given to confirm the theoretical analysis.
Scattering by multiple parallel radially stratified infinite cylinders buried in a lossy half space.
Lee, Siu-Chun
2013-07-01
The theoretical solution for scattering by an arbitrary configuration of closely spaced parallel infinite cylinders buried in a lossy half space is presented in this paper. The refractive index and permeability of the half space and cylinders are complex in general. Each cylinder is radially stratified with a distinct complex refractive index and permeability. The incident radiation is an arbitrarily polarized plane wave propagating in the plane normal to the axes of the cylinders. Analytic solutions are derived for the electric and magnetic fields and the Poynting vector of backscattered radiation emerging from the half space. Numerical examples are presented to illustrate the application of the scattering solution to calculate backscattering from a lossy half space containing multiple homogeneous and radially stratified cylinders at various depths and different angles of incidence.
A new method for robust video watermarking resistant against key estimation attacks
Mitekin, Vitaly
2015-12-01
This paper presents a new method for high-capacity robust digital video watermarking, together with algorithms for embedding and extraction of the watermark based on this method. The proposed method uses password-based two-dimensional pseudonoise arrays for watermark embedding, making brute-force attacks aimed at steganographic key retrieval mostly impractical. The proposed algorithm for generating two-dimensional "noise-like" watermarking patterns also significantly decreases the watermark collision probability (i.e., the probability of correct watermark detection and extraction using an incorrect steganographic key or password). Experimental research provided in this work also shows that a simple correlation-based watermark detection procedure can be used, providing watermark robustness against lossy compression and watermark estimation attacks. At the same time, without decreasing the robustness of the embedded watermark, the average complexity of a brute-force key retrieval attack can be increased to 10^14 watermark extraction attempts (compared to 10^4-10^6 for known robust watermarking schemes). Experimental results also show that at the lowest embedding intensity the watermark preserves its robustness against lossy compression of the host video while preserving higher video quality (PSNR up to 51 dB) compared to known wavelet-based and DCT-based watermarking algorithms.
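The password-keyed pseudonoise embedding and correlation-based detection described above can be sketched generically. The SHA-256 key derivation, the additive spread-spectrum embedding, and the embedding strength are illustrative assumptions, not the paper's exact algorithm.

```python
import hashlib
import numpy as np

def pn_pattern(shape, password):
    """Password-keyed +/-1 pseudonoise array (the keying scheme here is an
    illustrative assumption, not the paper's construction)."""
    seed = int.from_bytes(hashlib.sha256(password.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(frame, password, strength=25.0):
    """Additive spread-spectrum embedding of the keyed pattern into a frame."""
    return frame + strength * pn_pattern(frame.shape, password)

def detect(frame, password):
    """Normalized correlation with the keyed pattern; large only for the
    correct password, near zero for any other key."""
    w = pn_pattern(frame.shape, password)
    return float((frame * w).sum() / (np.linalg.norm(frame) * np.linalg.norm(w)))
```

Because quantization noise from lossy compression is nearly uncorrelated with the pseudonoise pattern, the correlation statistic degrades gracefully, which is the intuition behind the robustness results reported above.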
International Nuclear Information System (INIS)
Kokh, S.
2001-01-01
This research thesis reports the development of a numerical direct simulation of compressible two-phase flows using interface capturing methods. These techniques are based on the use of a fixed Eulerian grid to describe the flow variables as well as the interface between fluids. The author first recalls conventional interface capturing methods and distinguishes between those based on discontinuous colour functions and those based on level set functions. The approach is then extended to a five-equation model to allow the largest possible choice of state equations for the fluids. Three variants are developed, and a solver inspired by the Roe scheme is developed for one of them. These interface capturing methods are then refined, in particular to address numerical diffusion at the interface. A last part addresses the study of dynamic phase change, where non-conventional thermodynamics tools are used to study the structure of an interface undergoing phase transition.
Experiments with automata compression
Daciuk, J.; Yu, S; Daley, M; Eramian, M G
2001-01-01
Several compression methods of finite-state automata are presented and evaluated. Most compression methods used here are already described in the literature. However, their impact on the size of automata has not been described yet. We fill that gap, presenting results of experiments carried out on
International Nuclear Information System (INIS)
Pettersen, G.; Ostgaard, E.
1988-01-01
The pressure and the compressibility of solid H2 and D2 are obtained from ground-state energies calculated by means of a modified variational lowest-order constrained-variation (LOCV) method. Both fcc and hcp structures are considered, but results are given for the fcc structure only. The pressure and the compressibility are calculated or estimated from the dependence of the ground-state energy on density or molar volume, generally in a density region of 0.65σ^-3 to 1.3σ^-3, corresponding to a molar volume of 12-24 cm^3/mole, where σ = 2.958 angstrom, and the calculations are done for five different two-body potentials. Theoretical results for the pressure are 340-460 atm for solid H2 at a particle density of 0.82σ^-3 or a molar volume of 19 cm^3/mole, and 370-490 atm for solid D2 at a particle density of 0.92σ^-3 or a molar volume of 17 cm^3/mole. The corresponding experimental results are 650 and 700 atm, respectively. Theoretical results for the compressibility are 210 × 10^-6 to 260 × 10^-6 atm^-1 for solid H2 at a particle density of 0.82σ^-3 or a molar volume of 19 cm^3/mole, and 150 × 10^-6 to 180 × 10^-6 atm^-1 for solid D2 at a particle density of 0.92σ^-3 or a molar volume of 17 cm^3/mole. The corresponding experimental results are 180 × 10^-6 and 140 × 10^-6 atm^-1, respectively. The agreement with experimental results is better at higher densities.
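The derivation of pressure and compressibility from an energy-volume curve uses the standard thermodynamic relations P = -dE/dV and κ = -(1/V) dV/dP. The sketch below applies them numerically to a deliberately simple model E(V) (a repulsive 1/V^2 term plus an attractive 1/V term, arbitrary units), not to the LOCV ground-state energies of the abstract.

```python
import numpy as np

# Illustrative model energy-volume curve (NOT the LOCV results): repulsion + attraction.
A, B = 5.0e4, 2.0e3

def energy(V):
    return A / V**2 - B / V

V = np.linspace(12.0, 24.0, 2001)      # molar volume grid (cm^3/mole, as in the abstract)
P = -np.gradient(energy(V), V)         # pressure: P = -dE/dV
kappa = -np.gradient(V, P) / V         # isothermal compressibility: -(1/V) dV/dP
```

In the paper the same relations would be applied to the computed E(V) points for each two-body potential; the numerical differentiation shown here is one straightforward way to do that.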
Radiological Image Compression
Lo, Shih-Chung Benedict
The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data differs somewhat from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square error (NMSE) of the difference image, defined as the difference between the original and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
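The NMSE metric used above can be stated compactly. Definitions of NMSE vary in the literature; the version below normalizes the energy of the difference image by the energy of the original, which is one common convention and an assumption about the dissertation's exact formula.

```python
import numpy as np

def nmse(original, reconstructed):
    """Normalized mean-square error of the difference image: the squared error
    divided by the energy of the original image (one common convention)."""
    orig = np.asarray(original, dtype=float)
    diff = orig - np.asarray(reconstructed, dtype=float)
    return float((diff**2).sum() / (orig**2).sum())
```

An identical reconstruction gives NMSE 0, and reconstructing everything as zero gives NMSE 1, which makes the measure easy to interpret across compression ratios.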
Comparison of Methods to Predict Lower Bound Buckling Loads of Cylinders Under Axial Compression
Haynie, Waddy T.; Hilburger, Mark W.
2010-01-01
Results from a numerical study of the buckling response of two different orthogrid stiffened circular cylindrical shells with initial imperfections and subjected to axial compression are used to compare three different lower bound buckling load prediction techniques. These lower bound prediction techniques assume different imperfection types and include an imperfection based on a mode shape from an eigenvalue analysis, an imperfection caused by a lateral perturbation load, and an imperfection in the shape of a single stress-free dimple. The STAGS finite element code is used for the analyses. Responses of the cylinders for ranges of imperfection amplitudes are considered, and the effect of each imperfection is compared to the response of a geometrically perfect cylinder. Similar behavior was observed for shells that include a lateral perturbation load and a single dimple imperfection, and the results indicate that the predicted lower bounds are much less conservative than the corresponding results for the cylinders with the mode shape imperfection considered herein. In addition, the lateral perturbation technique and the single dimple imperfection produce response characteristics that are physically meaningful and can be validated via testing.
Liu, Yong; Shu, Chi-Wang; Zhang, Mengping
2018-02-01
We present a discontinuous Galerkin (DG) scheme with suitable quadrature rules [15] for the ideal compressible magnetohydrodynamic (MHD) equations on structured meshes. The semi-discrete scheme is shown to be entropy stable by using the symmetrizable version of the equations as introduced by Godunov [32], the entropy stable DG framework with suitable quadrature rules [15], the entropy conservative flux in [14] inside each cell, and an entropy dissipative approximate Godunov-type numerical flux at cell interfaces. The main difficulty in generalizing the results in [15] is the appearance of the non-conservative "source terms" added in the modified MHD model introduced by Godunov [32], which do not exist in the general hyperbolic system studied in [15]. Special care must be taken to discretize these "source terms" adequately so that the resulting DG scheme satisfies entropy stability. Total variation diminishing / bounded (TVD/TVB) limiters and bound-preserving limiters are applied to control spurious oscillations. We demonstrate the accuracy and robustness of this new scheme on standard MHD examples.
Muralidharan, Balaji; Menon, Suresh
2018-03-01
A high-order adaptive Cartesian cut-cell method, developed in the past by the authors [1] for simulation of compressible viscous flow over static embedded boundaries, is now extended for reacting flow simulations over moving interfaces. The main difficulty related to simulation of moving boundary problems using immersed boundary techniques is the loss of conservation of mass, momentum and energy during the transition of numerical grid cells from solid to fluid and vice versa. Gas phase reactions near solid boundaries can produce huge source terms to the governing equations, which if not properly treated for moving boundaries, can result in inaccuracies in numerical predictions. The small cell clustering algorithm proposed in our previous work is now extended to handle moving boundaries enforcing strict conservation. In addition, the cell clustering algorithm also preserves the smoothness of solution near moving surfaces. A second order Runge-Kutta scheme where the boundaries are allowed to change during the sub-time steps is employed. This scheme improves the time accuracy of the calculations when the body motion is driven by hydrodynamic forces. Simple one dimensional reacting and non-reacting studies of moving piston are first performed in order to demonstrate the accuracy of the proposed method. Results are then reported for flow past moving cylinders at subsonic and supersonic velocities in a viscous compressible flow and are compared with theoretical and previously available experimental data. The ability of the scheme to handle deforming boundaries and interaction of hydrodynamic forces with rigid body motion is demonstrated using different test cases. Finally, the method is applied to investigate the detonation initiation and stabilization mechanisms on a cylinder and a sphere, when they are launched into a detonable mixture. The effect of the filling pressure on the detonation stabilization mechanisms over a hyper-velocity sphere launched into a hydrogen
Solikin Mochamad; Setiawan Budi
2017-01-01
High volume fly ash concrete becomes one of the alternatives to produce green concrete, as it uses waste material and significantly reduces the utilization of Portland cement in concrete production. Although using less cement, its compressive strength is comparable to ordinary Portland cement (hereafter OPC) concrete and its durability increases significantly. This paper reports an investigation on the effect of design strength, fly ash content and curing method on the compressive strength of High Volume Fly ...
Directory of Open Access Journals (Sweden)
Yudong Zhang
2016-01-01
Full Text Available Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches.
Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping
2016-01-01
Aim. It can help improve the hospital throughput to accelerate magnetic resonance imaging (MRI) scanning. Patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both computation time and reconstruction quality of traditional CS-MRI did not meet the requirement of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068
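The iterative shrinkage-thresholding component at the core of EWISTARS can be sketched as plain ISTA for the l1-regularized least-squares problem. This is the generic algorithm only; the exponential wavelet transform and random-shift components that distinguish EWISTARS are not reproduced here.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: the shrinkage step."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=200):
    """Plain ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient step on the quadratic followed by shrinkage."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x
```

In CS-MRI, A would combine the undersampled Fourier operator with the (exponential) wavelet synthesis, and x the sparse wavelet coefficients of the image.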
International Nuclear Information System (INIS)
Manthei, G.; Eisenblaetter, J.; Moriya, H.; Niitsuma, H.; Jones, R.H.
2003-01-01
Collapsing is a relatively new method used for detecting patterns and structures in blurred and cloudy pictures of multiple event locations. In the case described here, the measurements were made in a very small region with a length of only a few decimeters. The events were registered during a triaxial compression experiment on a compact block of rock salt. The collapsing method showed a cellular structure of the salt block across the whole length of the test piece. The cells had a length of several cm, enclosing several grains of salt with an average grain size of less than one cm. In view of the fact that not all cell walls corresponded to acoustic emission events, it was assumed that only those grain boundaries are activated that are oriented at a favourable angle to the stress field of the test piece.
Call your health insurance or prescription plan: Find out if they pay for compression stockings. Ask if your durable medical equipment benefit pays for compression stockings. Get a prescription from your doctor. Find a medical equipment store where they can ...
Method for Cleanly and Precisely Breaking Off a Rock Core Using a Radial Compressive Force
Richardson, Megan; Lin, Justin
2011-01-01
The Mars Sample Return mission has the goal to drill, break off, and retain rock core samples. After some results gained from rock core mechanics testing, the realization that scoring teeth would cleanly break off the core after only a few millimeters of penetration, and noting that rocks are weak in tension, the idea was developed to use symmetric wedging teeth in compression to weaken and then break the core at the contact plane. This concept was developed as a response to the break-off and retention requirements. The wedges wrap around the estimated average diameter of the core to get as many contact locations as possible, and are then pushed inward, radially, through the core towards one another. This starts a crack and begins to apply opposing forces inside the core to propagate the crack across the plane of contact. The advantage is in the simplicity. Only two teeth are needed to break five varieties of Mars-like rock cores with limited penetration and reasonable forces. Its major advantage is that it does not require any length of rock to be attached to the parent in order to break the core at the desired location. Test data shows that some rocks break off on their own into segments or break off into discs. This idea would grab and retain a disc, push some discs upward and others out, or grab a segment, break it at the contact plane, and retain the portion inside of the device. It also does this with few moving parts in a simple, space-efficient design. This discovery could be implemented into a coring drill bit to precisely break off and retain any size rock core.
van der Vegt, Jacobus J.W.; van der Ven, H.
1998-01-01
A new discretization method for the three-dimensional Euler equations of gas dynamics is presented, which is based on the discontinuous Galerkin finite element method. Special attention is paid to an efficient implementation of the discontinuous Galerkin method that minimizes the number of flux
International Nuclear Information System (INIS)
Zhu, Qiong-gan; Wang, Zhi-guo; Tan, Wei
2014-01-01
The combined effect of a side-coupled gain cavity and a lossy cavity on the plasmonic response of a metal-dielectric-metal (MDM) surface plasmon polariton (SPP) waveguide is investigated theoretically using the Green's function method. Our results suggest that the gain and loss parameters influence the amplitude and phase of the fields localized in the two cavities. For the case of balanced gain and loss, the fields of the two cavities are always of equal amplitude but out of phase. A plasmon induced transparency (PIT)-like transmission peak can be achieved by the destructive interference of two fields with anti-phase. For the case of unbalanced gain and loss, some unexpected responses of the structure are generated. When the gain exceeds the loss, the system response is dissipative around the resonant frequency of the two cavities, where the sum of reflectance and transmittance becomes less than one. This is because the lossy cavity, with a stronger localized field, makes the main contribution to the system response. When the gain is less than the loss, the reverse is true. It is found that the metal loss dissipates the system energy but enables the gain cavity to have a dominant effect on the system response. This mechanism may have potential applications in optical amplification and in plasmonic waveguide switching. (paper)
Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi
2016-05-23
A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.
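The core idea of grid-based sparse DoA recovery can be sketched in a few lines. The snippet below is a minimal illustration only: orthogonal matching pursuit stands in for the paper's joint sparse representation solver, and the array size, angle grid, and source positions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 16                                          # sensors, half-wavelength spacing
grid = np.deg2rad(np.arange(-90, 91, 1.0))      # candidate DoA grid
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(grid)))  # steering dictionary

true_idx = [60, 120]                            # sources at -30 and +30 degrees
x = np.zeros(grid.size, dtype=complex)
x[true_idx] = [1.0, 0.8]
y = A @ x + 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k dictionary atoms."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return sorted(support)

support = omp(A, y, 2)   # estimated DoAs are grid[support], in radians
```

With well-separated sources, as the paper's coherence bound requires, the recovered support should land on (or immediately next to) the true grid bins.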
An analytical look at the effects of compression on medical images
Persons, Kenneth; Palisson, Patrice; Manduca, Armando; Erickson, Bradley J.; Savcenko, Vladimir
1997-01-01
This article will take an analytical look at how lossy Joint Photographic Experts Group (JPEG) and wavelet image compression techniques affect medical image content. It begins with a brief explanation of how the JPEG and wavelet algorithms work, and describes in general terms what effect they can have on image quality (removal of noise, blurring, and artifacts). It then focuses more specifically on medical image diagnostic content and explains why subtle pathologies, that may be difficult for...
Compressed sensing & sparse filtering
Carmi, Avishy Y; Godsill, Simon J
2013-01-01
This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app
International Nuclear Information System (INIS)
Hao, W; Jinji, G
2012-01-01
Compressing the vibration signal of a rolling bearing is of great significance for wireless monitoring and remote diagnosis of the fans and pumps widely used in the petrochemical industry. In this paper, according to the characteristics of rolling bearing vibration signals, a compression method based on the optimal selection of the wavelet packet basis is proposed. We analyze several main attributes of wavelet packet bases and their effect on the compression of rolling bearing vibration signals using the wavelet packet transform at various compression ratios, and propose a method for precisely selecting a wavelet packet basis. Using a measured signal, we conclude that an orthogonal wavelet packet basis with a low vanishing moment should be used to compress the vibration signal of a rolling bearing in order to obtain an accurate energy proportion between the feature bands in the spectrum of the reconstructed signal. Among these low-vanishing-moment orthogonal wavelet packet bases, the 'coif' basis achieves the best signal-to-noise ratio at the same compression ratio owing to its superior symmetry.
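The transform-threshold-reconstruct pipeline underlying such schemes can be sketched with a plain multi-level Haar transform standing in for the wavelet packet bases discussed above; the two-tone test signal and the 5% threshold are illustrative choices, not the paper's.

```python
import numpy as np

def haar_fwd(x):
    """Multi-level orthonormal Haar transform (length must be a power of two)."""
    coeffs, a = [], x.astype(float)
    while len(a) > 1:
        coeffs.append((a[0::2] - a[1::2]) / np.sqrt(2))   # detail
        a = (a[0::2] + a[1::2]) / np.sqrt(2)              # approximation
    coeffs.append(a)
    return coeffs

def haar_inv(coeffs):
    """Inverse of haar_fwd."""
    a = coeffs[-1]
    for det in reversed(coeffs[:-1]):
        out = np.empty(2 * len(a))
        out[0::2] = (a + det) / np.sqrt(2)
        out[1::2] = (a - det) / np.sqrt(2)
        a = out
    return a

t = np.linspace(0, 1, 1024, endpoint=False)
sig = np.sin(2*np.pi*50*t) + 0.3*np.sin(2*np.pi*120*t)    # two "bearing" tones

coeffs = haar_fwd(sig)
flat = np.concatenate(coeffs)
thresh = 0.05 * np.max(np.abs(flat))                      # keep only large coefficients
kept = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
rec = haar_inv(kept)

ratio = np.count_nonzero(np.concatenate(kept)) / flat.size  # fraction of coeffs stored
snr_db = 10*np.log10(np.sum(sig**2) / np.sum((sig - rec)**2))
```

The paper's contribution is precisely in choosing *which* basis to threshold in; the sketch only shows the compression mechanics common to all of them.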
Compressive force-path method unified ultimate limit-state design of concrete structures
Kotsovos, Michael D
2014-01-01
This book presents a method which simplifies and unifies the design of reinforced concrete (RC) structures and is applicable to any structural element under both normal and seismic loading conditions. The proposed method has a sound theoretical basis and is expressed in a unified form applicable to all structural members, as well as their connections. It is applied in practice through the use of simple failure criteria derived from first principles, without the need for calibration against experimental data. The method is capable of predicting not only load-carrying capacity but also the locations and modes of failure, as well as safeguarding compliance with the structural performance requirements of design codes. In this book, the concepts underlying the method are first presented for the case of simply supported RC beams. The application of the method is progressively extended so as to cover all common structural elements. For each structural element considered, evidence of the validity of the proposed method is presented t...
High thermal conductivity lossy dielectric using a multi layer configuration
Tiegs, Terry N.; Kiggans, Jr., James O.
2003-01-01
Systems and methods are described for lossy dielectrics. A lossy dielectric includes at least one high-dielectric-loss layer and at least one high-thermal-conductivity, electrically insulating layer adjacent to the high-dielectric-loss layer. A method of manufacturing a lossy dielectric includes providing at least one high-dielectric-loss layer and providing at least one high-thermal-conductivity, electrically insulating layer adjacent to the high-dielectric-loss layer. The systems and methods provide advantages because the lossy dielectrics are less costly and more environmentally friendly than the available alternatives.
Uma Vetri Selvi, G; Nadarajan, R
2015-12-01
Compression techniques are vital for efficient storage and fast transfer of medical image data. Existing compression techniques take a significant amount of time for encoding and decoding, and hence the purpose of compression is not fully served. In this paper a rapid 4-D lossy compression method constructed using data rearrangement, wavelet-based contourlet transformation and a modified binary array technique is proposed for functional magnetic resonance imaging (fMRI) images. In the proposed method, the image slices of fMRI data are rearranged so that the redundant slices form a sequence. The image sequence is then divided into slices and transformed using the wavelet-based contourlet transform (WBCT). In WBCT, the high-frequency sub-band obtained from the wavelet transform is further decomposed into multiple directional sub-bands by a directional filter bank to obtain more directional information. The relationship between the coefficients is changed in WBCT as it has more directions. The differences in parent-child relationships are handled by a repositioning algorithm. The repositioned coefficients are then subjected to quantization. The quantized coefficients are further compressed by a modified binary array technique in which the most frequently occurring value of a sequence is coded only once. The proposed method was tested on fMRI images, and the results indicate that its processing time is lower than that of the existing wavelet-based set partitioning in hierarchical trees (SPIHT) and set partitioning embedded block coder (SPECK) compression schemes [1]. The proposed method also yields better compression performance than the wavelet-based SPECK coder. The objective results show that the method achieves a good compression ratio while maintaining a peak signal-to-noise ratio above 70 for all the tested sequences. The SSIM value is equal to 1 and the CC value is greater than 0.9 for all the tested sequences.
Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
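The "lossy plus residual coding" principle that guarantees a specifiable maximum absolute error is easy to demonstrate. In this sketch a low-rank SVD approximation plays the role of the matrix-decomposition lossy layer, and uniform quantization of the residual with step 2*eps bounds the reconstruction error by eps; the data matrix, rank, and eps are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a multichannel EEG matrix (channels x samples).
X = rng.standard_normal((32, 256)) @ rng.standard_normal((256, 500))
eps = 0.01                               # user-specified maximum absolute error

# Lossy layer: rank-r SVD approximation.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 8
lossy = U[:, :r] * s[:r] @ Vt[:r]

# Residual layer: uniform quantization with step 2*eps.
# Each quantized residual value is within eps of the true residual,
# so the final reconstruction error is bounded by eps everywhere.
residual = X - lossy
q = np.round(residual / (2 * eps))       # integers: cheap to entropy-code
rec = lossy + q * 2 * eps

max_err = np.max(np.abs(X - rec))        # guaranteed <= eps
```

In the actual codec the integers `q` would then go through an arithmetic coder; the point here is only the error guarantee of the two-layer structure.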
Rojali, Salman, Afan Galih; George
2017-08-01
With the development of information technology to meet ever-growing needs, various harmful and hard-to-avoid actions are emerging. One such action is data theft. This study therefore discusses cryptography and steganography, which aim to overcome this problem. It uses a modified Vigenère cipher, least significant bit (LSB) embedding, and dictionary-based compression. To determine the performance of the approach, the peak signal-to-noise ratio (PSNR) is used as an objective measure and the mean opinion score (MOS) as a subjective one; the approach is also compared with other methods such as spread spectrum and pixel value differencing. The comparison shows that this study's approach performs better than the other methods (spread spectrum and pixel value differencing), with an MSE range of 0.0191622-0.05275 and a PSNR range of 60.909-65.306 for a hidden file size of 18 kB, and an MOS range of 4.214-4.722, i.e., image quality approaching very good.
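The LSB-embedding and PSNR-measurement steps of such a pipeline can be sketched as below. This is an illustration only: the Vigenère encryption and dictionary-compression stages of the paper are omitted, and the cover "image" and message are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in grayscale image
message = b"hidden"
bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))

# Embed: overwrite the least significant bit of the first len(bits) pixels.
stego = cover.copy().ravel()
stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits
stego = stego.reshape(cover.shape)

# Extract: read the LSBs back and repack into bytes.
out_bits = (stego.ravel()[:bits.size] & 1).astype(np.uint8)
recovered = np.packbits(out_bits).tobytes()

# Objective quality: each embedded pixel changes by at most 1 gray level,
# so the PSNR stays very high, as in the ranges the paper reports.
mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
psnr = 10 * np.log10(255**2 / mse)
```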
Mohammed, Monzoorul Haque; Dutta, Anirban; Bose, Tungadri; Chadaram, Sudha; Mande, Sharmila S
2012-10-01
An unprecedented quantity of genome sequence data is currently being generated using next-generation sequencing platforms. This has necessitated the development of novel bioinformatics approaches and algorithms that not only facilitate a meaningful analysis of these data but also aid in efficient compression, storage, retrieval and transmission of huge volumes of the generated data. We present a novel compression algorithm (DELIMINATE) that can rapidly compress genomic sequence data in a loss-less fashion. Validation results indicate relatively higher compression efficiency of DELIMINATE when compared with popular general purpose compression algorithms, namely, gzip, bzip2 and lzma. Linux, Windows and Mac implementations (both 32 and 64-bit) of DELIMINATE are freely available for download at: http://metagenomics.atc.tcs.com/compression/DELIMINATE. sharmila@atc.tcs.com Supplementary data are available at Bioinformatics online.
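The general-purpose baselines DELIMINATE was compared against are all available in Python's standard library, so a rough ratio comparison is a one-screen exercise. The synthetic uniform ACGT sequence below is illustrative, not the paper's benchmark data, and real genomes (with repeats and skewed composition) compress differently.

```python
import bz2
import gzip
import lzma
import random

random.seed(0)
seq = "".join(random.choice("ACGT") for _ in range(100_000)).encode()

sizes = {
    "gzip": len(gzip.compress(seq)),
    "bzip2": len(bz2.compress(seq)),
    "lzma": len(lzma.compress(seq)),
}
ratios = {name: size / len(seq) for name, size in sizes.items()}
```

A uniform 4-letter alphabet has 2 bits/symbol of entropy, so all three compressors should land near a ratio of 0.25 against the 8-bit input; a specialized DNA compressor aims to beat such baselines.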
Lossy effects in a nonlinear nematic optical fiber
Rodríguez, R. F.; Reyes, J. A.
2001-09-01
We use the multiple scales method to derive a generalized nonlinear Schrödinger equation that takes into account the dissipative effects in the reorientation of a nematic confined in a cylindrical waveguide. This equation has soliton-like solutions and predicts a decrease in the penetration length of the optical solitons for each propagating mode with respect to the dissipationless case.
Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows
Zwick, David; Hackl, Jason; Balachandar, S.
2017-11-01
Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that solid particles are dispersed in a continuous fluid phase. A common technique for simulating such flows is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues at the larger problem sizes typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply them to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for simulating larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.
Kawahara, Mutsuto
2016-01-01
This book focuses on the finite element method in fluid flows. It is targeted at researchers, from those just starting out up to practitioners with some experience. Part I is devoted to beginners who are already familiar with elementary calculus. Precise concepts of the finite element method required for the analysis of fluid flow are stated, starting with spring structures, which are most suitable for showing the concepts of superposition/assembling. Pipeline-system and potential-flow sections show the linear problem. The advection–diffusion section presents the time-dependent problem; mixed interpolation is explained using creeping flows, and elementary computer programs in FORTRAN are included. Part II provides information on recent computational methods and their applications to practical problems. Theories of the Streamline-Upwind/Petrov–Galerkin (SUPG) formulation, characteristic formulation, and Arbitrary Lagrangian–Eulerian (ALE) formulation, among others, are presented with practical results so...
Directory of Open Access Journals (Sweden)
Solikin Mochamad
2017-01-01
Full Text Available High volume fly ash concrete has become one of the alternatives for producing green concrete, as it uses waste material and significantly reduces the utilization of Portland cement in concrete production. Although it uses less cement, its compressive strength is comparable to that of ordinary Portland cement (hereafter OPC) concrete, and its durability increases significantly. This paper reports an investigation of the effect of design strength, fly ash content and curing method on the compressive strength of high volume fly ash concrete. The experiment and data analysis were prepared using Minitab, a statistical software package for design of experiments. The specimens were concrete cylinders with a diameter of 15 cm and a height of 30 cm, tested for compressive strength at 56 days. The results demonstrate that high volume fly ash concrete can produce compressive strength which meets the design strength, especially for high strength concrete. In addition, the best mix proportion to achieve the design strength is the combination of high strength concrete and 50% fly ash content. Moreover, the use of the spraying method for curing concrete on site is still recommended, as it does not significantly reduce the compressive strength.
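The kind of factorial-design effect estimation Minitab performs can be sketched numerically. Below is a minimal 2^3 full factorial analysis; the factor names follow the abstract, but the response values (strengths, in MPa) are made up for illustration.

```python
import numpy as np

# Coded design matrix for a 2^3 full factorial: each factor at levels -1/+1.
levels = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])

# Hypothetical 56-day compressive strengths for the 8 runs (illustrative only):
y = np.array([30, 42, 33, 45, 31, 44, 36, 49], dtype=float)

# Main effect of a factor = mean(response at +1) - mean(response at -1).
effects = {name: y[levels[:, i] == 1].mean() - y[levels[:, i] == -1].mean()
           for i, name in enumerate(["design_strength", "fly_ash", "curing"])}
```

With these invented responses, the "curing" column carries the largest main effect, which is the kind of conclusion such a design lets one read off directly.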
Karl Romstad
1964-01-01
Methods of obtaining strength and elastic properties of plastic laminates reinforced with unwoven glass fibers were evaluated using the criteria of the strength values obtained and the failure characteristics observed. Variables investigated were specimen configuration and the manner of supporting and loading the specimens. Results of this investigation indicate that...
Studies of imaging characteristics for a slab of a lossy left-handed material
International Nuclear Information System (INIS)
Shen Linfang; He Sailing
2003-01-01
The characteristics of an imaging system formed by a slab of a lossy left-handed material (LHM) are studied. The transfer function of the LHM imaging system is written in an appropriate product form, with each term having a clear physical interpretation. Even a tiny loss in the LHM may suppress the transmission of evanescent waves through the slab, and this is explained physically. An analytical expression for the resolution of the imaging system is derived. It is shown that subwavelength imaging is impossible with a realistic LHM imaging system unless the LHM slab is much thinner than the wavelength.
The Application of RPL Routing Protocol in Low Power Wireless Sensor and Lossy Networks
Directory of Open Access Journals (Sweden)
Xun Yang
2014-05-01
Full Text Available With the continuous development of computer and information technology, wireless sensors have profoundly changed the way people live. As these technologies continue to improve daily life, how to better integrate them with the RPL routing protocol has become a current research focus. Starting from wireless sensor networks, this paper briefly discusses the basic concepts, then systematically describes the background of the RPL routing protocol, the relevant standards, its working principle, topology and related terminology, and finally explores the applications of the RPL routing protocol in low-power and lossy wireless sensor networks.
Lossy effects on the lateral shifts in negative-phase-velocity medium
International Nuclear Information System (INIS)
You Yuan
2009-01-01
Theoretical investigations of the lateral shifts of the reflected and transmitted beams were performed, using the stationary-phase approach, for the planar interface of a conventional medium and a lossy negative-phase-velocity medium. The lateral shifts exhibit different behaviors beyond and below a certain angle, for both incident p-polarized and incident s-polarized plane waves. Loss in the negative-phase-velocity medium affects lateral shifts greatly, and may cause changes from negative to positive values for p-polarized incidence
Security of modified Ping-Pong protocol in noisy and lossy channel
Han, Yun-Guang; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Wang, Shuang; Guo, Guang-Can; Han, Zheng-Fu
2014-01-01
The “Ping-Pong” (PP) protocol is a two-way quantum key protocol based on entanglement. In this protocol, Bob prepares one maximally entangled pair of qubits, and sends one qubit to Alice. Then, Alice performs some necessary operations on this qubit and sends it back to Bob. Although this protocol was proposed in 2002, its security in the noisy and lossy channel has not been proven. In this report, we add a simple and experimentally feasible modification to the original PP protocol, and prove ...
Socorro, A. B.; Corres, J. M.; Del Villar, I.; Matias, I. R.; Arregui, F. J.
2014-05-01
This work presents the development and testing of an anti-gliadin antibody biosensor based on lossy mode resonances (LMRs) to detect celiac disease. Several polyelectrolytes were used in layer-by-layer assembly processes to generate the LMR and to fabricate a gliadin-embedded thin film. The LMR shifted 20 nm when immersed in a 5 ppm anti-gliadin antibodies-PBS solution, which makes this bioprobe suitable for detecting celiac disease. This is the first time, to our knowledge, that LMRs have been used to detect celiac disease, and these results suggest promising prospects for the use of such phenomena as biological detectors.
International Nuclear Information System (INIS)
Torres, V.; Beruete, M.; Sánchez, P.; Del Villar, I.
2016-01-01
An indium tin oxide (ITO) refractometer based on the generation of lossy mode resonances (LMRs) and surface plasmon resonances (SPRs) is presented. Both LMRs and SPRs are excited, in a single setup, under grazing angle incidence with Kretschmann configuration in an ITO thin-film deposited on a glass slide. The sensing capabilities of the device are demonstrated using several solutions of glycerin and water with refractive indices ranging from 1.33 to 1.47. LMRs are excited in the visible range, from 617 nm to 682 nm under TE polarization and from 533 nm to 637 nm under TM polarization, with a maximum sensitivity of 700 nm/RIU and 1200 nm/RIU, respectively. For the SPRs, a sensing range between 1375 nm and 2494 nm with a maximum sensitivity of 8300 nm/RIU is measured under TM polarization. Experimental results are supported with numerical simulations based on a modification of the plane-wave method for a one-dimensional multilayer waveguide
International Nuclear Information System (INIS)
Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei
2014-01-01
As a solution to iterative CT image reconstruction, first-order methods are prominent for the large-scale capability and the fast convergence rate O(1/k²). In practice, the CT system matrix with a large condition number may lead to slow convergence speed despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in the Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometry. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with the existing TV regularized techniques based on statistics-based weighted least-squares as well as basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques. (paper)
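The fast iterative shrinkage-thresholding algorithm (FISTA) that achieves the O(1/k²) rate mentioned above can be sketched compactly. This is an illustrative toy with l1 regularization on a small random system, not the paper's Fourier-weighted, TV-regularized CT problem; dimensions, sparsity, and the regularization weight are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy compressed-sensing problem: recover a sparse x from y = A x.
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true

lam = 0.05
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient

def soft(v, t):
    """Soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(200)
y_k, t = x.copy(), 1.0
for _ in range(300):
    # Proximal gradient step at the extrapolated point y_k.
    x_new = soft(y_k - A.T @ (A @ y_k - b) / L, lam / L)
    # Nesterov momentum update that yields the O(1/k^2) rate.
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    y_k = x_new + (t - 1) / t_new * (x_new - x)
    x, t = x_new, t_new

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Replacing the l1 prox with a TV prox and the dense matrix with (weighted) projection/backprojection operators turns this skeleton into the kind of solver the abstract describes.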
Convergence of a numerical method for the compressible Navier-Stokes system on general domains
Czech Academy of Sciences Publication Activity Database
Feireisl, Eduard; Karper, T.; Michálek, Martin
2016-01-01
Roč. 134, č. 4 (2016), s. 667-704 ISSN 0029-599X R&D Projects: GA ČR GA13-00522S Institutional support: RVO:67985840 Keywords: numerical methods * Navier-Stokes system Subject RIV: BA - General Mathematics Impact factor: 2.152, year: 2016 http://link.springer.com/article/10.1007%2Fs00211-015-0786-6
An h-p Taylor-Galerkin finite element method for compressible Euler equations
Demkowicz, L.; Oden, J. T.; Rachowicz, W.; Hardy, O.
1991-01-01
An extension of the familiar Taylor-Galerkin method to arbitrary h-p spatial approximations is proposed. Boundary conditions are analyzed, and a linear stability result for arbitrary meshes is given, showing unconditional stability for the implicitness parameter alpha not less than 0.5. The wedge and blunt body problems are solved with linear, quadratic, and cubic elements and h-adaptivity, showing the feasibility of higher orders of approximation for problems with shocks.
International Nuclear Information System (INIS)
Saurel, Richard; Franquet, Erwin; Daniel, Eric; Le Metayer, Olivier
2007-01-01
A new projection method is developed for the Euler equations to determine the thermodynamic state in computational cells. It consists of solving a mechanical relaxation problem between the various sub-volumes present in a computational cell. These sub-volumes correspond to those traversed by the various waves that produce states with different pressures, velocities, densities and temperatures. Contrary to Godunov-type schemes, the relaxed state corresponds to mechanical equilibrium only and remains out of thermal equilibrium. The pressure computation with this relaxation process replaces the use of the conventional equation of state (EOS). A simplified relaxation method is also derived and provides a specific EOS (named the Numerical EOS). The use of the Numerical EOS cures the spurious pressure oscillations that appear at contact discontinuities for fluids governed by real-gas EOS. It is then extended to the computation of interface problems separating fluids with different EOS (a liquid-gas interface, for example) with the Euler equations. The resulting method is very robust, accurate, oscillation free and conservative. For the sake of simplicity and efficiency the method is developed in a Lagrange-projection context and is validated against exact solutions. In a companion paper [F. Petitpas, E. Franquet, R. Saurel, A relaxation-projection method for compressible flows. Part II: computation of interfaces and multiphase mixtures with stiff mechanical relaxation. J. Comput. Phys. (submitted for publication)], the method is extended to the numerical approximation of a non-conservative hyperbolic multiphase flow model for interface computation and shock propagation into mixtures.
DEFF Research Database (Denmark)
Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.
2017-01-01
Fluorescence molecular tomography (FMT) is a promising tool for real time in vivo quantification of neurotransmission (NT) as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT...... matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via ℓ1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate...... and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1...
Haware, Rahul V; Bauer-Brandl, Annette; Tho, Ingunn
2010-01-01
The present work challenges a newly developed approach to tablet formulation development by using chemically identical materials (grades and brands of microcrystalline cellulose, MCC). Tablet properties with respect to process and formulation parameters (e.g. compression speed, added lubricant and Emcompress fractions) were evaluated by 2^3 factorial designs. Tablets of constant true volume were prepared on a compaction simulator at constant pressure (approx. 100 MPa). The highly repeatable and accurate force-displacement data obtained were evaluated by the simple 'in-die' Heckel method and work descriptors. Relationships and interactions between formulation, process and tablet parameters were identified and quantified by multivariate analysis techniques: principal component analysis (PCA) and partial least squares regression (PLS). The method proved able to distinguish between different grades of MCC and even between two different brands of the same grade (Avicel PH 101 and Vivapur 101). One example of an interaction was studied in more detail by a mixed-level design: the interaction effect of lubricant and Emcompress on the elastic recovery of Avicel PH 102 was demonstrated to be complex and non-linear using the development tool under investigation.
Woodie, J B; Ruggles, A J; Litsky, A S
2000-01-01
To evaluate 2 methods of midbody proximal sesamoid bone repair--fixation by a screw placed in lag fashion and circumferential wire fixation--by comparing yield load and the adjacent soft-tissue strain during monotonic loading. Experimental study. 10 paired equine cadaver forelimbs from race-trained horses. A transverse midbody osteotomy of the medial proximal sesamoid bone (PSB) was created. The osteotomy was repaired with a 4.5-mm cortex bone screw placed in lag fashion or a 1.25-mm circumferential wire. The limbs were instrumented with differential variable reluctance transducers placed in the suspensory apparatus and distal sesamoidean ligaments. The limbs were tested in axial compression in a single cycle until failure. The cortex bone screw repairs had a mean yield load of 2,908.2 N; 1 limb did not fail when tested to 5,000 N. All circumferential wire repairs failed with a mean yield load of 3,406.3 N. There was no statistical difference in mean yield load between the 2 repair methods. The maximum strain generated in the soft tissues attached to the proximal sesamoid bones was not significantly different between repair groups. All repaired limbs were able to withstand loads equal to those reportedly applied to the suspensory apparatus in vivo during walking. Each repair technique should have adequate yield strength for repair of midbody fractures of the PSB immediately after surgery.
An oscillation free shock-capturing method for compressible van der Waals supercritical fluid flows
International Nuclear Information System (INIS)
Pantano, C.; Saurel, R.; Schmitt, T.
2017-01-01
Numerical solutions of the Euler equations using real gas equations of state (EOS) often exhibit serious inaccuracies. The focus here is the van der Waals EOS and its variants (often used in supercritical fluid computations). The problems are not related to a lack of convexity of the EOS since the EOS are considered in their domain of convexity at any mesh point and at any time. The difficulties appear as soon as a density discontinuity is present with the rest of the fluid in mechanical equilibrium and typically result in spurious pressure and velocity oscillations. This is reminiscent of well-known pressure oscillations occurring with ideal gas mixtures when a mass fraction discontinuity is present, which can be interpreted as a discontinuity in the EOS parameters. We are concerned with pressure oscillations that appear just for a single fluid each time a density discontinuity is present. As a result, the combination of density in a nonlinear fashion in the EOS with diffusion by the numerical method results in violation of mechanical equilibrium conditions which are not easy to eliminate, even under grid refinement.
Sink-to-Sink Coordination Framework Using RPL: Routing Protocol for Low Power and Lossy Networks
Directory of Open Access Journals (Sweden)
Meer M. Khan
2016-01-01
Full Text Available RPL (Routing Protocol for Low Power and Lossy Networks) is recommended by the Internet Engineering Task Force (IETF) for IPv6-based LLNs (Low Power and Lossy Networks). RPL uses a proactive routing approach, and each node always maintains an active path to the sink node. Sink-to-sink coordination defines the syntax and semantics for the exchange of network parameters among sink nodes, such as network size, traffic load, and sink mobility. The coordination allows a sink to learn about the network conditions of neighboring sinks. As a result, sinks can make coordinated decisions to increase or decrease their network size, optimizing overall network performance in terms of load sharing, network lifetime, and end-to-end latency of communication. Currently, RPL does not provide any coordination framework that defines message exchange between different sink nodes for enhancing network performance. In this paper, a sink-to-sink coordination framework is proposed which utilizes the periodic route maintenance messages issued by RPL to exchange the network status observed at a sink with its neighboring sinks. The proposed framework distributes network load among sink nodes to achieve higher throughput and longer network lifetime.
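The load-sharing idea in the abstract above can be sketched in a few lines. This is a purely illustrative toy (the function name, the step size, and the rebalancing rule are assumptions, not part of the RPL specification or the paper's framework): each maintenance round, the most heavily loaded sink hands off nodes to the least loaded one.

```python
# Hypothetical sketch of sink-to-sink load sharing: each sink advertises its
# observed load, and overloaded sinks shrink their network while lightly
# loaded neighbours grow theirs. All names and constants are illustrative.

def rebalance(loads, step=1):
    """Move `step` nodes from the most- to the least-loaded sink."""
    loads = dict(loads)
    hi = max(loads, key=loads.get)    # most heavily loaded sink
    lo = min(loads, key=loads.get)    # most lightly loaded sink
    if loads[hi] - loads[lo] > step:  # only act on a real imbalance
        loads[hi] -= step
        loads[lo] += step
    return loads

loads = {"sink_A": 40, "sink_B": 10, "sink_C": 25}
for _ in range(10):                   # iterate periodic maintenance rounds
    loads = rebalance(loads, step=5)
```

After a few rounds the loads converge toward an even split, which is the qualitative behaviour (load sharing, longer lifetime) the framework targets.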
Receiver-Assisted Congestion Control to Achieve High Throughput in Lossy Wireless Networks
Shi, Kai; Shu, Yantai; Yang, Oliver; Luo, Jiarong
2010-04-01
Many applications nowadays require fast data transfer over high-speed wireless networks. However, due to its conservative congestion control algorithm, the Transmission Control Protocol (TCP) cannot effectively utilize the network capacity in lossy wireless networks. In this paper, we propose a receiver-assisted congestion control mechanism (RACC) in which the sender performs loss-based control, while the receiver performs delay-based control. The receiver measures the network bandwidth based on the packet interarrival interval and uses it to compute a congestion window size deemed appropriate for the sender. After receiving this advertised value from the receiver, the sender then uses the additive increase and multiplicative decrease (AIMD) mechanism to compute the congestion window size to be used. By integrating loss-based and delay-based congestion control, our mechanism can mitigate the effect of wireless losses, alleviate the timeout effect, and therefore make better use of network bandwidth. Simulation and experiment results in various scenarios show that our mechanism outperforms conventional TCP in high-speed and lossy wireless environments.
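The sender-side rule described above, AIMD capped by a receiver-computed window, can be made concrete with a minimal sketch. The constants and the single-loss scenario are made up for illustration; the paper's actual bandwidth estimation from packet interarrival times is not modelled here.

```python
# Toy AIMD with a receiver-advertised cap, as in receiver-assisted
# congestion control. Units are "packets per RTT"; values are illustrative.

def aimd_step(cwnd, advertised, loss):
    """One AIMD update, capped by the receiver-computed window."""
    if loss:
        cwnd = max(1.0, cwnd / 2.0)    # multiplicative decrease on loss
    else:
        cwnd += 1.0                     # additive increase per RTT
    return min(cwnd, advertised)        # never exceed receiver's estimate

cwnd = 1.0
trace = []
for rtt in range(8):
    loss = (rtt == 5)                   # a single (e.g. wireless) loss event
    cwnd = aimd_step(cwnd, advertised=6.0, loss=loss)
    trace.append(cwnd)
```

The advertised cap keeps the window near the measured capacity, so a spurious wireless loss only halves the window once instead of triggering deep backoff.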
Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong
2018-03-01
Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) mitigates the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that as long as the difference between the initial and true models permits a sparse representation, ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ1-norm constrained optimization problems and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably and even superiorly to the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
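The simplicity of the linearized Bregman method that the abstract emphasizes is easy to see in code: it is just gradient accumulation plus soft-thresholding. The sketch below applies it to a generic sparse recovery problem, not to FWI itself; the step size, threshold, and problem sizes are illustrative assumptions.

```python
# Minimal linearized Bregman iteration (delta = 1) for an l1-regularized
# underdetermined least-squares problem, the solver class the abstract
# refers to. All parameters below are illustrative.
import numpy as np

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def linearized_bregman(A, b, lam=0.1, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # safe gradient step size
    v = np.zeros(A.shape[1])                  # accumulated (dual) variable
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = v - step * A.T @ (A @ x - b)      # accumulate gradient of misfit
        x = soft_threshold(v, lam)            # shrinkage keeps x sparse
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))             # underdetermined system
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]        # sparse "model update"
b = A @ x_true
x_rec = linearized_bregman(A, b)
```

Unlike SPGℓ1, there is no projection subproblem: each iteration is one matrix-vector pass plus an elementwise shrink, which is what makes LB attractive for large-scale SPFWI.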
Urban, K.; Sicakova, A.
2017-10-01
The paper deals with the use of alternative powder additives (fly ash and a fine fraction of recycled concrete) to improve recycled concrete aggregate directly during the concrete mixing process. A specific mixing process (the triple mixing method) is applied, as it is favourable for this goal. Results of compressive strength after 2 and 28 days of hardening are given. Generally, using powder additives for coating the coarse recycled concrete aggregate in the first stage of triple mixing resulted in a decrease in compressive strength compared with cement. There is no substantial difference between samples based on recycled concrete aggregate and those based on natural aggregate as long as cement is used for coating. When using either the fly ash or the recycled concrete powder, the kind of aggregate causes more significant differences in compressive strength, with the values for those based on recycled concrete aggregate being worse.
DEFF Research Database (Denmark)
Hesthaven, Jan
1997-01-01
This paper presents asymptotically stable schemes for patching of nonoverlapping subdomains when approximating the compressible Navier-Stokes equations given on conservation form. The scheme is a natural extension of a previously proposed scheme for enforcing open boundary conditions, and as a result the patching of subdomains is local in space. The scheme is studied in detail for Burgers's equation and developed for the compressible Navier-Stokes equations in general curvilinear coordinates. The versatility of the proposed scheme for the compressible Navier-Stokes equations is illustrated...
Surawski, Nicholas C; Miljevic, Branka; Bodisco, Timothy A; Brown, Richard J; Ristovski, Zoran D; Ayoko, Godwin A
2013-02-19
Compression ignition (CI) engine design is subject to many constraints, which present a multicriteria optimization problem that the engine researcher must solve. In particular, the modern CI engine must not only be efficient but must also deliver low gaseous, particulate, and life cycle greenhouse gas emissions so that its impact on urban air quality, human health, and global warming is minimized. Consequently, this study undertakes a multicriteria analysis, which seeks to identify alternative fuels, injection technologies, and combustion strategies that could potentially satisfy these CI engine design constraints. Three data sets are analyzed with the Preference Ranking Organization Method for Enrichment Evaluations and Geometrical Analysis for Interactive Aid (PROMETHEE-GAIA) algorithm to explore the impact of (1) an ethanol fumigation system, (2) alternative fuels (20% biodiesel and synthetic diesel) and alternative injection technologies (mechanical direct injection and common rail injection), and (3) various biodiesel fuels made from 3 feedstocks (i.e., soy, tallow, and canola) tested at several blend percentages (20-100%) on the resulting emissions and efficiency profile of the various test engines. The results show that moderate ethanol substitutions (~20% by energy) at moderate load, high percentage soy blends (60-100%), and alternative fuels (biodiesel and synthetic diesel) provide an efficiency and emissions profile that yields the most "preferred" solutions to this multicriteria engine design problem. Further research is, however, required to reduce reactive oxygen species (ROS) emissions with alternative fuels and to deliver technologies that do not significantly reduce the median diameter of particle emissions.
International Nuclear Information System (INIS)
Sheehan, C.
2016-01-01
The incidence of Malignant Spinal Cord Compression (MSCC) is thought to be increasing in the UK due to an aging population and improving cancer survivorship. Such a diagnosis requires emergency treatment. In 2008 the National Institute for Health and Clinical Excellence produced guidelines on the management of MSCC, which include a recommendation to assess spinal instability. However, a lack of guidelines for assessing spinal instability in oncology patients is widely acknowledged. This can result in variations in the management of care for such patients. A spinal instability assessment can influence optimum patient care (bed rest or encouraged mobilisation) and inform the best definitive treatment modality (surgery or radiotherapy) for an individual patient. The aim of this systematic review is to identify a consensus definition of spinal instability and the methods by which it can be classified. - Highlights: • A lack of guidance on metastatic spinal instability results in variations of care. • Definitions and assessments for spinal instability are explored in this review. • A Spinal Instability Neoplastic Scoring (SINS) system has been identified. • SINS could potentially be adopted to optimise and standardise patient care.
International Nuclear Information System (INIS)
Greenough, J.A.; Rider, W.J.
2004-01-01
A numerical study is undertaken comparing a fifth-order version of the weighted essentially non-oscillatory (WENO5) method to a modern piecewise-linear, second-order version of Godunov's (PLMDE) method for the compressible Euler equations. A series of one-dimensional test problems are examined, beginning with classical linear problems and ending with complex shock interactions. The problems considered are: (1) linear advection of a Gaussian pulse in density, (2) Sod's shock tube problem, (3) the 'peak' shock tube problem, (4) a version of the Shu and Osher shock entropy wave interaction and (5) the Woodward and Colella interacting shock wave problem. For each problem and method, run times, density error norms and convergence rates are reported as produced from a common code test-bed. The linear problem exhibits the advertised convergence rate for both methods as well as the expected large disparity in overall error levels; WENO5 has the smaller errors and an enormous advantage in overall efficiency (in accuracy per unit CPU time). For the nonlinear problems with discontinuities, however, we generally see first-order self-convergence of error, measured against an exact solution or, when an analytic solution is not available, against a converged solution generated on an extremely fine grid. The overall comparison of error levels shows some variation from problem to problem. For Sod's shock tube, PLMDE has nearly half the error, while on the peak problem the errors are nearly the same. For the interacting blast wave problem the two methods again produce a similar level of error, with a slight edge for PLMDE. On the other hand, for the Shu-Osher problem, the errors are similar on the coarser grids but favor WENO5 by a factor of nearly 1.5 on the finer grids used. In all cases holding mesh resolution constant, though, PLMDE is less costly in terms of CPU time by approximately a factor of 6. If the CPU cost is taken as fixed, that is run times are
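The convergence rates quoted above are conventionally computed from error norms on successive grid refinements. A quick sketch of that calculation (the error values below are made up to illustrate a fifth-order scheme, they are not the paper's numbers):

```python
# Observed order of accuracy from error norms on two grids whose spacing
# differs by a refinement factor (here 2). Illustrative numbers only.
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    return math.log(err_coarse / err_fine) / math.log(refinement)

# Halving h divides the error by ~2**5 = 32 for a fifth-order scheme
order = observed_order(3.2e-4, 1.0e-5)
```

The same formula applied to errors measured against a very fine reference grid gives the "self-convergence" rates discussed for the discontinuous test problems.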
Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images
Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.
2014-03-01
Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.
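The evaluation criteria named above, achievable compression ratio and preservation of image quality, can be made concrete with two standard metrics. This is a generic sketch on a toy array, not the paper's protocol; zlib stands in for a lossless coder and coarse quantization for a lossy one.

```python
# Compression ratio and PSNR, the two standard figures of merit for
# comparing lossless and lossy codecs. Toy data, illustrative only.
import zlib
import numpy as np

def compression_ratio(raw: bytes, compressed: bytes) -> float:
    return len(raw) / len(compressed)

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

img = np.tile(np.arange(256, dtype=np.uint8), (64, 1))  # toy 8-bit image
raw = img.tobytes()
lossless = zlib.compress(raw)        # lossless: bit-exact on decompression
lossy = img // 16 * 16               # coarse quantization stands in for lossy
ratio = compression_ratio(raw, lossless)
quality = psnr(img, lossy)
```

A lossless coder scores infinite PSNR at a modest ratio; a lossy coder trades PSNR for ratio, which is exactly the trade-off the study characterizes in the interferogram vs. A-scan domains.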
Image compression in nuclear medicine
International Nuclear Information System (INIS)
Rebelo, M.S.; Furuie, S.S.; Moura, L.
1992-01-01
The performance of two methods for image compression in nuclear medicine was evaluated: the exact (lossless) LZW method and the approximate (lossy) cosine-transform method. The results showed that the approximate method produced images of agreeable quality for visual analysis, with compression rates considerably higher than those of the exact method. (C.G.C.)
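The lossy half of the comparison above rests on the cosine transform concentrating image energy in a few coefficients, so that small ones can be discarded. A minimal 1-D sketch (toy signal and threshold, not the paper's codec):

```python
# Cosine-transform approximation: transform, drop small coefficients,
# invert. The orthonormal DCT-II matrix is built directly from its
# definition; signal and threshold are illustrative.
import numpy as np

def dct_ii(n):
    """Orthonormal DCT-II matrix (rows are frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

C = dct_ii(64)
signal = np.cos(2 * np.pi * np.arange(64) / 64)  # smooth test signal
coeffs = C @ signal
coeffs[np.abs(coeffs) < 0.1] = 0.0               # discard small coefficients
approx = C.T @ coeffs                            # inverse of orthonormal DCT
err = float(np.max(np.abs(approx - signal)))
```

For smooth data the discarded coefficients carry little energy, so the reconstruction error stays small while most coefficients become zero and compress well, unlike LZW, which reproduces the data exactly but gains less.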
DEFF Research Database (Denmark)
Hansen, Troels Vejle; Kim, Oleksiy S.; Breinbjerg, Olav
2014-01-01
For spherical antennas consisting of a solid magnetodielectric lossy core with an impressed surface current density exciting a superposition of the TE_mn and TM_mn spherical modes, we analytically determine the radiation quality factor Q and radiation efficiency e. Also, we...
Graphene Oxide in Lossy Mode Resonance-Based Optical Fiber Sensors for Ethanol Detection
Directory of Open Access Journals (Sweden)
Miguel Hernaez
2017-12-01
Full Text Available The influence of graphene oxide (GO over the features of an optical fiber ethanol sensor based on lossy mode resonances (LMR has been studied in this work. Four different sensors were built with this aim, each comprising a multimode optical fiber core fragment coated with a SnO2 thin film. Layer by layer (LbL coatings made of 1, 2 and 4 bilayers of polyethyleneimine (PEI and graphene oxide were deposited onto three of these devices and their behavior as aqueous ethanol sensors was characterized and compared with the sensor without GO. The sensors with GO showed much better performance with a maximum sensitivity enhancement of 176% with respect to the sensor without GO. To our knowledge, this is the first time that GO has been used to make an optical fiber sensor based on LMR.
Performance evaluation of a lossy transmission lines based diode detector at cryogenic temperature.
Villa, E; Aja, B; de la Fuente, L; Artal, E
2016-01-01
This work is focused on the design, fabrication, and performance analysis of a square-law Schottky diode detector based on lossy transmission lines working at cryogenic temperature (15 K). The design analysis of a microwave detector based on a planar gallium-arsenide low effective Schottky barrier height diode is reported, aimed at achieving high input return loss as well as flat sensitivity versus frequency. The designed circuit demonstrates good sensitivity, as well as good return loss over a wide bandwidth at Ka-band, at both room (300 K) and cryogenic (15 K) temperatures. A good sensitivity of 1000 mV/mW and an input return loss better than 12 dB have been achieved when it works as a zero-bias Schottky diode detector at room temperature, with the sensitivity increasing to at least 2200 mV/mW, at the cost of a DC bias current, at cryogenic temperature.
Directory of Open Access Journals (Sweden)
Ari Shawakat Tahir
2015-12-01
Full Text Available Steganography is the art and science of hiding information by embedding messages within other, seemingly harmless messages, and it is an active area of research. The proposed system uses the AES algorithm and a lossy technique to overcome the limitations of previous work and to increase processing speed. The sender uses the AES algorithm to encrypt the message and the image, then uses the LSB technique to hide the encrypted data in the encrypted image. The receiver recovers the original data using the keys that were used in the encryption process. The proposed system has been implemented in NetBeans 7.3 and tested with images and data of different sizes to measure the system's speed.
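The LSB hiding step mentioned above can be sketched independently of the encryption stage. In this toy version the AES step is omitted (it would require a crypto library); `payload` stands in for the already-encrypted bytes, and the byte-array "image" is illustrative.

```python
# Minimal LSB steganography: write each payload bit into the least
# significant bit of successive cover bytes, changing each by at most 1.

def lsb_embed(cover: bytes, payload: bytes) -> bytes:
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(cover), "cover too small for payload"
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # overwrite least significant bit
    return bytes(stego)

def lsb_extract(stego: bytes, n_bytes: int) -> bytes:
    bits = [b & 1 for b in stego[: n_bytes * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes)
    )

cover = bytes(range(256))            # toy "image" pixel bytes
hidden = lsb_embed(cover, b"key")
recovered = lsb_extract(hidden, 3)
```

Because only the lowest bit of each pixel changes, the stego image is visually indistinguishable from the cover, which is the property the system relies on.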
Enhanced inertia from lossy effective fluids using multi-scale sonic crystals
Directory of Open Access Journals (Sweden)
Matthew D. Guild
2014-12-01
Full Text Available In this work, a recent theoretically predicted phenomenon of enhanced permittivity with electromagnetic waves using lossy materials is investigated for the analogous case of mass density and acoustic waves, which represents inertial enhancement. Starting from fundamental relationships for the homogenized quasi-static effective density of a fluid host with fluid inclusions, theoretical expressions are developed for the conditions on the real and imaginary parts of the constitutive fluids to have inertial enhancement, which are verified with numerical simulations. Realizable structures are designed to demonstrate this phenomenon using multi-scale sonic crystals, which are fabricated using a 3D printer and tested in an acoustic impedance tube, yielding good agreement with the theoretical predictions and demonstrating enhanced inertia.
Ohlhorst, Craig W.; Sawyer, James Wayne; Yamaki, Y. Robert
1989-01-01
An experimental evaluation has been conducted to ascertain the usefulness of two techniques for measuring in-plane compressive failure strength and modulus in coated and uncoated carbon-carbon composites. The techniques involved testing specimens with potted ends as well as testing them in a novel clamping fixture; specimen shape, length, gage width, and thickness were the test parameters investigated for both coated and uncoated 0/90 deg and +/-45 deg laminates. It is found that specimen shape does not have a significant effect on the measured compressive properties. The potting of specimen ends results in slightly higher measured compressive strengths than those obtained with the new clamping fixture. Comparable modulus values are obtained by both techniques.
Mitrofanov, O.; Pavelko, I.; Varickis, S.; Vagele, A.
2018-03-01
The necessity of considering both strength criteria and postbuckling effects in calculating the load-carrying capacity in compression of thin-wall composite structures with impact damage is substantiated. An original applied method ensuring solution of these problems with an accuracy sufficient for practical design tasks is developed. The main advantage of the method is its modest demands in terms of computing resources and the set of initial data required. The results of applying the method to the problem of compression of fragments of a thin-wall honeycomb panel damaged by impacts of various energies are presented. After a comparison of calculation results with experimental data, a working algorithm for calculating the reduction in the load-carrying capacity of a composite object with impact damage is adopted.
New Methods of Stereo Encoding for FM Radio Broadcasting Based on Digital Technology
Directory of Open Access Journals (Sweden)
P. Stranak
2007-12-01
Full Text Available The article describes new methods of stereo encoding for FM radio broadcasting. Digital signal processing makes it possible to construct an encoder with properties that are not attainable using conventional analog solutions. The article describes the mathematical model of the encoder, on the basis of which a specific program code for a DSP was developed. The article further deals with a new method of composite clipping which does not introduce impurities into the output spectrum, while preserving high separation between the left and right audio channels. The new method is useful mainly where unwanted signal overshoots occur at the input of the stereo encoder, e.g., in the case of signal transmission from the studio to the transmitter site through a route with psychoacoustic lossy compression of the data rate.
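The composite (multiplex) signal such an encoder produces follows the standard FM stereo definition: the L+R sum at baseband, a 19 kHz pilot, and the L-R difference on a 38 kHz DSB-SC subcarrier. A minimal digital sketch (the amplitude coefficients are the conventional ones, shown for illustration; pre-emphasis and the paper's clipping method are not modelled):

```python
# Digital generation of an FM stereo multiplex signal, the signal a DSP
# stereo encoder computes before clipping. Illustrative parameters.
import numpy as np

fs = 192_000                                   # sample rate well above 53 kHz
t = np.arange(fs // 100) / fs                  # 10 ms of signal
left = np.sin(2 * np.pi * 1000 * t)            # 1 kHz test tone, left only
right = np.zeros_like(t)

mono = 0.45 * (left + right)                   # L+R baseband component
pilot = 0.1 * np.sin(2 * np.pi * 19_000 * t)   # 19 kHz pilot tone
diff = 0.45 * (left - right) * np.sin(2 * np.pi * 38_000 * t)  # DSB-SC L-R

mpx = mono + pilot + diff                      # composite stereo signal
```

Overshoots arise when the clipped components recombine; clipping the composite directly, as the article proposes, must then avoid spilling distortion products into the pilot and subcarrier regions of this spectrum.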
Teżyk, Michał; Jakubowska, Emilia; Milczewska, Kasylda; Milanowski, Bartłomiej; Voelkel, Adam; Lulek, Janina
2017-06-01
The aim of this article is to compare the gravitational powder blend loading method to the tablet press and manual loading in terms of their influence on tablets' critical quality attributes (CQA). The results of the study can be of practical relevance to the pharmaceutical industry in the area of direct compression of low-dose formulations, which could be prone to content uniformity (CU) issues. In the preliminary study, particle size distribution (PSD) and surface energy of raw materials were determined using laser diffraction method and inverse gas chromatography, respectively. For trials purpose, a formulation containing two pharmaceutical ingredients (APIs) was used. Tablet samples were collected during the compression progress to analyze their CQAs, namely assay and CU. Results obtained during trials indicate that tested direct compression powder blend is sensitive to applied powder handling method. Mild increase in both APIs content was observed during manual scooping. Gravitational approach (based on discharge into the drum) resulted in a decrease in CU, which is connected to a more pronounced assay increase at the end of tableting than in the case of manual loading. The correct design of blend transfer over single unit processes is an important issue and should be investigated during the development phase since it may influence the final product CQAs. The manual scooping method, although simplistic, can be a temporary solution to improve the results of API's content and uniformity when compared to industrial gravitational transfer.
Cheng, Jian; Yue, Huiqiang; Yu, Shengjiao; Liu, Tiegang
2018-06-01
In this paper, an adjoint-based high-order h-adaptive direct discontinuous Galerkin method is developed and analyzed for the two-dimensional steady-state compressible Navier-Stokes equations. Particular emphasis is devoted to the analysis of adjoint consistency for three different direct discontinuous Galerkin discretizations: the original direct discontinuous Galerkin method (DDG), the direct discontinuous Galerkin method with interface correction (DDG(IC)) and the symmetric direct discontinuous Galerkin method (SDDG). Theoretical analysis shows that the extra interface correction term adopted in the DDG(IC) and SDDG methods plays a key role in preserving adjoint consistency. To be specific, for the model problem considered in this work, we prove that the original DDG method is not adjoint consistent, while the DDG(IC) and SDDG methods can be adjoint consistent with appropriate treatment of boundary conditions and correct modifications of the underlying output functionals. The performance of these three DDG methods is carefully investigated and evaluated through typical test cases. Based on the theoretical analysis, an adjoint-based h-adaptive DDG(IC) method is further developed and evaluated; numerical experiments show its potential in applications of adjoint-based adaptation for simulating compressible flows.
Xu, Bowen; Zhang, Qingsong; An, Siqi; Pei, Baorui; Wu, Xiaobo
2017-08-01
To establish a model of compression fracture of the acetabular dome, and to measure the contact characteristics of the acetabular weight-bearing area after 3 kinds of internal fixation. Sixteen fresh adult half-pelvis specimens were randomly divided into 4 groups, 4 specimens each. Group D was the complete acetabulum (control group), and in the remaining 3 groups an acetabular dome compression fracture model was prepared. The fractures were fixed with a reconstruction plate in group A, antegrade raft screws in group B, and retrograde raft screws in group C. Pressure-sensitive films were attached to the femoral head, and an axial compression test was carried out in the inverted single-leg standing position. The weight-bearing area, average stress, and peak stress were measured in each group. Under a loading of 500 N, the acetabular weight-bearing area was significantly higher in group D than in the other 3 groups (P<0.05); the weight-bearing areas were significantly higher in groups B and C than in group A, and the average stress and peak stress were significantly lower than in group A (P<0.05). For the compression fracture of the acetabular dome, the contact characteristics of the weight-bearing area cannot be restored to the normal level, even if anatomical reduction and rigid internal fixation are performed; compared with reconstruction plate fixation, antegrade and retrograde raft screw fixation can increase the weight-bearing area, reduce the average stress and peak stress, and reduce the incidence of traumatic arthritis.
Energy Technology Data Exchange (ETDEWEB)
Costa, Gustavo Koury
2004-11-15
Although incompressible fluid flows can be regarded as a particular case of a general problem, the numerical methods and mathematical formulations aimed at solving compressible and incompressible flows have their own peculiarities, in such a way that it is generally not possible to attain both regimes with a single approach. In this work, we start from a typically compressible formulation, slightly modified to make use of pressure variables, and, through augmenting the stabilising parameters, we end up with a simplified model which is able to deal with a wide range of flow regimes, from supersonic to low-speed gas flows. The resulting methodology is flexible enough to allow for the simulation of liquid flows as well. Examples using conservative and pressure variables are shown and the results are compared to those published in the literature, in order to validate the method. (author)
The impact of chest compression rates on quality of chest compressions : a manikin study
Field, Richard A.; Soar, Jasmeet; Davies, Robin P.; Akhtar, Naheed; Perkins, Gavin D.
2012-01-01
Purpose: Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Methods: Twenty healthcare professionals performed two minutes of co...
Discrete Wigner Function Reconstruction and Compressed Sensing
Zhang, Jia-Ning; Fang, Lei; Ge, Mo-Lin
2011-01-01
A new reconstruction method for Wigner function is reported for quantum tomography based on compressed sensing. By analogy with computed tomography, Wigner functions for some quantum states can be reconstructed with less measurements utilizing this compressed sensing based method.
Directory of Open Access Journals (Sweden)
Zasadzka E
2018-05-01
Full Text Available Ewa Zasadzka,1 Tomasz Trzmiel,1 Maria Kleczewska,2 Mariola Pawlaczyk1 1Department of Geriatric Medicine and Gerontology, Karol Marcinkowski University of Medical Sciences, Poznan, Poland; 2Day Rehabilitation Center, Hospicjum Palium, Poznań, Poland Background: Lymphedema is a chronic condition which significantly lowers patients' quality of life, particularly among elderly populations, whose mobility and physical function are often reduced. Objectives: The aim of the study was to compare the effectiveness of multi-layer compression bandaging (MCB) and complex decongestive therapy (CDT), and to show that MCB is a cheaper, more accessible and less labor-intensive method of treating lymphedema in elderly patients. Patients and methods: The study included 103 patients (85 women and 18 men) aged ≥60 years, with unilateral lower limb lymphedema. The subjects were divided into two groups: 50 treated with CDT and 53 with MCB. Pre- and post-treatment BMI, and the average and maximum circumference of the edematous extremities, were analyzed. Results: Reduction in swelling in both groups was achieved after 15 interventions. Both therapies demonstrated similar efficacy in reducing limb volume and circumference, but MCB showed greater efficacy in reducing the maximum circumference. Conclusion: Compression bandaging is a vital component of CDT; maximum lymphedema reduction during therapy cannot be achieved or maintained without it. It is also effective as a stand-alone method, which can reduce therapy costs and improve accessibility. Keywords: lymphedema, elderly, therapy, compression bandaging
Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)
Schmalz, Tyler; Ryan, Jack
2011-01-01
Automatic Ground Collision Avoidance Systems (Auto-GCAS) utilizes Digital Terrain Elevation Data (DTED) stored onboard a plane to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique resulting in better accuracy. This, in turn, would minimize unnecessary activation of A-GCAS during flight as well as maximizing its contribution to fighter safety.
Huimerind, Jaak, 1957-
2015-01-01
The renovated stable building of Maarjamäe Castle of the Estonian History Museum, Pirita tee 66, Tallinn, completed in 2014. Architect: Jaak Huimerind (Studio Paralleel OÜ); interior architect and exhibition designer: Tarmo Piirmets (Pink OÜ). Renovation prize of the Architecture Endowment of the Cultural Endowment of Estonia, 2014.
Energy Technology Data Exchange (ETDEWEB)
Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)
2012-07-01
Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures, which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure, resulting in gas velocities above the critical velocity needed to lift water, oil and condensate to the surface, regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost-effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures, reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economic standpoint hinges on the testing, installation and operation of the equipment. Key challenges, suggested equipment features designed to combat those challenges, and successful case histories throughout Latin America are discussed. (author)
Evaluation of a new image compression technique
International Nuclear Information System (INIS)
Algra, P.R.; Kroon, H.M.; Noordveld, R.B.; DeValk, J.P.J.; Seeley, G.W.; Westerink, P.H.
1988-01-01
The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in the diagnostic contents between originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish compressed versions from original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen
Shecter, Liat; Oiknine, Yaniv; August, Isaac; Stern, Adrian
2017-09-01
Recently we presented a Compressive Sensing Miniature Ultra-spectral Imaging System (CS-MUSI). This system consists of a single Liquid Crystal (LC) phase retarder as a spectral modulator and a grayscale sensor array to capture a multiplexed signal of the imaged scene. By designing the LC spectral modulator in compliance with the Compressive Sensing (CS) guidelines and applying appropriate algorithms, we demonstrated reconstruction of spectral (hyper/ultra) datacubes from an order of magnitude fewer samples than taken by conventional sensors. The LC modulator is designed to have an effective width of a few tens of micrometers; therefore it is prone to imperfections and spatial nonuniformity. In this work, we present the study of this nonuniformity and present a mathematical algorithm that allows the inference of the spectral transmission over the entire cell area from only a few calibration measurements.
Parsani, Matteo
2016-10-04
Staggered grid, entropy stable discontinuous spectral collocation operators of any order are developed for the compressible Euler and Navier--Stokes equations on unstructured hexahedral elements. This generalization of previous entropy stable spectral collocation work [M. H. Carpenter, T. C. Fisher, E. J. Nielsen, and S. H. Frankel, SIAM J. Sci. Comput., 36 (2014), pp. B835--B867, M. Parsani, M. H. Carpenter, and E. J. Nielsen, J. Comput. Phys., 292 (2015), pp. 88--113], extends the applicable set of points from tensor product, Legendre--Gauss--Lobatto (LGL), to a combination of tensor product Legendre--Gauss (LG) and LGL points. The new semidiscrete operators discretely conserve mass, momentum, energy, and satisfy a mathematical entropy inequality for the compressible Navier--Stokes equations in three spatial dimensions. They are valid for smooth as well as discontinuous flows. The staggered LG and conventional LGL point formulations are compared on several challenging test problems. The staggered LG operators are significantly more accurate, although more costly from a theoretical point of view. The LG and LGL operators exhibit similar robustness, as is demonstrated using test problems known to be problematic for operators that lack a nonlinear stability proof for the compressible Navier--Stokes equations (e.g., discontinuous Galerkin, spectral difference, or flux reconstruction operators).
Directory of Open Access Journals (Sweden)
Wonsuk Jung
2017-01-01
Full Text Available This paper investigates the effect of high-temperature curing methods on the compressive strength of concrete containing high volumes of ground granulated blast-furnace slag (GGBS). GGBS was used to replace Portland cement at a replacement ratio of 60% by binder mass. The high-temperature curing parameters used in this study were the delay period, temperature rise, peak temperature (PT), peak period, and temperature down. Test results demonstrate that the compressive strength of the samples with PTs of 65°C and 75°C was about 88% higher than that of the samples with a PT of 55°C after 1 day. According to this investigation, there might be optimum high-temperature curing conditions for preparing a concrete containing high volumes of GGBS, and incorporating GGBS into precast concrete mixes can be a very effective tool in increasing the applicability of this by-product.
Directory of Open Access Journals (Sweden)
Alberto Apostolico
2009-08-01
Full Text Available The Web Graph is a large-scale graph that does not fit in main memory, so that lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval of the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs. Tests on several datasets in common use achieve space savings of about 10% over existing methods.
Comparative data compression techniques and multi-compression results
International Nuclear Information System (INIS)
Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H
2013-01-01
Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time savings. In communication, the goal is always to transmit data efficiently and free of noise. This paper provides some compression techniques for lossless text-type data compression and comparative results of multiple and single compression, which will help to identify the better compression output and to develop compression algorithms.
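As a rough illustration of the single- versus multiple-compression comparison the abstract describes, the sketch below (the codec pairing and sample text are our own choices, not the paper's) compresses redundant text once and then applies a second codec to the already-compressed output:

```python
import bz2
import zlib

def compression_report(data: bytes) -> dict:
    """Compare one lossless compression pass against a second pass on its output."""
    single = zlib.compress(data, 9)    # first compression pass
    double = bz2.compress(single)      # recompress the zlib output with bz2
    return {"original": len(data), "single": len(single), "double": len(double)}

# Highly redundant text compresses well once; the zlib output is close to
# random, so a second pass typically gains little or even loses ground.
sizes = compression_report(b"the quick brown fox jumps over the lazy dog " * 200)
```

Running comparisons like this over several codecs and inputs is one way to reproduce the kind of single- versus multi-compression results the paper tabulates.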
Directory of Open Access Journals (Sweden)
Jerry D. Gibson
2016-06-01
Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.
Directory of Open Access Journals (Sweden)
Jijian Lian
2017-05-01
Full Text Available A better understanding of the complex mechanical properties of ice is the foundation for predicting the ice failure process and avoiding potential ice threats. In the present study, uniaxial compressive strength and fracture mode of natural lake ice are investigated over a moderate strain-rate range of 0.4–10 s−1 at −5 °C and −10 °C. The digital speckle correlation method (DSCM) is used for deformation measurement through constructing artificial speckle on the ice sample surface in advance, and two dynamic load cells are employed to measure the dynamic load, monitoring the equilibrium of the forces at the two ends under high-speed loading. The relationships between uniaxial compressive strength and strain-rate, temperature, loading direction, and air porosity are investigated, and the fracture mode of ice at moderate rates is also discussed. The experimental results show that there exists a significant difference between the true strain-rate and the nominal strain-rate derived from actuator displacement under dynamic loading conditions. Over the employed strain-rate range, the dynamic uniaxial compressive strength of lake ice shows positive strain-rate sensitivity and decreases with increasing temperature. Ice shows greater strength when it has lower air porosity and is loaded vertically. The fracture mode of ice appears to be a combination of splitting failure and crushing failure.
Generalized massive optimal data compression
Alsing, Justin; Wandelt, Benjamin
2018-05-01
In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
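In the abstract's terms, the n summaries are the components of the score evaluated at a fiducial parameter point; for Gaussian data with a parameter-dependent mean and fixed covariance this reduces to the familiar linear (Karhunen-Loève/MOPED-type) compression. A sketch of both formulas, with notation of our own choosing:

```latex
% Score compression: one summary per parameter \theta_\alpha,
% evaluated at the fiducial point \theta_*
t_\alpha = \left.\frac{\partial \ln \mathcal{L}(\boldsymbol{d};\boldsymbol{\theta})}
                      {\partial \theta_\alpha}\right|_{\boldsymbol{\theta}_*},
\qquad \alpha = 1,\dots,n

% Gaussian data with mean \mu(\theta) and fixed covariance C:
% the score is linear in the data, recovering lossless linear compression
\boldsymbol{t} = \nabla_{\boldsymbol{\theta}}\boldsymbol{\mu}^{\mathsf{T}}\,
                 \mathbf{C}^{-1}\,(\boldsymbol{d} - \boldsymbol{\mu}_*)
```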
International Nuclear Information System (INIS)
Li, X.L.
1993-01-01
Computation of three-dimensional (3-D) Rayleigh--Taylor instability in compressible fluids is performed on a MIMD computer. A second-order TVD scheme is applied with a fully parallelized algorithm to the 3-D Euler equations. The computational program is implemented for a 3-D study of bubble evolution in the Rayleigh--Taylor instability with varying bubble aspect ratio and for large-scale simulation of a 3-D random fluid interface. The numerical solution is compared with the experimental results by Taylor
Compression of TPC data in the ALICE experiment
International Nuclear Information System (INIS)
Nicolaucig, A.; Mattavelli, M.; Carrato, S.
2002-01-01
In this paper two algorithms for the compression of the data generated by the Time Projection Chamber (TPC) detector of the ALICE experiment at CERN are described. The first algorithm is based on a lossless source code modeling technique, i.e. the original TPC signal information can be reconstructed without errors at the decompression stage. The source model exploits the temporal correlation that is present in the TPC data to reduce the entropy of the source. The second algorithm is based on a source model which is lossy if samples of the TPC signal are considered one by one. Conversely, the source model is lossless or quasi-lossless if some physical quantities that are of main interest for the experiment are considered. These quantities are the area and the location of the center of mass of each TPC signal pulse. Obviously entropy coding is applied to the set of events defined by the two source models to reduce the bit rate to the corresponding source entropy. Using TPC simulated data according to the expected ALICE TPC performance, the lossless and the lossy compression algorithms achieve a data reduction, respectively, to 49.2% and in the range of 34.2% down to 23.7% of the original data rate. The number of operations per input symbol required to implement the compression stage for both algorithms is relatively low, so that a real-time implementation embedded in the TPC data acquisition chain using low-cost integrated electronics is a realistic option to effectively reduce the data storing cost of ALICE experiment
Kosiel, Kamil; Koba, Marcin; Masiewicz, Marcin; Śmietana, Mateusz
2018-06-01
The paper shows application of the atomic layer deposition (ALD) technique as a tool for tailoring the sensorial properties of lossy-mode-resonance (LMR)-based optical fiber sensors. Hafnium dioxide (HfO2), zirconium dioxide (ZrO2), and tantalum oxide (TaxOy), high-refractive-index dielectrics that are particularly convenient for LMR-sensor fabrication, were deposited by low-temperature (100 °C) ALD, ensuring safe conditions for thermally vulnerable fibers. The applicability of HfO2 and ZrO2 overlays, deposited with the atomic-level thickness accuracy inherent to ALD, to the fabrication of LMR sensors with controlled sensorial properties is presented. Additionally, for the first time to the best of our knowledge, a double-layer overlay composed of two different materials - silicon nitride (SixNy) and TaxOy - is presented for LMR fiber sensors. The thin films of this overlay were deposited by two different techniques - PECVD (the SixNy) and ALD (the TaxOy). This approach ensures fast overlay fabrication and, at the same time, allows tuning of the resonant wavelength, yielding devices with satisfactory sensorial properties.
Delay reduction in lossy intermittent feedback for generalized instantly decodable network coding
Douik, Ahmed S.; Sorour, Sameh; Alouini, Mohamed-Slim; Ai-Naffouri, Tareq Y.
2013-01-01
In this paper, we study the effect of lossy intermittent feedback loss events on the multicast decoding delay performance of generalized instantly decodable network coding. These feedback loss events create uncertainty at the sender about the reception status of different receivers and thus uncertainty in accurately determining subsequent instantly decodable coded packets. To solve this problem, we first identify the different possibilities of uncertain packets at the sender and their probabilities. We then derive the expression of the mean decoding delay. We formulate the Generalized Instantly Decodable Network Coding (G-IDNC) minimum decoding delay problem as a maximum weight clique problem. Since finding the optimal solution is NP-hard, we design a variant of the algorithm employed in [1]. Our algorithm is compared to the two blind graph update approaches proposed in [2] through extensive simulations. Results show that our algorithm outperforms the blind approaches in all situations and achieves a tolerable degradation, against perfect feedback, for large feedback loss periods. © 2013 IEEE.
Temporal compressive sensing systems
Reed, Bryan W.
2017-12-12
Methods and systems for temporal compressive sensing are disclosed, where within each of one or more sensor array data acquisition periods, one or more sensor array measurement datasets comprising distinct linear combinations of time slice data are acquired, and where mathematical reconstruction allows for calculation of accurate representations of the individual time slice datasets.
Yeh, Pen-Shu (Inventor)
1998-01-01
A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
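A minimal sketch of the double-difference pre-coding idea for the two-adjacent-bands case (function names are hypothetical, not from the patent; integer samples, showing the adjacent-delta-of-cross-delta variant only):

```python
def adjacent_delta(values):
    """First-order differences within one data set (keeps the first value)."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def cross_delta(band_a, band_b):
    """Element-wise difference between two correlated data sets."""
    return [b - a for a, b in zip(band_a, band_b)]

def double_difference(band_a, band_b):
    """Adjacent-delta applied to the cross-delta of two correlated bands.

    Correlated bands produce a cross-delta stream of small, repetitive
    values; differencing that stream again concentrates it further around
    zero, which helps a downstream entropy coder.
    """
    return adjacent_delta(cross_delta(band_a, band_b))

def undo(band_a, dd):
    """Invert the double difference given the first original band."""
    cross = [dd[0]]
    for d in dd[1:]:
        cross.append(cross[-1] + d)
    return [a + c for a, c in zip(band_a, cross)]

band1 = [10, 12, 15, 19, 24]
band2 = [11, 13, 17, 21, 27]            # correlated with band1
dd = double_difference(band1, band2)
assert undo(band1, dd) == band2         # the transform is invertible (lossless)
```

The double-difference stream can then be handed to either an entropy coder (for the distortionless path) or a lossy compressor, as the abstract describes.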
Cheng, Jian; Zhang, Fan; Liu, Tiegang
2018-06-01
In this paper, a class of new high order reconstructed DG (rDG) methods based on the compact least-squares (CLS) reconstruction [23,24] is developed for simulating the two dimensional steady-state compressible flows on hybrid grids. The proposed method combines the advantages of the DG discretization with the flexibility of the compact least-squares reconstruction, which exhibits its superior potential in enhancing the level of accuracy and reducing the computational cost compared to the underlying DG methods with respect to the same number of degrees of freedom. To be specific, a third-order compact least-squares rDG(p1p2) method and a fourth-order compact least-squares rDG(p2p3) method are developed and investigated in this work. In this compact least-squares rDG method, the low order degrees of freedom are evolved through the underlying DG(p1) method and DG(p2) method, respectively, while the high order degrees of freedom are reconstructed through the compact least-squares reconstruction, in which the constitutive relations are built by requiring the reconstructed polynomial and its spatial derivatives on the target cell to conserve the cell averages and the corresponding spatial derivatives on the face-neighboring cells. The large sparse linear system resulted by the compact least-squares reconstruction can be solved relatively efficient when it is coupled with the temporal discretization in the steady-state simulations. A number of test cases are presented to assess the performance of the high order compact least-squares rDG methods, which demonstrates their potential to be an alternative approach for the high order numerical simulations of steady-state compressible flows.
Energy Technology Data Exchange (ETDEWEB)
Sandford, M.T. II; Bradley, J.N.; Handel, T.G.
1996-06-01
Data embedding is a new steganographic method for combining digital information sets. This paper describes the data embedding method and gives examples of its application using software written in the C programming language. Sandford and Handel produced a computer program (BMPEMBED, Ver. 1.51, written for the IBM PC/AT or compatible, MS/DOS Ver. 3.3 or later) that implements data embedding in an application for digital imagery. Information is embedded into, and extracted from, Truecolor or color-pallet images in Microsoft® bitmap (.BMP) format. Hiding data in the noise component of a host, by means of an algorithm that modifies or replaces the noise bits, is termed 'steganography.' Data embedding differs markedly from conventional steganography, because it uses the noise component of the host to insert information with few or no modifications to the host data values or their statistical properties. Consequently, the entropy of the host data is affected little by using data embedding to add information. The data embedding method applies to host data compressed with transform, or 'lossy', compression algorithms, as for example ones based on the discrete cosine transform and wavelet functions. Analysis of the host noise generates a key required for embedding and extracting the auxiliary data from the combined data. The key is stored easily in the combined data. Images without the key cannot be processed to extract the embedded information. To provide security for the embedded data, one can remove the key from the combined data and manage it separately. The image key can be encrypted and stored in the combined data or transmitted separately as a ciphertext much smaller in size than the embedded data. The key size is typically ten to one hundred bytes, and it is derived from the host data by an analysis algorithm.
Hess, Robert V; Gardner, Clifford S
1947-01-01
By using the Prandtl-Glauert method that is valid for three-dimensional flow problems, the value of the maximum incremental velocity for compressible flow about thin ellipsoids at zero angle of attack is calculated as a function of the Mach number for various aspect ratios and thickness ratios. The critical Mach numbers of the various ellipsoids are also determined. The results indicate an increase in critical Mach number with decrease in aspect ratio which is large enough to explain experimental results on low-aspect-ratio wings at zero lift.
Mattsson, Thomas R.; Jones, Reese; Ward, Donald; Spataru, Catalin; Shulenburger, Luke; Benedict, Lorin X.
2015-06-01
Window materials are ubiquitous in shock physics and with high energy density drivers capable of reaching multi-Mbar pressures the use of LiF is increasing. Velocimetry and temperature measurements of a sample through a window are both influenced by the assumed index of refraction and thermal conductivity, respectively. We report on calculations of index of refraction using the many-body theory GW and thermal ionic conductivity using linear response theory and model potentials. The results are expected to increase the accuracy of a broad range of high-pressure shock- and ramp compression experiments. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
International Nuclear Information System (INIS)
Uchibori, Akihiro; Ohshima, Hiroyuki; Watanabe, Akira
2010-01-01
SERAPHIM is a computer program for the simulation of compressible multiphase flow involving the sodium-water chemical reaction under a tube failure accident in a steam generator of sodium-cooled fast reactors. In this study, numerical analysis of highly underexpanded air jets into air or into water was performed as part of the validation of the SERAPHIM program. The multi-fluid model, the second-order TVD scheme and the HSMAC method accounting for compressibility were used in this analysis. Combining these numerical methods makes it possible to calculate multiphase flow including supersonic gaseous jets. In the case of the air jet into air, the calculated pressure, the shape of the jet and the location of the Mach disk agreed with existing experimental results. The effect of the difference scheme and the mesh resolution on the prediction accuracy was clarified through these analyses. The behavior of the air jet into water was also reproduced successfully by the proposed numerical method. (author)
Compressing Data Cube in Parallel OLAP Systems
Directory of Open Access Journals (Sweden)
Frank Dehne
2007-03-01
Full Text Available This paper proposes an efficient algorithm to compress the cubes during parallel data cube generation. This low-overhead compression mechanism provides block-by-block and record-by-record compression by using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for the Hilbert Space Filling Curve, a mechanism widely used in multi-dimensional indexing.
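The record-by-record tuple difference coding can be sketched as follows (hypothetical helper names; a real implementation would feed the small deltas to an entropy coder, and block-by-block coding would apply the same idea per block):

```python
def encode_block(records):
    """Tuple-difference-code a sorted block of data cube records.

    Store the first tuple in full, then only per-dimension differences;
    for sorted records the differences are small and compress well.
    """
    out = [list(records[0])]
    for prev, cur in zip(records, records[1:]):
        out.append([c - p for p, c in zip(prev, cur)])
    return out

def decode_block(encoded):
    """Rebuild the original records by accumulating the differences."""
    records = [list(encoded[0])]
    for deltas in encoded[1:]:
        records.append([p + d for p, d in zip(records[-1], deltas)])
    return records

block = [[1, 4, 100], [1, 5, 103], [2, 5, 103], [2, 6, 110]]
coded = encode_block(block)
assert decode_block(coded) == block   # record-by-record, lossless
```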
Computer calculations of compressibility of natural gas
Energy Technology Data Exchange (ETDEWEB)
Abou-Kassem, J.H.; Mattar, L.; Dranchuk, P.M
An alternative method for the calculation of pseudo reduced compressibility of natural gas is presented. The method is incorporated into the routines by adding a single FORTRAN statement before the RETURN statement. The method is suitable for computer and hand-held calculator applications. It produces the same reduced compressibility as other available methods but is computationally superior. Tabular definitions of coefficients and comparisons of predicted pseudo reduced compressibility using different methods are presented, along with appended FORTRAN subroutines. 7 refs., 2 tabs.
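The quantity being added to the routines is the pseudo-reduced compressibility derived from the z-factor. A sketch in Python rather than FORTRAN (the z-factor correlation is a caller-supplied placeholder here, not the appended subroutines, and the derivative is taken numerically):

```python
def pseudo_reduced_compressibility(z_of_ppr, p_pr, h=1e-6):
    """Pseudo-reduced compressibility: c_pr = 1/p_pr - (1/z) * dz/dp_pr.

    z_of_ppr stands in for whatever z-factor correlation the routines use;
    here it is any callable z(p_pr), differentiated by central differences.
    """
    z = z_of_ppr(p_pr)
    dz_dppr = (z_of_ppr(p_pr + h) - z_of_ppr(p_pr - h)) / (2 * h)
    return 1.0 / p_pr - dz_dppr / z

# Sanity check: for an ideal gas (z = 1), c_pr reduces to 1 / p_pr.
c = pseudo_reduced_compressibility(lambda p: 1.0, 2.0)
```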
Efficient transmission of compressed data for remote volume visualization.
Krishnan, Karthik; Marcellin, Michael W; Bilgin, Ali; Nadar, Mariappan S
2006-09-01
One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, in so far as possible, compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available, with quality fading away from the VOI. Second, we present a prioritization that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type. The server is now capable of reordering the same compressed data file on the fly to serve data packets prioritized as per the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times and interactivity. We also present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications with the server maintaining the compressed volume data, to be browsed by a client with a low bandwidth constraint.
Wavelet compression algorithm applied to abdominal ultrasound images
International Nuclear Information System (INIS)
Lin, Cheng-Hsun; Pan, Su-Feng; LU, Chin-Yuan; Lee, Ming-Che
2006-01-01
We sought to investigate acceptable compression ratios of lossy wavelet compression on 640 x 480 x 8 abdominal ultrasound (US) images. We acquired 100 abdominal US images with normal and abnormal findings from the view station of a 932-bed teaching hospital. The US images were then compressed at quality factors (QFs) of 3, 10, 30, and 50, following the outcomes of a pilot study. This was equal to average compression ratios of 4.3:1, 8.5:1, 20:1 and 36.6:1, respectively. Four objective measurements were carried out to examine and compare the image degradation between original and compressed images. Receiver operating characteristic (ROC) analysis was also introduced for subjective assessment. Five experienced and qualified radiologists, as reviewers blinded to the corresponding pathological findings, analysed 400 randomly ordered paired images on two 17-inch thin film transistor/liquid crystal display (TFT/LCD) monitors. At ROC analysis, the average area under the curve (Az) for abdominal US images was 0.874 at the ratio of 36.6:1. The compressed image size was only 2.7% of the original US image at this ratio. The objective parameters showed that the higher the mean squared error (MSE) or root mean squared error (RMSE) values, the poorer the image quality, whereas higher signal-to-noise ratio (SNR) or peak signal-to-noise ratio (PSNR) values indicated better image quality. The average RMSE and PSNR at 36.6:1 for US were 4.84 ± 0.14 and 35.45 dB, respectively. This finding suggests that, on the basis of this patient sample, wavelet compression of abdominal US to a ratio of 36.6:1 did not adversely affect diagnostic performance or evaluation error in radiologists' interpretation so as to risk affecting diagnosis.
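The objective measurements named in the abstract (RMSE and PSNR) are standard image-quality metrics. A minimal sketch, assuming 8-bit pixel data flattened into flat sequences; real evaluations would operate on 2-D arrays:

```python
import math

def rmse(original, compressed):
    """Root mean squared error between two equal-length pixel sequences."""
    n = len(original)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(original, compressed)) / n)

def psnr(original, compressed, max_val=255):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(original, compressed)
    return float('inf') if e == 0 else 20 * math.log10(max_val / e)
```

As the abstract notes, lower RMSE and higher PSNR both indicate less degradation from compression.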
Kulinowski, Piotr; Woyna-Orlewicz, Krzysztof; Obrał, Jadwiga; Rappen, Gerd-Martin; Haznar-Garbacz, Dorota; Węglarz, Władysław P; Jachowicz, Renata; Wyszogrodzka, Gabriela; Klaja, Jolanta; Dorożyński, Przemysław P
2016-02-29
The purpose of the research was to investigate the effect of the manufacturing process of controlled release hydrophilic matrix tablets on their hydration behavior, internal structure and drug release. Direct compression (DC) quetiapine hemifumarate matrices and matrices made of powders obtained by dry granulation (DG) and high shear wet granulation (HS) were prepared. They had the same quantitative composition and they were evaluated using X-ray microtomography, magnetic resonance imaging and biorelevant stress test dissolution. Principal results concerned matrices after 2 h of hydration: (i) a layered structure of the DC and DG hydrated tablets, with magnetic resonance image intensity decreasing towards the center of the matrix, was observed, while in HS matrices a layer of lower intensity appeared in the middle of the hydrated part; (ii) the DC and DG tablets retained their core and consequently exhibited higher resistance to the physiological stresses during simulation of small intestinal passage than the HS formulation. Compared to DC, HS granulation changed the properties of the matrix in terms of hydration pattern and resistance to stress in the biorelevant dissolution apparatus. Dry granulation did not change these properties: similar hydration patterns and dissolution in biorelevant conditions were observed for DC and DG matrices. Copyright © 2015 Elsevier B.V. All rights reserved.
Suzuki, Masao; Aiba, Masayuki; Takahashi, Noriyuki; Ota, Satoru; Okada, Shigenori
In a magnetically levitated transportation (MAGLEV) system, a huge number of ground coils will be required because they must be laid along the whole line. Therefore, stable performance and reduced cost are essential requirements for ground coil development. On the other hand, because the magnetic field changes when the superconducting magnet passes by, an eddy current is generated in the conductor of the ground coil and results in energy loss. The loss not only increases the magnetic resistance for the running train but also raises the ground coil temperature. Therefore, the reduction of the eddy current loss is extremely important. This study examined ground coils in which both the eddy current loss and the temperature increase were small. Furthermore, a quantitative comparison of the eddy current loss of various magnet wire samples was performed by bench test. On the basis of the comparison, a round twisted wire having low eddy current loss was selected as an effective ground coil material. In addition, ground coils were manufactured on a trial basis. A favorable outlook for improving the size accuracy of the winding coil and the uneven thickness of the molded resin was obtained, without reducing the insulation strength between the coil layers, by applying compression molding after winding.
Considerations and Algorithms for Compression of Sets
DEFF Research Database (Denmark)
Larsson, Jesper
We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel compression algorithm that allows transparent incorporation of various estimates for the probability distribution. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.
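As background to the problem size involved: an unordered set of k distinct b-bit strings can, in the absence of statistical information, be coded in no fewer than log2 C(2^b, k) bits. A sketch of that bound (a baseline, not the paper's algorithm):

```python
import math

def set_code_length_bits(universe_bits, k):
    """Information-theoretic lower bound, in whole bits, for coding an
    unordered set of k distinct elements drawn from a universe of
    2**universe_bits values: ceil(log2 C(2^b, k))."""
    n = 2 ** universe_bits
    return math.ceil(math.log2(math.comb(n, k)))
```

Listing the k elements naively costs k*b bits; the bound above is smaller because the order of the elements carries no information, which is exactly the redundancy set compression exploits.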
DEFF Research Database (Denmark)
Hansen, Troels Vejle; Kim, Oleksiy S.; Breinbjerg, Olav
2014-01-01
For a spherical antenna exciting any arbitrary spherical mode, we derive exact closed-form expressions for the dissipated power and stored energy inside (and outside) the lossy magneto-dielectric spherical core, as well as the radiated power, radiation efficiency, and thus the radiation quality factor. An increasing magnetic loss tangent initially leads to a decreasing radiation quality factor, but in the limit of a perfect magnetic conductor (PMC) core the dissipated power tends to zero and the radiation quality factor reaches the fundamental Chu lower bound.
DEFF Research Database (Denmark)
Pedersen, Jesper Goor; Xiao, Sanshui; Mortensen, Niels Asger
2008-01-01
Slow-light enhanced absorption in liquid-infiltrated photonic crystals has recently been proposed as a route to compensate for the reduced optical path in typical lab-on-a-chip systems for bio-chemical sensing applications. A simple perturbative expression has been applied to ideal structures composed of lossless dielectrics. In this work we study the enhancement in structures composed of lossy dielectrics such as a polymer. For this particular sensing application we find that the material loss has an unexpectedly limited drawback and, surprisingly, it may even help to increase the bandwidth.
Streaming Compression of Hexahedral Meshes
Energy Technology Data Exchange (ETDEWEB)
Isenburg, M; Courbet, C
2010-02-03
We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.
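The finalization idea above — releasing a vertex's storage as soon as the stream guarantees it will never be referenced again — can be sketched with a hypothetical record format; the actual coder's data structures and entropy coding are not reproduced here.

```python
def stream_process(stream):
    """Consume an interleaved mesh stream, keeping in memory only the
    vertices that may still be referenced.  Hypothetical record format:
      ('v', vid, coords)  -- vertex arrives
      ('h', vids)         -- hexahedron referencing eight vertex ids
      ('final', vid)      -- finalization tag: vid is never used again
    Returns (cells seen, peak live vertices, live vertices at end)."""
    live = {}                    # vertices currently held in memory
    peak = cells = 0
    for rec in stream:
        if rec[0] == 'v':
            live[rec[1]] = rec[2]
        elif rec[0] == 'h':
            cells += 1           # a real coder would encode rec[1] here
        else:
            live.pop(rec[1], None)   # release storage immediately
        peak = max(peak, len(live))
    return cells, peak, len(live)
```

For a well-finalized stream, the peak count stays a small fraction of the total vertex count, which is the memory claim the abstract makes.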
Chang, Yin-Jung; Lai, Chi-Sheng
2013-09-01
The mismatch in film thickness and incident angle between reflectance and transmittance extrema due to the presence of lossy film(s) is investigated toward the maximum transmittance design in the active region of solar cells. Using a planar air/lossy film/silicon double-interface geometry illustrates important and quite opposite mismatch behaviors associated with TE and TM waves. In a typical thin-film CIGS solar cell, mismatches contributed by TM waves in general dominate. The angular mismatch is at least 10° in about 37%-53% of the spectrum, depending on the thickness combination of all lossy interlayers. The largest thickness mismatch of a specific interlayer generally increases with the thickness of the layer itself. Antireflection coating designs for solar cells should therefore be optimized in terms of the maximum transmittance into the active region, even if the corresponding reflectance is not at its minimum.
DNABIT Compress – Genome compression algorithm
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-01
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our ...
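The core idea of assigning binary bits to DNA bases can be illustrated with plain 2-bit packing, which already reduces a 4-letter alphabet from 8 bits to 2 bits per base. This is a generic sketch, not the paper's segment-based DNABIT scheme.

```python
CODE = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

def pack_dna(seq):
    """Pack a DNA string into bytes, 4 bases per byte (2 bits each).
    A real tool would also store the sequence length so that a final
    partial byte can be decoded unambiguously."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | CODE[base]
        out.append(byte)
    return bytes(out)
```

Eight bases fit in two bytes instead of eight, a fixed 4:1 reduction before any statistical coding of repeats is applied.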
Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics
Directory of Open Access Journals (Sweden)
Daniel Laney
2014-01-01
Full Text Available This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
Context-Aware Image Compression.
Directory of Open Access Journals (Sweden)
Jacky C K Chan
We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.
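The underlying intuition — spend the sample budget on information-rich regions rather than uniformly — can be caricatured with gradient-based sample selection. This is a crude stand-in for illustration only, not the warped-stretch transform itself.

```python
def adaptive_downsample(signal, budget):
    """Keep `budget` samples of a 1-D signal, preferring points with
    large local change (information-rich regions); the first sample is
    always kept as an anchor.  Returns (index, value) pairs."""
    scores = [abs(signal[i] - signal[i - 1]) for i in range(1, len(signal))]
    keep = sorted(range(1, len(signal)),
                  key=lambda i: scores[i - 1], reverse=True)[:budget - 1]
    idx = sorted({0, *keep})
    return [(i, signal[i]) for i in idx]
```

Uniform downsampling at the same budget would miss a sharp edge between grid points; the adaptive rule lands a sample exactly on it.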
Chen, Yibo; Chanet, Jean-Pierre; Hou, Kun-Mean; Shi, Hongling; de Sousa, Gil
2015-08-10
In recent years, IoT (Internet of Things) technologies have seen great advances, particularly the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL), which provides a powerful and flexible routing framework that can be applied in a variety of application scenarios. In this context, playing an important role in IoT, Wireless Sensor Networks (WSNs) can utilize RPL to design efficient routing protocols for a specific application to increase the ubiquity of networks with resource-constrained WSN nodes that are low-cost and easy to deploy. In this article, our work starts with a description of Agricultural Low-power and Lossy Networks (A-LLNs) complying with the LLN framework, and clarifies the requirements of this application-oriented routing solution. After a brief review of existing optimization techniques for RPL, our contribution is dedicated to a Scalable Context-Aware Objective Function (SCAOF) that can adapt RPL to the environmental monitoring of A-LLNs, through combining energy-aware, reliability-aware, robustness-aware and resource-aware contexts according to the composite routing metrics approach. The correct behavior of this enhanced RPL version (RPAL) was verified by performance evaluations in both simulation and field tests. The obtained experimental results confirm that SCAOF can deliver the desired advantages in network lifetime extension, and high reliability and efficiency in different simulation scenarios and hardware testbeds.
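A composite objective function of the kind described — combining energy-aware and reliability-aware contexts into a single parent-selection rank — can be sketched as a weighted sum. The weights, inputs, and function name here are illustrative assumptions, not SCAOF's actual metric composition.

```python
def composite_rank(etx, residual_energy, max_energy, w_etx=0.6, w_energy=0.4):
    """Lower rank is better: a weighted mix of link quality (expected
    transmission count, ETX) and energy depletion of the candidate
    parent.  Weights are illustrative, not the paper's values."""
    energy_cost = 1.0 - residual_energy / max_energy
    return w_etx * etx + w_energy * energy_cost
```

In an RPL-style network each node would evaluate this rank for every candidate parent and attach to the minimum, so nearly-depleted nodes are routed around even when their links are good, which is how composite metrics extend network lifetime.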
Lossy and retardation effects on the localization of EM waves using a left-handed medium slab
International Nuclear Information System (INIS)
Cheng Qiang; Cui Tiejun; Lu Weibing
2005-01-01
It has been shown that a left-handed medium (LHM) slab with negative permittivity -ε0 and negative permeability -μ0 can be used to localize electromagnetic waves [T.J. Cui et al., Phys. Rev. B (January 2005)]. If two current sources with the same amplitudes and opposite directions are placed at the perfect-imaging points of the LHM slab, we have shown that all electromagnetic waves are completely confined in the region between the two sources. In this Letter, a slightly mismatched and lossy LHM lens is studied, where both the relative permittivity and permeability differ slightly from -1, and the lossy and retardation effects on the electromagnetic-wave localization are investigated. Due to the loss and retardation, strong surface waves exist along the slab surfaces. When the two current sources are located symmetrically at the perfect-imaging points, we show that the electromagnetic waves are nearly confined in the region between the two sources, and little energy is radiated outside if the retardation and loss are small. When the loss becomes larger, more energy flows out of the region. Numerical experiments are given to illustrate the above conclusions.
Directory of Open Access Journals (Sweden)
Shun Takahashi
2014-01-01
A computational code adopting immersed boundary methods for compressible gas-particle multiphase turbulent flows is developed and validated through two-dimensional numerical experiments. The turbulent flow region is modeled by a second-order pseudo skew-symmetric form with minimum dissipation, while the monotone upstream-centered scheme for conservation laws (MUSCL scheme) is employed in the shock region. The present scheme is applied to the flow around a two-dimensional cylinder under various freestream Mach numbers. Compared with the original MUSCL scheme, the minimum dissipation enabled by the pseudo skew-symmetric form significantly improves the resolution of the vortex generated in the wake while retaining the shock capturing ability. In addition, the resulting aerodynamic force is significantly improved. Also, the present scheme is successfully applied to moving two-cylinder problems.
Advances in compressible turbulent mixing
International Nuclear Information System (INIS)
Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.
1992-01-01
This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately
Douvanas, Alexandros; Koulouglioti, Christina; Kalafati, Maria
2018-03-01
The quality of chest compressions (CC) delivered during neonatal and infant cardiopulmonary resuscitation (CPR) is identified as the most important factor in increasing the survival rate without major neurological deficit. The objective of the study was to systematically review all available studies from 2010 to 2015 that compared the two different techniques of hand placement in infant and neonatal resuscitation, and to highlight which method is more effective. A review of the literature was performed using a variety of medical databases, including the Cochrane, MEDLINE, and SCOPUS electronic databases. The following MeSH terms were used in the search: infant, neonatal, CPR, CC, two-thumb (TT) technique/method, two-finger (TF) technique/method, rescuer fatigue, thumb/finger position/placement, as well as combinations of these. Ten studies met the inclusion criteria: nine observational studies and a randomized controlled trial. All providers performed either continuous TF or TT technique CCs, and the majority of CPR performances took place on an infant training manikin. The majority of the studies suggest that the TT method is more useful for infant and neonatal resuscitation than the TF method.
Toroody, Ahmad Bahoo; Abaei, Mohammad Mahdy; Gholamnia, Reza
2016-12-01
Risk assessment can be classified into two broad categories: traditional and modern. This paper contrasts the functional resonance analysis method (FRAM), as a modern approach, with fault tree analysis (FTA), as a traditional method, for assessing the risks of a complex system. The methodology by which the risk assessment is carried out is presented for each approach. In addition, a FRAM network is executed with regard to the nonlinear interaction of human and organizational levels to assess the safety of technological systems. The methodology is implemented for lifting structures in deep offshore operations. The main finding of this paper is that the combined application of FTA and FRAM during risk assessment can provide complementary perspectives and may contribute to a more comprehensive understanding of an incident. Finally, it is shown that coupling a FRAM network with a suitable quantitative method will result in a plausible outcome for a predefined accident scenario.
Czech Academy of Sciences Publication Activity Database
Feireisl, Eduard; Hošek, Radim; Maltese, D.; Novotný, A.
2017-01-01
Roč. 51, č. 1 (2017), s. 279-319 ISSN 0764-583X EU Projects: European Commission(XE) 320078 - MATHEF Institutional support: RVO:67985840 Keywords: Navier-Stokes system * finite element numerical method * finite volume numerical method Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.727, year: 2016 http://www.esaim-m2an.org/articles/m2an/abs/2017/01/m2an150157/m2an150157.html
International Nuclear Information System (INIS)
1998-10-01
DoD adopted. This test method is under the jurisdiction of ASTM Committee C-9 on Concrete and Concrete Aggregates and is the direct responsibility of Subcommittee C09.41 on Concrete for Radiation Shielding. Current edition approved Feb. 10, 1986 and published October 1998. Originally published as C 942-81. Last previous edition was C 942-86(1991)
Comparing biological networks via graph compression
Directory of Open Access Journals (Sweden)
Hayashida Morihiro
2010-09-01
Background: Comparison of various kinds of biological data is one of the main problems in bioinformatics and systems biology. Data compression methods have been applied to comparison of large sequence data and protein structure data. Since it is still difficult to compare global structures of large biological networks, it is reasonable to try to apply data compression methods to comparison of biological networks. In existing compression methods, the uniqueness of compression results is not guaranteed because there is some ambiguity in the selection of overlapping edges. Results: This paper proposes novel efficient methods, CompressEdge and CompressVertices, for comparing large biological networks. In the proposed methods, an original network structure is compressed by iteratively contracting identical edges and sets of connected edges. Then, the similarity of two networks is measured by the compression ratio of the concatenated networks. The proposed methods are applied to comparison of metabolic networks of several organisms (H. sapiens, M. musculus, A. thaliana, D. melanogaster, C. elegans, E. coli, S. cerevisiae, and B. subtilis) and are compared with an existing method. These results suggest that our methods can efficiently measure the similarities between metabolic networks. Conclusions: Our proposed algorithms, which compress node-labeled networks, are useful for measuring the similarity of large biological networks.
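The similarity-by-compression-ratio idea above can be illustrated with an off-the-shelf compressor over serialized edge lists. This sketch uses zlib as a stand-in for the paper's CompressEdge/CompressVertices contractions; the normalization follows the general normalized-compression-distance pattern.

```python
import zlib

def _c(data):
    """Compressed size of a byte string at maximum zlib effort."""
    return len(zlib.compress(data, 9))

def compression_similarity(net_a, net_b):
    """Distance between two networks given as serialized edge-list
    strings: how much extra the concatenation costs to compress beyond
    the cheaper network alone.  Smaller means more shared structure."""
    a, b = net_a.encode(), net_b.encode()
    return (_c(a + b) - min(_c(a), _c(b))) / max(_c(a), _c(b))
```

Concatenating a network with itself compresses almost for free, so self-distance is near zero, while unrelated networks contribute nearly independent content and score higher.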
JPEG and wavelet compression of ophthalmic images
Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.
1999-05-01
This study was designed to determine the degree and method of digital image compression needed to produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different sizes. Image quality was then assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet compressed images produced less RMS error than JPEG compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent of the original size for JPEG and 1.7 percent for wavelet compression before fine detail was lost, or image quality was too poor to make a reliable diagnosis.
Directory of Open Access Journals (Sweden)
M. Peng
2017-07-01
The multi-source DEMs generated using the images acquired in the descent and landing phase and after landing contain supplementary information, and this makes it possible and beneficial to produce a higher-quality DEM by fusing the multi-scale DEMs. The proposed fusion method consists of three steps. First, the source DEMs are split into small DEM patches, and the patches are classified into a few groups by local density peaks clustering. Next, the grouped DEM patches are used for sub-dictionary learning by stochastic coordinate coding, and the trained sub-dictionaries are combined into a dictionary for sparse representation. Finally, the simultaneous orthogonal matching pursuit (SOMP) algorithm is used to achieve the sparse representation. We use real DEMs generated from Chang'e-3 descent images and navigation camera (Navcam) stereo images to validate the proposed method. Through the experiments, we reconstructed a seamless DEM with the highest resolution and the largest spatial coverage among the input data. The experimental results demonstrate the feasibility of the proposed method.
Saleem, M. Rehan; Ali, Ishtiaq; Qamar, Shamsul
2018-03-01
In this article, a reduced five-equation two-phase flow model is numerically investigated. The formulation of the model is based on the conservation and energy exchange laws. The model is non-conservative and the governing equations contain two equations for mass conservation, one for the overall momentum and one for the total energy. The fifth equation is the energy equation for one of the two phases; it includes a source term on the right-hand side for incorporating energy exchange between the two fluids in the form of mechanical and thermodynamical work. A Runge-Kutta discontinuous Galerkin finite element method is applied to solve the model equations. The main attractive features of the proposed method include its formal higher order accuracy, its nonlinear stability, its ability to handle complicated geometries, and its ability to capture sharp discontinuities or strong gradients in the solutions without producing spurious oscillations. The proposed method is robust and well suited for large-scale time-dependent computational problems. Several case studies of two-phase flows are presented. For validation and comparison of the results, the same model equations are also solved by using a staggered central scheme. It was found that the discontinuous Galerkin scheme produces better results than the staggered central scheme.
KungFQ: a simple and powerful approach to compress fastq files.
Grassi, Elena; Di Gregorio, Federico; Molineris, Ivan
2012-01-01
Nowadays storing data derived from deep sequencing experiments has become pivotal, and standard compression algorithms do not exploit their structure in a satisfying manner. A number of reference-based compression algorithms have been developed, but they are less adequate when approaching new species without fully sequenced genomes or nongenomic data. We developed a tool that takes advantage of fastq characteristics and encodes them in a binary format optimized to be further compressed with standard tools (such as gzip or lzma). The algorithm is straightforward and does not need any external reference file; it scans the fastq only once and has a constant memory requirement. Moreover, we added the possibility to perform lossy compression, losing some of the original information (IDs and/or qualities) but resulting in smaller files; it is also possible to define a quality cutoff under which corresponding base calls are converted to N. We achieve 2.82 to 7.77 compression ratios on various fastq files without losing information, and 5.37 to 8.77 losing IDs, which are often not used in common analysis pipelines. In this paper, we compare the algorithm performance with known tools, usually obtaining higher compression levels.
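The lossy quality-cutoff option described (base calls whose quality falls below a threshold become N) can be sketched directly. The function name and the Phred+33 quality encoding are assumptions for illustration, not KungFQ's actual interface.

```python
def mask_low_quality(seq, qual, cutoff, offset=33):
    """Replace bases whose Phred quality (ASCII-encoded with the given
    offset) falls below `cutoff` with 'N', mirroring the lossy option:
    low-confidence calls carry little information worth storing."""
    return ''.join(
        'N' if ord(q) - offset < cutoff else b
        for b, q in zip(seq, qual)
    )
```

Masking makes both the sequence and quality streams more uniform, so the downstream general-purpose compressor (gzip or lzma) achieves higher ratios on the binary-encoded records.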
Hayashi, Yoshihiro; Tsuji, Takahiro; Shirotori, Kaede; Oishi, Takuya; Kosugi, Atsushi; Kumada, Shungo; Hirai, Daijiro; Takayama, Kozo; Onuki, Yoshinori
2017-10-30
In this study, we evaluated the correlation between the response surfaces for the tablet characteristics of placebo and active pharmaceutical ingredient (API)-containing tablets. The quantities of lactose, cornstarch, and microcrystalline cellulose were chosen as the formulation factors. Ten tablet formulations were prepared. The tensile strength (TS) and disintegration time (DT) of tablets were measured as tablet characteristics. The response surfaces for TS and DT were estimated using a nonlinear response surface method incorporating multivariate spline interpolation, and were then compared with those of placebo tablets. A correlation was clearly observed for TS and DT of all APIs, although the value of the response surfaces for TS and DT was highly dependent on the type of API used. Based on this knowledge, the response surfaces for TS and DT of API-containing tablets were predicted from only two and four formulations using regression expression and placebo tablet data, respectively. The results from the evaluation of prediction accuracy showed that this method accurately predicted TS and DT, suggesting that it could construct a reliable response surface for TS and DT with a small number of samples. This technique assists in the effective estimation of the relationships between design variables and pharmaceutical responses during pharmaceutical development. Copyright © 2017 Elsevier B.V. All rights reserved.
A non-oscillatory energy-splitting method for the computation of compressible multi-fluid flows
Lei, Xin; Li, Jiequan
2018-04-01
This paper proposes a new non-oscillatory energy-splitting conservative algorithm for computing multi-fluid flows in the Eulerian framework. In comparison with existing multi-fluid algorithms in the literature, it is shown that the mass fraction model with the isobaric hypothesis is a plausible choice for designing numerical methods for multi-fluid flows. We then construct a conservative Godunov-based scheme with a high-order accurate extension using the generalized Riemann problem solver, through a detailed analysis of kinetic energy exchange when fluids are mixed under the hypothesis of isobaric equilibrium. Numerical experiments are carried out for the shock-interface interaction and shock-bubble interaction problems, which display the excellent performance of this type of scheme and demonstrate that nonphysical oscillations around material interfaces are substantially suppressed.
International Nuclear Information System (INIS)
Ramdan, R.D.; Jauhari, I.; Hasan, R.; Masdek, N.R. Nik
2008-01-01
This paper describes an implementation of a superplastic deformation method for the deposition of carbonated apatite (CAP) on the well-known titanium alloy Ti6Al4V. The deposition process was carried out using a high-temperature compression test machine, at a temperature of 775 °C and different strain rates, and conducted within the elastic region of the sample. Before the process, the titanium substrate was cryogenically treated in order to approach superplastic characteristics during the process. After the process, a thin film of CAP was created on the substrate, with thicknesses from 0.71 μm to 1.42 μm. The resulting film has a high density of CAP that completely covers the surface of the substrate. From the stress-strain chart, it can be observed that as the strain rate decreases, the area under the stress-strain chart also decreases. This influences the density of the CAP layer on the substrate: as this area decreases, the density of the CAP layer also decreases, as confirmed by X-ray diffraction characterization. In addition, since the resulting layer of CAP is in the form of a thin film, it did not alter the hardness of the substrate, as measured by the Vickers hardness test. Moreover, the resulting films also show good bonding strength, as the layer remained intact after a friction test against polishing cloth for 1 h.
Directory of Open Access Journals (Sweden)
S. Lamultree
2017-04-01
This paper presents a theoretical analysis of moving reference planes associated with unit cells of nonreciprocal lossy periodic transmission-line structures (NRLSPTLSs) by the equivalent bi-characteristic-impedance transmission line (BCITL) model. Applying the BCITL theory, only the equivalent BCITL parameters (characteristic impedances for waves propagating in forward and reverse directions, and the associated complex propagation constants) are of interest. An infinite NRLSPTLS is considered first by shifting a reference position of unit cells along the TLs of interest. Then, a semi-infinite terminated NRLSPTLS is investigated in terms of associated load reflection coefficients. It is found that the equivalent BCITL characteristic impedances of the original and shifted unit cells are mathematically related by a bilinear transformation, and the associated load reflection coefficients of both unit cells are likewise related by a bilinear transformation. However, the equivalent BCITL complex propagation constants remain unchanged. Numerical results are provided to show the validity of the proposed theoretical analysis.
Sun, Qilin
2017-04-01
High-resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Nowadays, all transient imaging methods suffer from low resolution or time-consuming mechanical scanning. We proposed a new method based on TCSPC and compressive sensing to achieve high-resolution transient imaging with a capture process of several seconds. A picosecond laser sends a series of equal-interval pulses while the synchronized SPAD camera's detecting gate window has a precise phase delay at each cycle. After capturing enough points, we are able to make up a whole signal. By inserting a DMD device into the system, we are able to modulate all the frames of data using binary random patterns to later reconstruct a super-resolution transient/3D image. Because the low fill factor of the SPAD sensor would make a compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We proposed a new CS reconstruction algorithm that simultaneously denoises measurements suffering from Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time. Furthermore, it is not easy to reconstruct a high-resolution image with only a single sensor, whereas an array just needs to reconstruct small patches from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how integration over the layers influences image quality, and our algorithm works well while the measurements suffer from non-trivial Poisson noise. It is a breakthrough in the areas of both transient imaging and compressive sensing.
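The generic sparse-recovery step underlying such a compressive sensing pipeline can be sketched with ISTA (iterative soft-thresholding). The dimensions, the plain Gaussian measurement matrix, and the least-squares data term (rather than the thesis's Poisson-aware objective) are simplifying assumptions for illustration only:

```python
# Minimal sketch of sparse reconstruction from compressive measurements
# using ISTA. The thesis uses a more elaborate Poisson-aware solver; this
# only illustrates the generic CS recovery idea.
import numpy as np

def ista(A, y, lam=0.01, iters=1000):
    """Minimise 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - A.T @ (A @ x - y) / L        # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)    # random measurement matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -0.7, 0.5]              # sparse scene
y = A @ x_true                                       # compressive measurements
x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true))                # small recovery error
```

With 40 measurements of a 3-sparse signal in 100 dimensions, the recovery is well inside the regime where l1 minimisation succeeds.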
International Nuclear Information System (INIS)
Saleh, K.
2012-01-01
This thesis deals with the Baer-Nunziato two-phase flow model. The main objective of this work is to propose some techniques to cope with phase-vanishing regimes, which produce important instabilities in the model and its numerical simulations. Through analysis and simulation methods using Suliciu relaxation approximations, we prove that in these regimes the solutions can be stabilised by introducing some extra dissipation of the total mixture entropy. In a first approach, called the Eulerian approach, the exact resolution of the relaxation Riemann problem provides an accurate entropy-satisfying numerical scheme, which turns out to be much more efficient in terms of CPU cost than the classical and very simple Rusanov scheme. Moreover, the scheme is proved to handle the vanishing-phase regimes with great stability. The scheme, first developed in 1D, is then extended to 3D and implemented in an industrial code developed by EDF. The second approach, called the acoustic splitting approach, considers a separation of fast acoustic waves from slow material waves. The objective is to avoid the resonance due to the interaction between these two types of waves, and to allow an implicit treatment of the acoustics, while material waves are explicitly discretized. The resulting scheme is very simple and allows phase vanishing to be handled simply. The originality of this work is the use of new dissipative closure laws for the interfacial velocity and pressure, in order to control the solutions of the Riemann problem associated with the acoustic step in the phase-vanishing regimes. (author)
International Nuclear Information System (INIS)
Falchieri, Davide; Gandolfi, Enzo; Masotti, Matteo
2004-01-01
This paper evaluates the performance of a wavelet-based compression algorithm applied to the data produced by the silicon drift detectors of the ALICE experiment at CERN. This compression algorithm is a general-purpose lossy technique; in other words, its application could prove useful even on a wide range of other data-reduction problems. In particular, the design targets relevant for our wavelet-based compression algorithm are the following: a high compression coefficient, a reconstruction error as small as possible, and a very limited execution time. Interestingly, the results obtained are quite close to the ones achieved by the algorithm implemented in the first prototype of the chip CARLOS, the chip that will be used in the silicon drift detector readout chain.
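The generic transform-threshold-reconstruct idea behind such lossy wavelet schemes can be illustrated with a single-level 1-D Haar transform. The actual wavelet, bit allocation, and coding used for the ALICE data are not specified in the abstract, so everything below is a toy stand-in:

```python
# Hedged sketch: transform data with a wavelet, drop small detail
# coefficients (the lossy step), and reconstruct with small error.
import numpy as np

def haar_forward(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # averages (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # details (high-pass)
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

x = np.array([4.0, 4.1, 4.0, 3.9, 8.0, 8.1, 8.0, 7.9])
a, d = haar_forward(x)
d[np.abs(d) < 0.2] = 0.0                   # lossy step: drop small details
x_rec = haar_inverse(a, d)
print(np.max(np.abs(x_rec - x)))           # reconstruction error stays small
```

Zeroed detail coefficients cost nothing to store, while the reconstruction error is bounded by the threshold, which is the trade-off the abstract's "high compression coefficient versus small reconstruction error" targets describe.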
Usha, Sruthi P; Gupta, Banshi D
2018-03-15
A lossy mode resonance (LMR) based sensor for urinary p-cresol testing on an optical fiber substrate is developed. The sensor probe fabrication includes dip coating of a nanocomposite layer of zinc oxide and molybdenum sulphide (ZnO/MoS2) over the unclad core of an optical fiber as the transducer layer, followed by a layer of molecularly imprinted polymer (MIP) as the recognition medium. The addition of molybdenum sulphide in the transducer layer increases the absorption of light in the medium, which enhances the LMR properties of zinc oxide, thereby increasing the conductivity and hence the sensitivity of the sensor. The sensor probe is characterized for p-cresol concentrations ranging from 0 µM (reference sample) to 1000 µM in artificially prepared urine. Optimizations of various probe fabrication parameters are carried out to bring out the sensor's optimal performance, with a sensitivity of 11.86 nm/µM and a limit of detection (LOD) of 28 nM. A two-order-of-magnitude improvement in LOD is obtained compared to the recently reported p-cresol sensor. The proposed sensor possesses a response time of 15 s, which is 8 times better than that reported in the literature utilizing an electrochemical method. Its response time is also better than the p-cresol sensor currently available on the market for the medical field. Thus, with a fast response and significant stability and repeatability, the proposed sensor holds practical implementation possibilities in the medical field. Further, the realization of the sensor probe on an optical fiber substrate adds remote sensing and online monitoring feasibilities. Copyright © 2017 Elsevier B.V. All rights reserved.
DEFF Research Database (Denmark)
Vilhjalmsson, Dadi; Appelros, Stefan; Toth, Ervin
2015-01-01
BACKGROUND: Compression anastomotic ring-locking procedure (CARP) is a novel procedure for creating colonic anastomoses. The surgical procedure allows perioperative quantification of the compression pressure between the intestinal ends within the anastomosis and postoperative monitoring ... -sided colonic resection. Time for evacuation of the anastomotic rings, perioperative compression pressure, and adverse effects were recorded. Postoperative blood samples were collected daily, and flexible sigmoidoscopy was performed 8-12 weeks after surgery to examine the anastomoses. RESULTS: Fourteen out ... device evacuated spontaneously in all patients by the natural route after a median of 10 days. Perioperative compression pressure ranged between 85 and 280 mBar (median 130 mBar). Flexible sigmoidoscopy revealed smooth anastomoses without signs of pathological inflammation or stenosis in all cases ...
Directory of Open Access Journals (Sweden)
Mustafa Kaplanoğlu
2013-01-01
Mullerian duct anomalies may cause obstetric complications, such as postpartum hemorrhage (PPH) and placental adhesion anomalies. Uterine compression sutures may be useful for controlling PPH (especially atony). In recent studies, uterine compression sutures have been used in placenta accreta. We report a case of PPH in a placenta accreta accompanied by a large septum, treated with a B-Lynch suture and an intrauterine gauze tampon.
Adaptive Methods for Compressible Flow
1994-03-01
... the labor-intensive task of generating acceptable surface triangulations. The purpose of this work is to demonstrate the advantages of integrating the CAD/CAM ...
A Compressive Superresolution Display
Heide, Felix; Gregson, James; Wetzstein, Gordon; Raskar, Ramesh; Heidrich, Wolfgang
2014-01-01
In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.
Microbunching and RF Compression
International Nuclear Information System (INIS)
Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.
2010-01-01
Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.
Mining compressing sequential problems
Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.
2012-01-01
Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and
Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan
2014-10-01
It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn at the outset. Second, online tracking algorithms often encounter the drift problem: as a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
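The data-independent compressive feature extraction described above can be sketched with a very sparse random matrix whose entries are in {-1, 0, +1} and mostly zero. The dimensions and sparsity parameter below are illustrative, not the paper's values:

```python
# Sketch of very sparse random projection: project high-dimensional image
# features to a low-dimensional space while roughly preserving geometry.
import numpy as np

rng = np.random.default_rng(1)
n, m = 10_000, 200                    # original and compressed dimensions
s = 100                               # only ~1/s of the entries are non-zero
signs = rng.choice([-1.0, 0.0, 1.0], size=(m, n),
                   p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
R = np.sqrt(s) * signs                # scaling keeps expected norms unchanged

x = rng.standard_normal(n)            # stand-in for a high-dim feature vector
y = (R @ x) / np.sqrt(m)              # compressed representation
ratio = np.linalg.norm(y) / np.linalg.norm(x)
print(round(ratio, 2))                # close to 1: geometry is preserved
```

Because the matrix is mostly zeros, the projection costs only ~n*m/s multiply-adds, which is what makes the per-frame feature extraction cheap enough for real-time tracking.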
Fast lossless compression via cascading Bloom filters.
Rozov, Roye; Shamir, Ron; Halperin, Eran
2014-01-01
Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space
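A minimal Bloom filter illustrates the encoding primitive that BARCODE builds on: membership tests with no false negatives and a tunable false-positive rate. The sizes and hash construction below are illustrative, not BARCODE's actual parameters:

```python
# Minimal Bloom filter sketch: k hash functions set/test k bit positions.
import hashlib

class BloomFilter:
    def __init__(self, n_bits: int, n_hashes: int):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8 + 1)

    def _positions(self, item: str):
        # Derive k positions by salting one cryptographic hash (illustrative).
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter(n_bits=10_000, n_hashes=4)
for read in ["ACGTAC", "TTGCAA", "GGGCAT"]:
    bf.add(read)
print("ACGTAC" in bf, "AAAAAA" in bf)  # stored read found; unseen read almost certainly not
```

Cascading several such filters, as the paper describes, lets later filters record the false positives of earlier ones, driving the overall error down while keeping the structure compact.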
Energy Technology Data Exchange (ETDEWEB)
Higgins, J.D.; Burger, P.A. [Colorado School of Mines, Golden, CO (United States); Yang, L.C. [Geological Survey, Denver, CO (United States)
1997-12-31
Study of the hydrologic system at Yucca Mountain, Nevada, requires extraction of pore-water samples from unsaturated tuff bedrock. Two generations of compression cells have been designed and tested for extracting representative, unaltered pore-water samples from unsaturated tuff cores. The one-dimensional compression cell has a maximum compressive stress rating of 552 MPa. Results from 86 tests show that the minimum degree of saturation for successful extraction of pore water was about 14% for non welded tuff and about 61% for densely welded tuff. The high-pressure, one-dimensional compression cell has a maximum compressive stress rating of 827 MPa. Results from 109 tests show that the minimum degree of saturation for successful extraction of pore water was about 7.5% for non welded tuff and about 34% for densely welded tuff. Geochemical analyses show that, in general, there is a decrease in ion concentration of pore waters as extraction pressures increase. Only small changes in pore-water composition occur during the one-dimensional extraction test.
Energy Technology Data Exchange (ETDEWEB)
Johnson, Terry A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bowman, Robert [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Barton [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Anovitz, Lawrence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jensen, Craig [Hawaii Hydrogen Carriers LLC, Honolulu, HI (United States)
2017-07-01
Conventional hydrogen compressors often contribute over half of the cost of hydrogen stations, have poor reliability, and have insufficient flow rates for a mature FCEV market. Fatigue associated with their moving parts, including cracking of diaphragms and failure of seals, leads to failure of conventional compressors, which is exacerbated by the repeated starts and stops expected at fueling stations. Furthermore, the conventional lubrication of these compressors with oil is generally unacceptable at fueling stations due to potential fuel contamination. Metal hydride (MH) technology offers a very good alternative to both conventional (mechanical) and newly developed (electrochemical, ionic liquid piston) methods of hydrogen compression. Advantages of MH compression include simplicity in design and operation, absence of moving parts, compactness, safety and reliability, and the possibility of utilizing waste industrial heat to power the compressor. Beyond conventional H2 supplies from pipelines or tanker trucks, another attractive scenario is on-site generation, pressurization and delivery of pure H2 at pressure (≥ 875 bar) for refueling vehicles at electrolysis, wind, or solar production facilities in distributed locations that are too remote or widely distributed for cost-effective bulk transport. MH hydrogen compression utilizes a reversible heat-driven interaction of a hydride-forming metal alloy with hydrogen gas to form the MH phase and is a promising process for hydrogen energy applications [1,2]. To deliver hydrogen continuously, each stage of the compressor must consist of multiple MH beds with synchronized hydrogenation and dehydrogenation cycles. Multistage pressurization allows achievement of greater compression ratios using reduced temperature swings compared to single-stage compressors. The objectives of this project are to investigate and demonstrate on a laboratory scale a two-stage MH hydrogen (H2) gas compressor with a
A Method to Detect AAC Audio Forgery
Directory of Open Access Journals (Sweden)
Qingzhong Liu
2015-08-01
Advanced Audio Coding (AAC), a standardized lossy compression scheme for digital audio designed to be the successor of the MP3 format, generally achieves better sound quality than MP3 at similar bit rates. While AAC is the default or standard audio format for many devices and AAC audio files may be presented as important digital evidence, authentication of these audio files is highly needed but relatively missing. In this paper, we propose a scheme to expose tampered AAC audio streams that are encoded at the same encoding bit rate. Specifically, we design a shift-recompression-based method to retrieve the differential features between the re-encoded audio stream at each shift and the original audio stream; a learning classifier is employed to recognize the different patterns of differential features of doctored forgery files and original (untouched) audio files. Experimental results show that our approach is very promising and effective in detecting same-bit-rate forgery of AAC audio streams. Our study also shows that shift-recompression-based differential analysis is very effective for detection of MP3 forgery at the same bit rate.
Compression for radiological images
Wilson, Dennis L.
1992-07-01
The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
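The DCT step at the heart of such schemes can be sketched as an 8x8 block transform followed by coefficient truncation. The quantization tables and entropy coding of the actual JPEG variant are omitted, and the smooth test block is illustrative:

```python
# Illustrative sketch: 8x8 block DCT, keep only the large coefficients,
# invert, and check the reconstruction error stays small.
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

C = dct_matrix(8)
block = np.outer(np.linspace(100, 120, 8), np.ones(8))  # smooth image block
coeffs = C @ block @ C.T                                 # 2-D DCT
kept = np.where(np.abs(coeffs) > 1.0, coeffs, 0.0)       # drop small coefficients
recon = C.T @ kept @ C                                   # inverse 2-D DCT
print(np.max(np.abs(recon - block)) < 1.0)               # True: error is small
```

On smooth regions, which dominate radiological images, almost all energy lands in a few low-frequency coefficients, so most coefficients can be dropped with negligible pointwise error.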
Compression of Probabilistic XML documents
Veldman, Irma
2009-01-01
Probabilistic XML (PXML) files resulting from data integration can become extremely large, which is undesired. For XML there are several techniques available to compress the document and since probabilistic XML is in fact (a special form of) XML, it might benefit from these methods even more. In
Compression of Probabilistic XML Documents
Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice
Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such uncertain DBMSs (UDBMSs) can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with as representative example the Probabilistic XML model (PXML) of [10,9]. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained with a combination of a PXML-specific technique with a rather simple generic DAG-compression technique.
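The generic DAG-compression idea mentioned above can be sketched in a few lines: identical subtrees of an XML-like tree are detected by structural hashing and stored only once. The tuple-based tree encoding is illustrative, not the paper's PXML format:

```python
# Sketch of DAG compression: equal subtrees are shared via a node table.

def dag_compress(tree):
    """tree: (tag, [children]). Returns (root_id, table) with shared subtrees."""
    table = {}
    def visit(node):
        tag, children = node
        key = (tag, tuple(visit(c) for c in children))   # canonical subtree key
        return table.setdefault(key, len(table))         # reuse or allocate id
    return visit(tree), table

doc = ("root", [("item", [("b", [])]),
                ("item", [("b", [])]),
                ("item", [("b", [])])])
root_id, table = dag_compress(doc)
print(len(table))   # 3 unique nodes instead of 7 tree nodes
```

Repetitive documents, exactly the kind PXML produces when possibilities duplicate subtrees, collapse to far fewer unique nodes than the original tree has.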
Compression of Short Text on Embedded Systems
DEFF Research Database (Denmark)
Rein, S.; Gühmann, C.; Fitzek, Frank
2006-01-01
The paper details a scheme for lossless compression of a short data series larger than 50 bytes. The method uses arithmetic coding and context modelling with a low-complexity data model. A data model that takes 32 kBytes of RAM already cuts the data size in half. The compression scheme just takes...
Mammography parameters: compression, dose, and discomfort
International Nuclear Information System (INIS)
Blanco, S.; Di Risio, C.; Andisco, D.; Rojas, R.R.; Rojas, R.M.
2017-01-01
Objective: To confirm the importance of compression in mammography and relate it to the discomfort expressed by the patients. Materials and methods: Two samples of 402 and 268 mammograms were obtained from two diagnostic centres that use the same mammographic equipment but different compression techniques. The patient age range was from 21 to 50 years old. (authors) [es]
Hardware compression using common portions of data
Chang, Jichuan; Viswanathan, Krishnamurthy
2015-03-24
Methods and devices are provided for data compression. Data compression can include receiving a plurality of data chunks, sampling at least some of the plurality of data chunks, extracting a common portion from a number of the plurality of data chunks based on the sampling, and storing a remainder of the plurality of data chunks in memory.
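The claim above can be sketched schematically: sample incoming chunks, find a shared portion (here simply a common prefix), store it once, and keep only the remainders. Real hardware would do this below the OS; the chunk contents and the prefix-based notion of "common portion" are illustrative assumptions:

```python
# Schematic sketch of common-portion extraction for chunk compression.
import os

def compress_chunks(chunks, sample_size=4):
    sample = chunks[:sample_size]                     # sample some chunks
    common = os.path.commonprefix(sample)             # shared portion
    flags = [c.startswith(common) for c in chunks]    # which chunks matched
    remainders = [c[len(common):] if f else c
                  for f, c in zip(flags, chunks)]
    return common, flags, remainders

def decompress_chunks(common, flags, remainders):
    return [common + r if f else r for f, r in zip(flags, remainders)]

chunks = ["HDR01-payloadA", "HDR01-payloadB", "HDR01-payloadC", "HDR01-x", "misc"]
common, flags, rem = compress_chunks(chunks)
assert decompress_chunks(common, flags, rem) == chunks   # lossless round trip
print(common)  # HDR01-
```

Storing the common portion once and flagging which chunks reference it is what saves memory when many chunks share headers or other boilerplate.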
Energy Technology Data Exchange (ETDEWEB)
Grohs, J.G.; Krepler, P. [Orthopaedische Klinik, Universitaet Wien (Austria)
2004-03-01
Minimally invasive stabilization represents a new alternative for the treatment of osteoporotic compression fractures. Vertebroplasty and balloon kyphoplasty are two methods to enhance the strength of osteoporotic vertebral bodies by means of cement application. Vertebroplasty is the older and technically easier method. Balloon kyphoplasty is the newer and more expensive method, which not only improves pain but also restores the sagittal profile of the spine. By balloon kyphoplasty the height of 101 fractured vertebral bodies could be increased up to 90% and the wedge decreased from 12 to 7 degrees. Pain was reduced from 7.2 to 2.5 points. The Oswestry disability index decreased from 60 to 26 points. These effects persisted over a period of two years. Cement leakage occurred in only 2% of vertebral bodies. Fractures of adjacent vertebral bodies were found in 11%. Good preinterventional diagnostics and intraoperative imaging are necessary to make balloon kyphoplasty a successful application. (orig.) [German original, translated: Minimally invasive stabilization is an alternative to the previous treatment of osteoporotic vertebral fractures. Vertebroplasty and balloon kyphoplasty are two procedures for restoring the strength of vertebral bodies after osteoporotic compression fractures by introducing bone cement. Vertebroplasty is the older, technically simpler and cheaper technique, but is regularly accompanied by cement leakage. Balloon kyphoplasty is the newer, more cost-intensive technology, with which, apart from pain reduction, restoration of the sagittal profile of the spine can also be pursued. With balloon kyphoplasty, the height of 101 fractured vertebral bodies was raised to almost 90% of the target value and the local kyphosis reduced from 12 to 7. Pain, measured on a 10-point scale, was reduced from 7.2 to 2.5. The Oswestry disability ...]
Stress analysis of shear/compression test
International Nuclear Information System (INIS)
Nishijima, S.; Okada, T.; Ueno, S.
1997-01-01
Stress analysis has been performed, by means of the finite element method, on glass fiber reinforced plastics (GFRP) subjected to combined shear and compression stresses. Two types of experimental setup were analyzed, namely the parallel and series methods, in which the specimens were compressed by tilted jigs that enable the combined stresses to be applied to the specimen. A modified Tsai-Hill criterion was employed to judge failure under the combined stresses, that is, the shear strength under compressive stress. Different failure envelopes were obtained for the two setups. In the parallel system the shear strength first increased with compressive stress and then decreased. On the contrary, in the series system the shear strength decreased monotonically with compressive stress. The difference is caused by the different stress distributions due to the different constraint conditions. The basic parameters which control failure under the combined stresses will be discussed.
Nonstop lossless data acquisition and storing method for plasma motion images
International Nuclear Information System (INIS)
Nakanishi, Hideya; Ohsuna, Masaki; Kojima, Mamoru; Nonomura, Miki; Nagayama, Yoshio; Kawahata, Kazuo; Imazu, Setsuo; Okumura, Haruhiko
2007-01-01
Plasma diagnostic data analysis often requires the original raw data as they are, in other words, at the same frame rate and resolution as the CCD camera sensor. As a non-interlaced VGA camera typically generates an over 70 MB/s video stream, usual frame grabber cards apply a lossy compression encoder, such as MPEG-1/-2 or MPEG-4, to drastically lessen the bit rate. In this study, a new approach, which makes it possible to acquire and store such a wideband video stream without any quality reduction, has been successfully achieved. Simultaneously, real-time video streaming is even possible at the original frame rate. To minimise the exclusive access time in every data store, a directory structure is adopted that holds every frame as a separate file, instead of one long consecutive file. The popular 'zip' archive method improves the portability of data files; however, JPEG-LS image compression is applied inside, replacing zip's intrinsic deflate/inflate algorithm, which performs less well on image data. (author)
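The frame-per-file archive layout described above can be sketched with the standard zip container: each frame becomes its own losslessly compressed archive member, so any single frame can be read back without decoding the whole stream. Deflate stands in here for the JPEG-LS codec the paper substitutes, and the frame contents and naming scheme are illustrative:

```python
# Sketch: one losslessly-compressed file per frame inside a single archive,
# giving random access to any frame without decoding the others.
import io, zipfile

frames = [bytes([i]) * 1000 for i in range(5)]        # stand-in 1 kB frames
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    for i, frame in enumerate(frames):
        z.writestr(f"shot/frame{i:06d}.raw", frame)   # one member per frame

with zipfile.ZipFile(buf) as z:                       # random access: frame 3
    frame3 = z.read("shot/frame000003.raw")
print(frame3 == frames[3])  # True: the frame round-trips losslessly
```

Because each frame is an independent member, readers pay only the decompression cost of the frames they actually touch, which is what makes real-time streaming and post-hoc analysis coexist.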
Deriving average soliton equations with a perturbative method
International Nuclear Information System (INIS)
Ballantyne, G.J.; Gough, P.T.; Taylor, D.P.
1995-01-01
The method of multiple scales is applied to periodically amplified, lossy media described by either the nonlinear Schroedinger (NLS) equation or the Korteweg--de Vries (KdV) equation. An existing result for the NLS equation, derived in the context of nonlinear optical communications, is confirmed. The method is then applied to the KdV equation and the result is confirmed numerically
Limiting density ratios in piston-driven compressions
International Nuclear Information System (INIS)
Lee, S.
1985-07-01
By using global energy and pressure balance applied to a shock model it is shown that for a piston-driven fast compression, the maximum compression ratio is not dependent on the absolute magnitude of the piston power, but rather on the power pulse shape. Specific cases are considered and a maximum density compression ratio of 27 is obtained for a square-pulse power compressing a spherical pellet with specific heat ratio of 5/3. Double pulsing enhances the density compression ratio to 1750 in the case of linearly rising compression pulses. Using this method further enhancement by multiple pulsing becomes obvious. (author)
Isostatic compression of buffer blocks. Middle scale
International Nuclear Information System (INIS)
Ritola, J.; Pyy, E.
2012-01-01
Manufacturing of buffer components using the isostatic compression method was studied at small scale in 2008 (Laaksonen 2010). These tests included manufacturing buffer blocks from different bentonite materials at different compression pressures. Isostatic mould technology was also tested, along with different methods of filling the mould, such as vibration and partial vacuum, as well as stepwise compression of the blocks. The development of manufacturing techniques continued with small-scale (30 %) blocks (diameter 600 mm) in 2009, in a separate project: Isostatic compression, manufacturing and testing of small scale (D = 600 mm) buffer blocks. Research on the isostatic compression method continued in 2010 in a project aimed at testing and examining the isostatic manufacturing process of buffer blocks at 70 % scale (block diameter 1200 to 1300 mm), with the aim of continuing in 2011 with full-scale blocks (diameter 1700 mm). A total of nine bentonite blocks were manufactured at 70 % scale, of which four were ring-shaped and the rest cylindrical. It is currently not possible to manufacture full-scale blocks, because no sufficiently large isostatic press is available; however, such a compression unit is expected to become available in the near future. The test results of bentonite blocks produced with the isostatic pressing method at different presses and at different sizes suggest that the technical characteristics, for example bulk density and strength values, are somewhat independent of the size of the block, and that the blocks have fairly homogeneous characteristics. Water content and compression pressure are the two most important properties determining the characteristics of the compressed blocks. By adjusting these two properties it is fairly easy to produce blocks of a desired density. The compression pressure commonly used in the manufacturing of bentonite blocks is 100 MPa, which compresses bentonite to approximately
Highly Efficient Compression Algorithms for Multichannel EEG.
Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda
2018-05-01
The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires the development of efficient and robust compression algorithms. In this paper, different lossless compression techniques for single- and multichannel EEG data, including Huffman coding, arithmetic coding, Markov prediction, linear prediction, context-based error modeling, multivariate autoregression (MVAR), and a low-complexity bivariate model, are examined and their performances compared. Furthermore, a high-performance compression algorithm, termed general MVAR, and a modified context-based error model for multichannel EEG are proposed. The resulting compression algorithm achieves a relative compression ratio of 70.64% on average, higher than existing methods, and in some cases up to 83.06%. The proposed methods are designed to compress large amounts of multichannel EEG data efficiently so that data storage and transmission bandwidth can be used effectively. They have been validated using several experimental multichannel EEG recordings from different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean-square distortion, peak signal-to-noise ratio, root-mean-square error, and cross-correlation, show their superiority over state-of-the-art compression methods.
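The predictive stage common to several of the techniques compared above can be sketched with a minimal first-order linear predictor. This is an illustrative stand-in for the Markov/linear/MVAR predictors in the paper, not the paper's algorithm: each sample is predicted from its predecessor and only the residual is kept, so that a subsequent entropy coder (Huffman, arithmetic) sees values clustered near zero.

```python
def predict_residuals(samples):
    """First-order linear predictor: keep the first sample verbatim,
    then store only sample-to-sample differences.  Residuals of slowly
    varying signals such as EEG concentrate near zero, which is what
    makes the later entropy-coding stage effective."""
    residuals = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        residuals.append(cur - prev)
    return residuals

def reconstruct(residuals):
    """Invert the predictor exactly -- the scheme is lossless."""
    samples = [residuals[0]]
    for r in residuals[1:]:
        samples.append(samples[-1] + r)
    return samples
```

Higher-order and multivariate predictors (as in the MVAR models above) follow the same pattern but predict each channel from several past samples of all channels.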
Context-dependent JPEG backward-compatible high-dynamic range image compression
Korshunov, Pavel; Ebrahimi, Touradj
2013-10-01
High-dynamic range (HDR) imaging is expected, together with ultrahigh-definition and high-frame-rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available on the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread adoption of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate that the perceptual quality of tone-mapped LDR images depends on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.
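To make the tone-mapping step concrete, here is a minimal global operator of the Reinhard family, which maps unbounded HDR luminance into [0, 1) for an LDR display. This is one illustrative example of the many operators such evaluations compare; the `exposure` knob is an assumption for demonstration, not a parameter from the paper.

```python
def tone_map_reinhard(luminances, exposure=0.5):
    """Global Reinhard-style operator: L_d = s*L / (1 + s*L).

    Compresses arbitrarily large HDR luminance values into [0, 1)
    while preserving their ordering, so the LDR image stays viewable
    on a legacy display.  'exposure' (s) is an illustrative scaling.
    """
    out = []
    for L in luminances:
        scaled = exposure * L
        out.append(scaled / (1.0 + scaled))
    return out
```

The context dependence demonstrated in the paper means the best choice of operator (and of knobs like the exposure above) varies with viewing environment, display, and content, which is precisely why a single fixed tone mapping embedded in a backward-compatible JPEG is not sufficient on its own.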
Directory of Open Access Journals (Sweden)
Jianping Hua
2004-01-01
This paper presents microarray BASICA: an integrated image processing tool for background adjustment, segmentation, image compression, and analysis of cDNA microarray images. BASICA uses a fast Mann-Whitney test-based algorithm to segment cDNA microarray images, and performs postprocessing to eliminate segmentation irregularities. The segmentation results, along with the foreground and background intensities obtained during background adjustment, are then used for independent compression of the foreground and background. We introduce a new distortion measure for cDNA microarray image compression and devise a coding scheme by modifying the embedded block coding with optimized truncation (EBCOT) algorithm (Taubman, 2000) to achieve optimal rate-distortion performance in lossy coding while still maintaining outstanding lossless compression performance. Experimental results show that the bit rate required to ensure sufficiently accurate gene expression measurement varies and depends on the quality of the cDNA microarray images. For homogeneously hybridized cDNA microarray images, BASICA is able to provide, at a bit rate as low as 5 bpp, gene expression data that are 99% in agreement with those of the original 32 bpp images.
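The Mann-Whitney test at the heart of the segmentation step can be sketched directly from its definition. This is the plain O(n*m) rank statistic, not BASICA's fast variant; the threshold in `is_spot` is an illustrative assumption.

```python
def mann_whitney_u(foreground, background):
    """Mann-Whitney U statistic by direct pair counting: the number of
    (foreground, background) pixel pairs in which the candidate spot
    pixel is brighter, with ties counting one half.  A U near its
    maximum len(f)*len(b) indicates the spot is genuinely brighter
    than the surrounding background."""
    u = 0.0
    for f in foreground:
        for b in background:
            if f > b:
                u += 1.0
            elif f == b:
                u += 0.5
    return u

def is_spot(foreground, background, threshold=0.95):
    """Classify a candidate region as a real spot when U exceeds an
    (illustrative) fraction of its maximum possible value."""
    max_u = len(foreground) * len(background)
    return mann_whitney_u(foreground, background) >= threshold * max_u
```

Because the test is rank-based, it is insensitive to the absolute intensity scale, which suits microarray images whose background level varies across the slide.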
Force balancing in mammographic compression
International Nuclear Information System (INIS)
Branderhorst, W.; Groot, J. E. de; Lier, M. G. J. T. B. van; Grimbergen, C. A.; Neeter, L. M. F. H.; Heeten, G. J. den; Neeleman, C.
2016-01-01
Purpose: In mammography, the height of the image receptor is adjusted to the patient before compressing the breast. An inadequate height setting can result in an imbalance between the forces applied by the image receptor and the paddle, causing the clamped breast to be pushed up or down relative to the body during compression. This leads to unnecessary stretching of the skin and other tissues around the breast, which can make the imaging procedure more painful for the patient. The goal of this study was to implement a method to measure and minimize the force imbalance, and to assess its feasibility as an objective and reproducible method of setting the image receptor height. Methods: A trial was conducted consisting of 13 craniocaudal mammographic compressions on a silicone breast phantom, each with the image receptor positioned at a different height. The image receptor height was varied over a range of 12 cm. In each compression, the force exerted by the compression paddle was increased up to 140 N in steps of 10 N. In addition to the paddle force, the authors measured the force exerted by the image receptor and the reaction force exerted on the patient body by the ground. The trial was repeated 8 times, with the phantom remounted at a slightly different orientation and position between the trials. Results: For a given paddle force, the obtained results showed that there is always exactly one image receptor height that leads to a balance of the forces on the breast. For the breast phantom, deviating from this specific height increased the force imbalance by 9.4 ± 1.9 N/cm (6.7%) for 140 N paddle force, and by 7.1 ± 1.6 N/cm (17.8%) for 40 N paddle force. The results also show that in situations where the force exerted by the image receptor is not measured, the craniocaudal force imbalance can still be determined by positioning the patient on a weighing scale and observing the changes in displayed weight during the procedure. Conclusions: In mammographic breast
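The two measurements discussed above can be sketched as follows. The sign convention and the weighing-scale relation are assumptions for illustration (the abstract only states that changes in displayed weight reveal the imbalance), not the authors' exact formulation.

```python
G = 9.81  # m/s^2, standard gravity

def force_imbalance(paddle_force_n, receptor_force_n):
    """Net vertical force on the clamped breast, in newtons.

    Zero means the paddle and image receptor forces balance, which the
    study found occurs at exactly one receptor height for a given
    paddle force.  Sign convention (positive = pushed toward the
    receptor side) is an illustrative assumption."""
    return receptor_force_n - paddle_force_n

def imbalance_from_scale(weight_before_kg, weight_during_kg):
    """Estimate the imbalance from a weighing scale under the patient:
    an unbalanced force on the breast changes the ground reaction
    force, so the change in displayed weight times g tracks the
    craniocaudal imbalance (illustrative model of the paper's idea)."""
    return (weight_during_kg - weight_before_kg) * G
```

With the reported sensitivity of roughly 9.4 N/cm at 140 N paddle force, even a 1 cm height error would register as about a 1 kg change on such a scale.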
Embedment of Chlorpheniramine Maleate in Directly Compressed ...
African Journals Online (AJOL)
chlorpheniramine maleate (CPM) from its matrix tablets prepared by direct compression. Methods: Different ratios of compritol and kollidon SR (containing 50 % matrix component) in 1:1, 1:2, ... Magnesium stearate and hydrochloric acid were.
Optimisation algorithms for ECG data compression.
Haugland, D; Heber, J G; Husøy, J H
1997-07-01
The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
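The kind of dynamic program described above can be sketched in a simplified form. This sketch assumes squared-error distortion, linear interpolation between kept samples, and mandatory endpoints; it mirrors the network-model formulation in spirit, and its O(n^2 * k) triple loop matches the "cubic" complexity mentioned, but it is not the authors' exact algorithm.

```python
def seg_cost(x, i, j):
    """Squared error of reconstructing x[i..j] by a straight line
    between the kept samples x[i] and x[j]."""
    err = 0.0
    for t in range(i + 1, j):
        interp = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
        err += (x[t] - interp) ** 2
    return err

def best_subset(x, k):
    """Choose exactly k samples (including both endpoints) minimising
    the total interpolation error.  Unlike heuristic time-domain
    methods, this guarantees the optimum.  Returns (error, indices)."""
    n = len(x)
    INF = float("inf")
    # dp[m][j]: best error for x[0..j] using m kept samples, last one at j
    dp = [[INF] * n for _ in range(k + 1)]
    back = [[-1] * n for _ in range(k + 1)]
    dp[1][0] = 0.0
    for m in range(2, k + 1):
        for j in range(1, n):
            for i in range(j):
                if dp[m - 1][i] < INF:
                    c = dp[m - 1][i] + seg_cost(x, i, j)
                    if c < dp[m][j]:
                        dp[m][j] = c
                        back[m][j] = i
    # trace the optimal kept indices back from the final sample
    kept, m, j = [n - 1], k, n - 1
    while back[m][j] != -1:
        j = back[m][j]
        m -= 1
        kept.append(j)
    kept.reverse()
    return dp[k][n - 1], kept
```

On a piecewise-linear signal the optimum keeps exactly the breakpoints, recovering the signal with zero error, which is the guarantee heuristic sample-selection methods lack.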
Exploring compression techniques for ROOT IO
Zhang, Z.; Bockelman, B.
2017-10-01
ROOT provides a flexible format used throughout the HEP community. The range of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high "compression level" in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read). At the scale of the LHC experiments, poor design choices can result in terabytes of wasted space or wasted CPU time. We explore and attempt to quantify some of these tradeoffs. Specifically, we explore: the use of alternative compression algorithms to optimize for read performance; an alternative method of compressing individual events to allow efficient random access; and a new approach to whole-file compression. Quantitative results are given, as well as guidance on how to make compression decisions for different use cases.
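The compression-level tradeoff described above can be demonstrated with any DEFLATE implementation. The sketch below uses Python's `zlib` as a stand-in for ROOT's internal DEFLATE (it is not ROOT code); the per-event random-access idea the paper explores would compress each event separately rather than the whole stream at once.

```python
import zlib

def compare_levels(payload, levels=(1, 6, 9)):
    """Compress the same payload at several DEFLATE levels and return
    the resulting sizes in bytes.  Higher levels generally produce
    smaller output at the cost of more CPU time -- the tradeoff ROOT
    exposes to the user as its 'compression level' setting."""
    sizes = {}
    for lvl in levels:
        blob = zlib.compress(payload, lvl)
        assert zlib.decompress(blob) == payload  # lossless at every level
        sizes[lvl] = len(blob)
    return sizes
```

Running this on representative event data is a quick way to decide, for a given use case, whether the extra CPU time of a high level buys enough disk space to be worth it.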
Anisotropic Concrete Compressive Strength
DEFF Research Database (Denmark)
Gustenhoff Hansen, Søren; Jørgensen, Henrik Brøner; Hoang, Linh Cao
2017-01-01
When the load carrying capacity of existing concrete structures is (re-)assessed, it is often based on the compressive strength of cores drilled out from the structure. Existing studies show that the core compressive strength is anisotropic; i.e. it depends on whether the cores are drilled parallel...