WorldWideScience

Sample records for compressive sampling based

  1. Statistical conditional sampling for variable-resolution video compression.

    Directory of Open Access Journals (Sweden)

    Alexander Wong

    Full Text Available In this study, we investigate a variable-resolution approach to video compression based on Conditional Random Field (CRF) modeling and statistical conditional sampling, in order to further improve the compression rate while maintaining high-quality video. In the proposed approach, representative key-frames within a video shot are identified and stored at full resolution. The remaining frames within the video shot are stored and compressed at a reduced resolution. At the decompression stage, a region-based dictionary is constructed from the key-frames and used to restore the reduced-resolution frames to the original resolution via statistical conditional sampling. The sampling approach is based on the conditional probability of the CRF model, using the constructed dictionary. Experimental results show that the proposed variable-resolution approach via statistical conditional sampling has potential for improving compression rates when compared to compressing the video at full resolution, while achieving higher video quality when compared to compressing the video at reduced resolution.

  2. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; however, the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, so the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
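
    A minimal sketch of the encoder idea described above, not the authors' code: the low-pass pre-filter is swapped for a random binary convolution kernel before polyphase down-sampling. Kernel size, down-sampling factor, and normalization are illustrative assumptions.

    ```python
    # Pre-filter with a local random binary kernel, then polyphase
    # down-sample: each kept pixel is a local random measurement that
    # still lives in a conventional image raster (codec-friendly).
    import numpy as np
    from scipy.signal import convolve2d

    rng = np.random.default_rng(0)

    def cs_downsample(image, factor=2, ksize=3):
        kernel = rng.integers(0, 2, (ksize, ksize)).astype(float)
        kernel /= max(kernel.sum(), 1.0)   # keep measurements in pixel range
        filtered = convolve2d(image, kernel, mode="same", boundary="symm")
        return filtered[::factor, ::factor]  # polyphase down-sampling

    image = rng.random((64, 64))
    measurements = cs_downsample(image)
    print(measurements.shape)  # (32, 32): an "image" of local measurements
    ```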

  3. New Approach Based on Compressive Sampling for Sample Rate Enhancement in DASs for Low-Cost Sensing Nodes

    Directory of Open Access Journals (Sweden)

    Francesco Bonavolontà

    2014-10-01

    Full Text Available The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low-cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on a compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with the ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate.
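
    A toy sketch of the strategy under assumptions of my own (a frequency-sparse test tone, a DFT dictionary, and two iterations of orthogonal matching pursuit): random sampling instants are drawn on a fine time grid, and the signal is reconstructed from far fewer samples than the grid length.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 512                 # fine grid set by the high-resolution time-basis
    M = 64                  # samples the slow ADC can actually take
    t = np.sort(rng.choice(N, M, replace=False))  # random sampling instants

    f0 = 37                 # assumed tone index, for illustration only
    x = np.sin(2 * np.pi * f0 * np.arange(N) / N)
    y = x[t].astype(complex)                      # what the ADC records

    # Dictionary: DFT atoms evaluated at the sampled instants
    F = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
    A = F[t, :]

    # Two OMP iterations suffice for one real tone (atoms f0 and N-f0)
    residual, support = y.copy(), []
    for _ in range(2):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef

    x_hat = np.real(F[:, support] @ coef)         # signal on the fine grid
    print(np.allclose(x_hat, x, atol=1e-8))       # True: 8x effective rate gain
    ```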

  4. Compressive Sampling of EEG Signals with Finite Rate of Innovation

    Directory of Open Access Journals (Sweden)

    Poh Kok-Kiong

    2010-01-01

    Full Text Available Analyses of electroencephalographic signals and subsequent diagnoses can only be done effectively on long-term recordings that preserve the signals' morphologies. Currently, electroencephalographic signals are obtained at the Nyquist rate or higher, thus introducing redundancies. Existing compression methods remove these redundancies, thereby achieving compression. We propose an alternative compression scheme based on a sampling theory developed for signals with a finite rate of innovation (FRI), which compresses electroencephalographic signals during acquisition. We model the signals as FRI signals and then sample them at their rate of innovation. The signals are thus effectively represented by a small set of Fourier coefficients corresponding to the signals' rate of innovation. Using the FRI theory, the original signals can be reconstructed from this set of coefficients. Seventy-two hours of electroencephalographic recording are tested, and results based on metrics used in the compression literature and on morphological similarities of electroencephalographic signals are presented. The proposed method achieves results comparable to those of wavelet compression methods, achieving low reconstruction errors while preserving the morphologies of the signals. More importantly, it introduces a new framework to acquire electroencephalographic signals at their rate of innovation, thus entailing a less costly low-rate sampling device that does not waste precious computational resources.
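
    The core FRI machinery can be shown on a toy stream of Diracs rather than EEG; the annihilating-filter route is standard FRI theory, while the specific locations and amplitudes below are illustrative assumptions.

    ```python
    import numpy as np

    K = 2                              # rate of innovation: K Diracs per period
    tk = np.array([0.20, 0.55])        # true locations in [0, 1)
    ak = np.array([1.0, 0.7])          # true amplitudes

    # 2K+1 Fourier coefficients fully determine a stream of K Diracs.
    m = np.arange(-K, K + 1)
    X = (ak * np.exp(-2j * np.pi * np.outer(m, tk))).sum(axis=1)

    # Annihilating filter: the null vector of a small Toeplitz system built
    # from the coefficients; its polynomial roots encode the Dirac locations.
    T = np.array([[X[i - j + K] for j in range(K + 1)] for i in range(K + 1)])
    h = np.linalg.svd(T)[2][-1].conj()
    roots = np.roots(h)
    print(np.sort(np.mod(-np.angle(roots) / (2 * np.pi), 1.0)))  # ~[0.20, 0.55]
    ```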

  5. Informational analysis for compressive sampling in radar imaging.

    Science.gov (United States)

    Zhang, Jingxiong; Yang, Ke

    2015-03-24

    Compressive sampling, or compressed sensing (CS), works on the assumption of the sparsity or compressibility of the underlying signal, relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, and operates with optimization-based algorithms for signal reconstruction. It is thus able to complete data compression while acquiring data, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements, and by determining the sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretic oriented CS-radar system analysis and performance evaluation.

  6. A 172 μW Compressively Sampled Photoplethysmographic (PPG) Readout ASIC With Heart Rate Estimation Directly From Compressively Sampled Data.

    Science.gov (United States)

    Pamula, Venkata Rajesh; Valero-Sarmiento, Jose Manuel; Yan, Long; Bozkurt, Alper; Hoof, Chris Van; Helleputte, Nick Van; Yazicioglu, Refet Firat; Verhelst, Marian

    2017-06-01

    A compressive sampling (CS) photoplethysmographic (PPG) readout with embedded feature extraction to estimate heart rate (HR) directly from compressively sampled data is presented. It integrates a low-power analog front end together with a digital back end that performs feature extraction to estimate the average HR over a 4 s interval directly from compressively sampled PPG data. The application-specific integrated circuit (ASIC) supports a uniform sampling mode (1x compression) as well as CS modes with compression ratios of 8x, 10x, and 30x. CS is performed by nonuniformly subsampling the PPG signal, while feature extraction is performed using least-squares spectral fitting via the Lomb-Scargle periodogram. The ASIC consumes 172 μW of power from a 1.2 V supply while reducing the relative LED driver power consumption by up to 30 times without significant loss of the information relevant for accurate HR estimation.
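
    The estimation step can be imitated offline on a toy PPG tone; the sampling rate, compression ratio, and heart rate below are illustrative assumptions, not the ASIC's settings.

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(3)
    fs, dur, hr_hz = 125.0, 4.0, 1.2          # 1.2 Hz = 72 bpm; values assumed
    t_full = np.arange(0.0, dur, 1.0 / fs)
    ppg = np.sin(2 * np.pi * hr_hz * t_full) + 0.1 * rng.standard_normal(t_full.size)

    # CS by non-uniform subsampling: keep ~1/10 of the samples (10x compression).
    keep = np.sort(rng.choice(t_full.size, t_full.size // 10, replace=False))
    t_cs, y_cs = t_full[keep], ppg[keep]

    # Least-squares spectral fit (Lomb-Scargle) directly on the compressed data.
    freqs_hz = np.linspace(0.5, 3.5, 600)             # 30-210 bpm search band
    pgram = lombscargle(t_cs, y_cs - y_cs.mean(), 2 * np.pi * freqs_hz)
    print(60.0 * freqs_hz[np.argmax(pgram)], "bpm")   # ~72 bpm
    ```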

  7. Sampling theory, a renaissance compressive sensing and other developments

    CERN Document Server

    2015-01-01

    Reconstructing or approximating objects from seemingly incomplete information is a frequent challenge in mathematics, science, and engineering. A multitude of tools designed to recover hidden information are based on Shannon’s classical sampling theorem, a central pillar of Sampling Theory. The growing need to efficiently obtain precise and tailored digital representations of complex objects and phenomena requires the maturation of available tools in Sampling Theory as well as the development of complementary, novel mathematical theories. Today, research themes such as Compressed Sensing and Frame Theory re-energize the broad area of Sampling Theory. This volume illustrates the renaissance that the area of Sampling Theory is currently experiencing. It touches upon trendsetting areas such as Compressed Sensing, Finite Frames, Parametric Partial Differential Equations, Quantization, Finite Rate of Innovation, System Theory, as well as sampling in Geometry and Algebraic Topology.

  8. Architectural Design Space Exploration of an FPGA-based Compressed Sampling Engine

    DEFF Research Database (Denmark)

    El-Sayed, Mohammad; Koch, Peter; Le Moullec, Yannick

    2015-01-01

    We present the architectural design space exploration of a compressed sampling engine for use in a wireless heart-rate monitoring system. We show how parallelism affects execution time at the register transfer level. Furthermore, two example solutions (modified semi-parallel and full...

  9. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of the quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
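
    The block-level optimum has a closed form: with PSNR(b) ≈ a*b^2 + c*b + d and a < 0, setting d(PSNR)/db = 0 gives b* = -c/(2a). A hedged numeric sketch, with model coefficients invented for illustration:

    ```python
    # PSNR(b) ~= a*b**2 + c*b + d with a < 0; d(PSNR)/db = 0 => b* = -c/(2a).
    a, c, d = -0.35, 5.6, 8.0      # invented model coefficients
    b_star = -c / (2 * a)          # continuous optimum of the quadratic model
    b_opt = max(1, min(12, round(b_star)))       # clip to practical bit-depths
    print(b_opt, a * b_opt**2 + c * b_opt + d)   # chosen bit-depth, modeled PSNR
    ```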

  10. Study on the effects of sample selection on spectral reflectance reconstruction based on the algorithm of compressive sensing

    International Nuclear Information System (INIS)

    Zhang, Leihong; Liang, Dong

    2016-01-01

    In order to address the low efficiency and precision of spectral reflectance reconstruction, different samples are selected in this paper to reconstruct the spectral reflectance, and a new spectral reflectance reconstruction method based on the compressive sensing algorithm is provided. Four matte color cards with different numbers of colors, namely the ColorChecker Color Rendition Chart, the ColorChecker SG, the Pantone copperplate-paper spot color card, and the Munsell colors card, are chosen as training samples; the spectral image is reconstructed by the compressive sensing algorithm and by the pseudo-inverse and Wiener methods, and the results are compared. These spectral reconstruction methods are evaluated by root mean square error and color difference accuracy. The experiments show that the cumulative contribution rate and color difference of the Munsell colors card are better than those of the other three color cards under the same reconstruction conditions, and that the accuracy of the spectral reconstruction is affected by training samples with different numbers of colors. A key point is that the uniformity and representativeness of the training sample selection have an important influence on the reconstruction. In this paper, the influence of sample selection on spectral image reconstruction is studied. The precision of spectral reconstruction based on the compressive sensing algorithm is higher than that of the traditional spectral reconstruction algorithms. The MATLAB simulation results show that the spectral reconstruction precision and efficiency are affected by the different numbers of colors in the training sample. (paper)

  11. Compressive sensing based ptychography image encryption

    Science.gov (United States)

    Rawat, Nitin

    2015-09-01

    A compressive sensing (CS) based ptychography scheme combined with optical image encryption is proposed. The diffraction pattern is recorded through the ptychography technique and further compressed by non-uniform sampling via the CS framework. The system requires much less encrypted data and provides high security. The diffraction pattern as well as the small number of measurements of the encrypted samples serve as a secret key, which makes intruder attacks more difficult. Furthermore, CS shows that a few linearly projected random samples carry adequate information for decryption with a dramatic volume reduction. Experimental results validate the feasibility and effectiveness of the proposed technique compared with existing techniques. The retrieved images do not reveal any information about the original images. In addition, the proposed system remains robust even with partial encryption and under brute-force attacks.

  12. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    Science.gov (United States)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-02-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failure. Thus, to ensure normal production and the safety of machine operators, it is essential to detect failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which can ease the pressure generated by large-scale data. The big data of a faulty roller bearing's vibration signals is first reduced by a down-sampling strategy that preserves the fault features by selecting peaks to represent the data segments in the time domain. A problem arises, however, in that the fault features may become weaker than before, since noise may be mistaken for the peaks when the noise is stronger than the vibration signals, leaving the fault features unable to be extracted by commonly used envelope analysis. Here we employ compressive sensing theory to overcome this problem; it can enhance the signal and further reduce the sample size. Moreover, it is capable of detecting fault features from a small number of samples based on an orthogonal matching pursuit approach, which overcomes the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults.

  13. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    International Nuclear Information System (INIS)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-01-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failure. Thus, to ensure normal production and the safety of machine operators, it is essential to detect failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which can ease the pressure generated by large-scale data. The big data of a faulty roller bearing's vibration signals is first reduced by a down-sampling strategy that preserves the fault features by selecting peaks to represent the data segments in the time domain. A problem arises, however, in that the fault features may become weaker than before, since noise may be mistaken for the peaks when the noise is stronger than the vibration signals, leaving the fault features unable to be extracted by commonly used envelope analysis. Here we employ compressive sensing theory to overcome this problem; it can enhance the signal and further reduce the sample size. Moreover, it is capable of detecting fault features from a small number of samples based on an orthogonal matching pursuit approach, which overcomes the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults. (paper)
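
    A rough sketch of the two-stage idea on synthetic data; the segment length, measurement count, sparsity level, and impulse-train fault model are all assumptions of mine, not the paper's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def peak_downsample(signal, seg_len=32):
        # Stage 1 (multiple down-sampling): keep one peak per time segment.
        n_seg = signal.size // seg_len
        segs = signal[: n_seg * seg_len].reshape(n_seg, seg_len)
        return segs[np.arange(n_seg), np.abs(segs).argmax(axis=1)]

    def omp(A, y, k):
        # Plain orthogonal matching pursuit: greedily pick k atoms of A.
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    # Toy record: fault impulses every 512 samples, buried in noise.
    x = np.zeros(4096)
    x[::512] = 1.0
    x += 0.2 * rng.standard_normal(x.size)
    reduced = peak_downsample(x)                 # 4096 -> 128 peak values
    Phi = rng.standard_normal((48, reduced.size)) / np.sqrt(48.0)
    y = Phi @ reduced                            # stage 2: compressed measurements
    x_hat = omp(Phi, y, k=8)                     # seek the 8 dominant fault peaks
    ```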

  14. Signal Recovery in Compressive Sensing via Multiple Sparsifying Bases

    DEFF Research Database (Denmark)

    Wijewardhana, U. L.; Belyaev, Evgeny; Codreanu, M.

    2017-01-01

    …is sparse is the key assumption utilized by such algorithms. However, the basis in which the signal is the sparsest is unknown for many natural signals of interest. Instead there may exist multiple bases which lead to a compressible representation of the signal: e.g., an image is compressible in different… wavelet transforms. We show that a significant performance improvement can be achieved by utilizing multiple estimates of the signal using sparsifying bases in the context of signal reconstruction from compressive samples. Further, we derive a customized interior-point method to jointly obtain multiple… estimates of a 2-D signal (image) from compressive measurements utilizing multiple sparsifying bases as well as the fact that the images usually have a sparse gradient…

  15. Permeability and compression characteristics of municipal solid waste samples

    Science.gov (United States)

    Durmusoglu, Ertan; Sanchez, Itza M.; Corapcioglu, M. Yavuz

    2006-08-01

    Four series of laboratory tests were conducted to evaluate the permeability and compression characteristics of municipal solid waste (MSW) samples. Two series of tests were conducted using a conventional small-scale consolidometer, while the other two were conducted in a large-scale consolidometer specially constructed for this study. In each consolidometer, the MSW samples were tested at two different moisture contents, i.e., the original moisture content and field capacity. A scale effect between the two consolidometers of different sizes was investigated. The tests were carried out on samples reconsolidated to pressures of 123, 246, and 369 kPa. Time-settlement data gathered from each load increment were employed to plot strain versus log-time graphs. The data acquired from the compression tests were used to back-calculate primary and secondary compression indices. The consolidometers were later adapted for permeability experiments. The values of the indices and the coefficient of compressibility for the MSW samples tested were within a relatively narrow range, regardless of the size of the consolidometer and the different moisture contents of the specimens tested. The values of the coefficient of permeability were within a band of two orders of magnitude (10⁻⁶-10⁻⁴ m/s). The data presented in this paper agree very well with the data reported by previous researchers. It was concluded that the scale effect in the compression behavior was significant. However, there was usually no linear relationship between the results obtained in the tests.

  16. Analysis on soil compressibility changes of samples stabilized with lime

    Directory of Open Access Journals (Sweden)

    Elena-Andreea CALARASU

    2016-12-01

    Full Text Available In order to manage and control the stability of buildings located on difficult foundation soils, several techniques of soil stabilization have been developed and applied worldwide. Taking into account the major significance of soil compressibility for construction durability and safety, soil stabilization with a binder such as lime is considered one of the most widely used and traditional methods. The present paper aims to assess the effect of lime content on soil geotechnical parameters, especially compressibility, based on laboratory experimental tests, for several soil categories mixed with different lime dosages. The results of this study indicate a significant improvement of the stabilized soil parameters, such as compressibility and plasticity, in comparison with natural samples. The effect of lime stabilization is an increase in soil structure stability and bearing capacity.

  17. Energy Preserved Sampling for Compressed Sensing MRI

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2014-01-01

    Full Text Available The sampling patterns, cost functions, and reconstruction algorithms play important roles in optimizing compressed sensing magnetic resonance imaging (CS-MRI). Simple random sampling patterns do not take into account the energy distribution in k-space and result in suboptimal reconstruction of MR images. Therefore, a variety of variable density (VD) based sampling patterns have been developed. To further improve on these, we propose a novel energy preserving sampling (ePRESS) method. In addition, we improve the cost function by introducing phase correction and a region-of-support matrix, and we propose an iterative thresholding algorithm (ITA) to solve the improved cost function. We evaluate the proposed ePRESS sampling method, improved cost function, and ITA reconstruction algorithm on a 2D digital phantom and 2D in vivo MR brain images of healthy volunteers. These assessments demonstrate that the proposed ePRESS method performs better than VD, POWER, and BKO; the improved cost function achieves better reconstruction quality than the conventional cost function; and the ITA is faster than SISTA and is competitive with FISTA in terms of computation time.
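
    A generic variable-density Cartesian mask illustrates the idea of weighting low k-space more heavily; this is not the authors' ePRESS pattern, and the density law and line budget are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def vd_mask(n=256, frac=0.3, decay=2.0):
        # Variable-density mask: sample low-frequency k-space lines densely.
        k = np.abs(np.arange(n) - n / 2)
        pdf = (1 - k / (n / 2 + 1)) ** decay        # more weight near centre
        pdf *= frac * n / pdf.sum()                 # normalise to line budget
        return rng.random(n) < np.minimum(pdf, 1.0) # Bernoulli draw per line

    mask = vd_mask()
    print(mask.mean())   # ~0.3 of the phase-encode lines retained
    ```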

  18. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    Science.gov (United States)

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
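
    The "lossy plus residual" guarantee is easy to demonstrate: quantizing the residual with step 2e bounds the element-wise error by e. A minimal matrix-decomposition sketch follows; the rank, error bound, and toy data are assumptions, and the paper's coder additionally entropy-codes the quantized residual.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def near_lossless(X, rank=4, max_err=0.01):
        # Lossy layer: truncated SVD of the channel-by-time matrix.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        lossy = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Residual layer: uniform quantisation with step 2*max_err, so the
        # element-wise error of the decoded signal never exceeds max_err.
        q = np.round((X - lossy) / (2 * max_err)).astype(int)
        return lossy + q * (2 * max_err), q  # q would go to an arithmetic coder

    X = rng.standard_normal((16, 1024))            # toy 16-channel EEG block
    X_hat, codes = near_lossless(X)
    print(bool(np.abs(X - X_hat).max() <= 0.01))   # True: guaranteed error bound
    ```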

  19. Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators

    International Nuclear Information System (INIS)

    Flammia, Steven T; Gross, David; Liu, Yi-Kai; Eisert, Jens

    2012-01-01

    Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. Firstly, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e. the sample complexity of tomography decreases with the rank. Secondly, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. In this paper, we present a new theoretical analysis of compressed tomography, based on the restricted isometry property for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators—the matrix Dantzig selector and the matrix Lasso—with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and describe a method for compressed quantum process tomography that works for processes with small Kraus rank and requires only Pauli eigenstate preparations.

  20. Predicting the compressibility behaviour of tire shred samples for landfill applications.

    Science.gov (United States)

    Warith, M A; Rao, Sudhakar M

    2006-01-01

    Tire shreds have been used as an alternative to crushed stones (gravel) as drainage media in landfill leachate collection systems. The highly compressible nature of tire shreds (25-47% axial strain under vertical stress applications of 20-700 kPa) may reduce the thickness of the tire shred drainage layer to less than 300 mm (the minimum design requirement) during the life of the municipal solid waste landfill. Hence there exists a need to predict the axial strains of tire shred samples in response to vertical stress applications so that the initial thickness of the tire shred drainage layer can be corrected for compression. The present study performs one-dimensional compressibility tests on four tire shred samples and compares the results with stress/strain curves from other studies. The stress/strain curves are developed into charts for choosing the correct initial thickness of tire shred layers that maintains the minimum thickness of 300 mm throughout the life of the landfill. The charts are developed for a range of vertical stresses based on the design height of the municipal waste cell and the bulk unit weight of the municipal waste. Experimental results also showed that, despite experiencing large axial strains, the average permeability of the tire shred samples consistently remained two to three orders of magnitude higher than the design performance criterion of 0.01 cm/s for landfill drainage layers. Laboratory experiments, however, are needed to verify whether long-term chemical and bio-chemical reactions between landfill leachate and the tire shred layer will deteriorate their mechanical functions (hydraulic conductivity, compressibility, strength) beyond permissible limits for geotechnical applications.
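
    The correction itself is one line of arithmetic: if the charts predict an axial strain eps at the design stress, an initial thickness H0 = H_min / (1 - eps) compresses down to exactly H_min. A hedged example with an invented strain value:

    ```python
    # If the chart gives axial strain eps at the design stress, the built
    # thickness that compresses to exactly H_min is H0 = H_min / (1 - eps).
    eps = 0.35          # illustrative strain from a chart; not the paper's data
    H_min = 300.0       # mm, minimum design thickness of the drainage layer
    H0 = H_min / (1 - eps)
    print(round(H0), "mm")   # ~462 mm as-built tire shred layer thickness
    ```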

  1. Online sparse representation for remote sensing compressed-sensed video sampling

    Science.gov (United States)

    Wang, Jie; Liu, Kun; Li, Sheng-liang; Zhang, Li

    2014-11-01

    Most recently, the emerging Compressed Sensing (CS) theory has brought a major breakthrough for data acquisition and recovery. It asserts that a signal which is highly compressible in a known basis can be reconstructed with high probability from samples taken at a frequency well below the Nyquist sampling frequency. When applying CS to Remote Sensing (RS) video imaging, compressed image data can be acquired directly and efficiently by randomly projecting the original data to obtain linear and non-adaptive measurements. In this paper, with the help of a distributed video coding scheme, which is a low-complexity technique for resource-limited sensors, the frames of an RS video sequence are divided into Key frames (K frames) and Non-Key frames (CS frames). In other words, the input video sequence consists of many groups of pictures (GOPs), and each GOP consists of one K frame followed by several CS frames. Both are measured block by block, but at different sampling rates. In this way, the major encoding computation burden is shifted to the decoder. At the decoder, the Side Information (SI) is generated for the CS frames using the traditional Motion-Compensated Interpolation (MCI) technique applied to the reconstructed key frames. The over-complete dictionary is trained by dictionary learning methods based on the SI. These learning methods include ICA-like, PCA, K-SVD, MOD, etc. Using these dictionaries, the CS frames can be reconstructed according to the sparse-land model. In the numerical experiments, the reconstruction performance of the ICA algorithm, evaluated by Peak Signal-to-Noise Ratio (PSNR), is compared with that of other online sparse representation algorithms. The simulation results show its advantages in reconstruction time and robustness of reconstruction performance when the ICA algorithm is applied to remote sensing video reconstruction.

  2. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of how to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on the existing blind compressed sensing theory, the optimal solution is found by an alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing theory has a unique solution and can recover the original image signal from a complex environment with stronger self-adaptability. The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high quality image signals under under-sampling conditions.

  3. The possibilities of compressed sensing based migration

    KAUST Repository

    Aldawood, Ali

    2013-09-22

    Linearized waveform inversion, or least-squares migration, helps reduce migration artifacts caused by limited acquisition aperture, coarse sampling of sources and receivers, and low subsurface illumination. However, least-squares migration based on L2-norm minimization of the misfit function tends to produce a smeared (smoothed) depiction of the true subsurface reflectivity. Assuming that the subsurface reflectivity distribution is a sparse signal, we use a compressed-sensing (Basis Pursuit) algorithm to retrieve this sparse distribution from a small number of linear measurements. We applied the compressed-sensing algorithm to image a synthetic fault model using dense and sparse acquisition geometries. Tests on synthetic data demonstrate the ability of compressed sensing to produce highly resolved migrated images. We also studied the robustness of the Basis Pursuit algorithm in the presence of Gaussian random noise.

  4. The possibilities of compressed sensing based migration

    KAUST Repository

    Aldawood, Ali; Hoteit, Ibrahim; Alkhalifah, Tariq Ali

    2013-01-01

    Linearized waveform inversion, or least-squares migration, helps reduce migration artifacts caused by limited acquisition aperture, coarse sampling of sources and receivers, and low subsurface illumination. However, least-squares migration based on L2-norm minimization of the misfit function tends to produce a smeared (smoothed) depiction of the true subsurface reflectivity. Assuming that the subsurface reflectivity distribution is a sparse signal, we use a compressed-sensing (Basis Pursuit) algorithm to retrieve this sparse distribution from a small number of linear measurements. We applied the compressed-sensing algorithm to image a synthetic fault model using dense and sparse acquisition geometries. Tests on synthetic data demonstrate the ability of compressed sensing to produce highly resolved migrated images. We also studied the robustness of the Basis Pursuit algorithm in the presence of Gaussian random noise.
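
    A toy stand-in for the sparse-recovery step: the paper uses Basis Pursuit, while the sketch below solves the closely related Lagrangian form with plain iterative soft-thresholding, and the random matrix is merely a stand-in for the linearized (Born) modeling operator.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def ista(A, y, lam=0.05, n_iter=500):
        # Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1.
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = x + (A.T @ (y - A @ x)) / L    # gradient step on the misfit
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
        return x

    m = np.zeros(200)
    m[[40, 90, 150]] = [1.0, -0.6, 0.8]            # sparse toy reflectivity
    A = rng.standard_normal((80, 200)) / np.sqrt(80.0)  # stand-in operator
    y = A @ m + 0.01 * rng.standard_normal(80)
    m_hat = ista(A, y)
    print(np.sort(np.argsort(-np.abs(m_hat))[:3]))  # ~[40, 90, 150]
    ```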

  5. Development of a compressive sampling hyperspectral imager prototype

    Science.gov (United States)

    Barducci, Alessandro; Guzzi, Donatella; Lastri, Cinzia; Nardino, Vanni; Marcoionni, Paolo; Pippi, Ivan

    2013-10-01

    Compressive sensing (CS) is a new technology that investigates the possibility of sampling signals at a lower rate than traditional sampling theory requires. The main advantage of CS is that compression takes place during the sampling phase, making possible significant savings in terms of the ADC, data storage memory, down-link bandwidth, and electrical power absorption. The CS technology could be of primary importance for spaceborne missions and technology, paving the way to noteworthy reductions of payload mass, volume, and cost. On the other hand, the main disadvantage of CS is the intensive off-line data processing necessary to obtain the desired source estimation. In this paper we summarize the CS architecture and its possible implementations for Earth observation, giving evidence of possible bottlenecks hindering this technology. CS necessarily employs a multiplexing scheme, which should produce some SNR disadvantage. Moreover, this approach necessitates optical light modulators and 2D detector arrays with high frame rates. This paper describes the development of a sensor prototype at laboratory level that will be utilized for the experimental assessment of CS performance and the related reconstruction errors. The experimental test-bed adopts a push-broom imaging spectrometer, a liquid crystal plate, a standard CCD camera and a Silicon PhotoMultiplier (SiPM) matrix. The prototype is being developed within the framework of the ESA ITI-B Project titled "Hyperspectral Passive Satellite Imaging via Compressive Sensing".

  6. Methods for Sampling and Measurement of Compressed Air Contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Stroem, L

    1976-10-15

    In order to improve the technique for measuring oil and water entrained in a compressed air stream, a laboratory study was made of some methods for sampling and measurement. For this purpose, water or oil as artificial contaminants were injected in thin streams into a test loop carrying dry compressed air. Sampling was performed in a vertical run, downstream of the injection point. Wall-attached liquid, coarse droplet flow, and fine droplet flow were sampled separately. The results were compared with two-phase flow theory and direct observation of the liquid behaviour. In a study of sample transport through narrow tubes, it was observed that, below a certain liquid loading, the sample did not move, the liquid remaining stationary on the tubing wall. The basic analysis of the collected samples was made by gravimetric methods. Adsorption tubes were used with success to measure water vapour. A humidity meter with a sensor of the aluminium oxide type was found to be unreliable. Oil could be measured selectively by a flame ionization detector, the sample being pretreated in an evaporation-condensation unit.

  7. Methods for Sampling and Measurement of Compressed Air Contaminants

    International Nuclear Information System (INIS)

    Stroem, L.

    1976-10-01

    In order to improve the technique for measuring oil and water entrained in a compressed air stream, a laboratory study was made of some methods for sampling and measurement. For this purpose, water or oil as artificial contaminants were injected in thin streams into a test loop carrying dry compressed air. Sampling was performed in a vertical run, downstream of the injection point. Wall-attached liquid, coarse droplet flow, and fine droplet flow were sampled separately. The results were compared with two-phase flow theory and direct observation of the liquid behaviour. In a study of sample transport through narrow tubes, it was observed that, below a certain liquid loading, the sample did not move, the liquid remaining stationary on the tubing wall. The basic analysis of the collected samples was made by gravimetric methods. Adsorption tubes were used with success to measure water vapour. A humidity meter with a sensor of the aluminium oxide type was found to be unreliable. Oil could be measured selectively by a flame ionization detector, the sample being pretreated in an evaporation-condensation unit.

  8. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical basis that allows compressive sampling of image signals. In imaging procedures that use compressed sensing theory, not only the storage space but also the demand on detector resolution can be greatly reduced. By using the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging is realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and recovers edge information well. To verify the performance of the algorithm, the reconstruction results of the TV-based algorithm under different coding modes are simulated and analyzed to verify its stability, and typical reconstruction algorithms are compared and analyzed under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has great advantages and can quickly and accurately recover the target image even at low measurement rates.
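
    A bare-bones stand-in for the TV-regularized recovery: the paper solves an augmented-Lagrangian form by the alternating direction method, whereas the sketch below runs plain gradient descent on a smoothed TV objective, with every parameter an assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def tv_cs_recover(Phi, y, shape, lam=0.1, eps=0.05, n_iter=600):
        # Gradient descent on 0.5*||Phi x - y||^2 + lam * smoothed-TV(x).
        x = np.zeros(shape)
        step = 1.0 / (np.linalg.norm(Phi, 2) ** 2 + 8 * lam / eps)
        for _ in range(n_iter):
            dx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences
            dy = np.diff(x, axis=0, append=x[-1:, :])
            mag = np.sqrt(dx ** 2 + dy ** 2 + eps ** 2)
            div = (np.diff(dx / mag, axis=1, prepend=0.0)
                   + np.diff(dy / mag, axis=0, prepend=0.0))
            grad = (Phi.T @ (Phi @ x.ravel() - y)).reshape(shape) - lam * div
            x -= step * grad
        return x

    img = np.zeros((32, 32))
    img[8:24, 8:24] = 1.0                                # piecewise-constant target
    Phi = rng.standard_normal((400, 1024)) / np.sqrt(400.0)  # ~40% measurements
    y = Phi @ img.ravel()
    rec = tv_cs_recover(Phi, y, img.shape)
    ```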

  9. Atomic effect algebras with compression bases

    International Nuclear Information System (INIS)

    Caragheorgheopol, Dan; Tkadlec, Josef

    2011-01-01

    Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.

  10. Compressive and Flexural Tests on Adobe Samples Reinforced with Wire Mesh

    Science.gov (United States)

    Jokhio, G. A.; Al-Tawil, Y. M. Y.; Syed Mohsin, S. M.; Gul, Y.; Ramli, N. I.

    2018-03-01

    Adobe is an economical, naturally available, and environmentally friendly construction material that offers excellent thermal and sound insulation as well as good indoor air quality. It is important to understand and enhance the mechanical properties of this material, for which a high degree of variation is reported in the literature owing to the lack of research and standardization in this field. The present paper focuses first on understanding the mechanical behaviour of adobe subjected to compressive stresses as well as flexure, and then on enhancing both with steel wire mesh as reinforcement. A total of 22 samples were tested, of which 12 cube samples were tested for compressive strength, whereas 10 beam samples were tested for modulus of rupture. Half of the samples in each category were control samples, i.e., without wire mesh reinforcement, whereas the remaining half were reinforced with a single layer of wire mesh per sample. It was found that the compressive strength of adobe increases by about 43% after adding a single layer of wire mesh reinforcement. The flexural response of adobe also showed improvement with the addition of wire mesh reinforcement.

  11. Compressed sampling for boundary measurements in three-dimensional electrical impedance tomography

    International Nuclear Information System (INIS)

    Javaherian, Ashkan; Soleimani, Manuchehr

    2013-01-01

    Electrical impedance tomography (EIT) utilizes electrodes on a medium's surface to produce measured data from which the conductivity distribution inside the medium is estimated. For cases where relocation of electrodes is impractical or no a priori assumptions can be made to optimize the electrode placement, a large number of electrodes may be needed to cover the entire possible imaging volume. This may occur with dynamically varying conductivity distributions in 3D EIT. Three-dimensional EIT then requires inverting very large linear systems to calculate the conductivity field, which causes significant problems regarding storage space and reconstruction time; in addition, data acquisition for a large number of electrodes reduces the achievable frame rate, which is considered a major advantage of EIT imaging. This study proposes an idea to reduce the reconstruction complexity based on the well-known compressed sampling theory. By applying the so-called model-based CoSaMP algorithm to large-size data collected by a 256-channel system, the size of the forward operator and the data acquisition time are reduced to those of a 32-channel system, while the accuracy of reconstruction is significantly improved. The results demonstrate the great capability of compressed sampling for overcoming the challenges arising in 3D EIT. (paper)

  12. Cellular characterization of compression induced-damage in live biological samples

    Science.gov (United States)

    Bo, Chiara; Balzer, Jens; Hahnel, Mark; Rankin, Sara M.; Brown, Katherine A.; Proud, William G.

    2011-06-01

    Understanding the dysfunctions that high-intensity compression waves induce in human tissues is critical to improving acute-phase treatments and requires the development of experimental models of traumatic damage in biological samples. In this study we have developed an experimental system to directly assess the impact of dynamic loading conditions on cellular function at the molecular level. Here we present a confinement chamber designed to subject live cell cultures in a liquid environment to compression waves in the range of tens of MPa using a split Hopkinson pressure bar system. Recording the loading history and collecting the samples post-impact without external contamination allow the definition of parameters such as the pressure and duration of the stimulus that can be related to the cellular damage. The compression experiments are conducted on Mesenchymal Stem Cells from BALB/c mice and the damage analyses are compared to two control groups. Changes in stem cell viability, phenotype and function are assessed by flow cytometry and with in vitro bioassays at two different time points. Identifying the cellular and molecular mechanisms underlying the damage caused by dynamic loading in live biological samples could enable the development of new treatments for traumatic injuries.

  13. Physics Based Modeling of Compressible Turbulence

    Science.gov (United States)

    2016-11-07

    AFRL-AFOSR-VA-TR-2016-0345, Physics-Based Modeling of Compressible Turbulence, Parviz Moin, Leland Stanford Junior Univ., CA. Final report (09/13/2016) on the AFOSR project (FA9550-11-1-0111) entitled: Physics based modeling of compressible turbulence. The period of performance was June 15, 2011…

  14. Compressive Sensing of Roller Bearing Faults via Harmonic Detection from Under-Sampled Vibration Signals.

    Science.gov (United States)

    Tang, Gang; Hou, Wei; Wang, Huaqing; Luo, Ganggang; Ma, Jianwei

    2015-10-09

    The Shannon sampling principle requires substantial amounts of data to ensure the accuracy of on-line monitoring of roller bearing fault signals. Challenges are often encountered as a result of the cumbersome data monitoring; thus a novel method focused on compressed vibration signals for detecting roller bearing faults is developed in this study. Considering that harmonics often represent the fault characteristic frequencies in vibration signals, a compressive sensing frame of characteristic harmonics is proposed to detect bearing faults. A compressed vibration signal is first acquired from a sensing matrix, with information preserved through a well-designed sampling strategy. A reconstruction process of the under-sampled vibration signal is then pursued as attempts are made to detect the characteristic harmonics from sparse measurements through a compressive matching pursuit strategy. In the proposed method, bearing fault features depend on the existence of characteristic harmonics, which are typically detected directly from the compressed data well before reconstruction is complete. The processes of sampling and detection may then be performed simultaneously, without complete recovery of the under-sampled signals. The effectiveness of the proposed method is validated by simulations and experiments.
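
    The detect-before-reconstruct idea can be illustrated by correlating the compressed record against compressed candidate harmonic atoms; the frequencies, sizes, and Gaussian sensing matrix below are assumptions, not the paper's design.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    N, M = 1024, 128
    t = np.arange(N)
    f_true = 37.0 / N                   # assumed fault characteristic frequency
    x = np.cos(2 * np.pi * f_true * t) + 0.5 * rng.standard_normal(N)

    Phi = rng.standard_normal((M, N)) / np.sqrt(M)
    y = Phi @ x                         # compressed vibration record

    # Random projections approximately preserve inner products, so the true
    # harmonic stands out without reconstructing the full-length signal.
    cands = np.arange(20, 60) / N
    score = [np.hypot(y @ (Phi @ np.cos(2 * np.pi * f * t)),
                      y @ (Phi @ np.sin(2 * np.pi * f * t))) for f in cands]
    print(cands[int(np.argmax(score))] * N)   # ~37: harmonic found pre-recovery
    ```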

  15. Compressive sampling by artificial neural networks for video

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Jenkins, Jeffrey; Reinhardt, Kitt

    2011-06-01

    We describe a smart surveillance strategy for handling novelty changes. Current sensors seem to keep everything, redundant or not. The Human Visual System's Hubel-Wiesel (wavelet) edge detection mechanism pays attention to changes in movement, which naturally produce organized sparseness, because a stagnant edge is not reported to the brain's visual cortex by retinal neurons. Sparseness is defined as an ordered set of ones (movement or not) relative to zeros that can be pseudo-orthogonal among themselves and are then suited for fault-tolerant storage and retrieval by means of Associative Memory (AM). The firing is sparse at the change locations. Unlike the purely random sparse masks adopted in medical Compressive Sensing, these organized ones have the additional benefit of using the image changes to make retrievable graphical indexes. We call this organized sparseness Compressive Sampling: sensing but skipping over redundancy without altering the original image. We thus illustrate with video the survival tactics that animals roaming the Earth use daily: they acquire nothing but the space-time changes that are important to satisfy specific prey-predator relationships. We have noticed a similarity between mathematical Compressive Sensing and this biological mechanism used for survival. We have designed a hardware implementation of the Human Visual System's Compressive Sampling scheme. To speed it up further, our mixed-signal circuit design of frame differencing is built into on-chip processing hardware. A CMOS trans-conductance amplifier is designed here to generate a linear current output using a pair of differential input voltages from two photon detectors for change detection, one for the previous value and the other for the subsequent value ("write" synaptic weight by Hebbian outer products; "read" by inner product and point-nonlinearity threshold), to localize and track the threat targets.

  16. Compressive sensing-based wideband capacitance measurement with a fixed sampling rate lower than the highest exciting frequency

    International Nuclear Information System (INIS)

    Xu, Lijun; Ren, Ying; Sun, Shijie; Cao, Zhang

    2016-01-01

    In this paper, an under-sampling method for wideband capacitance measurement is proposed using the compressive sensing strategy. As the excitation signal is sparse in the frequency domain, a compressed sampling method that uses a random demodulator was adopted, which can greatly decrease the sampling rate. In addition, four switches were used to replace the multiplier in the random demodulator. As a result, not only can the sampling rate be much smaller than the signal excitation frequency, but the circuit's structure is also simpler and its power consumption lower. A hardware prototype was constructed to validate the method. In the prototype, an excitation voltage with a frequency up to 200 kHz was applied to a capacitance-to-voltage converter. The output signal of the converter was randomly modulated by a pseudo-random sequence through the four switches. After a low-pass filter, the signal was sampled by an analog-to-digital converter at a sampling rate of 50 kHz, which was three times lower than the highest exciting frequency. The frequency and amplitude of the signal were then reconstructed to obtain the measured capacitance. Both theoretical analysis and experiments were carried out to show the feasibility of the proposed method and to evaluate the performance of the prototype, including its linearity, sensitivity, repeatability, accuracy and stability within a given measurement range. (paper)
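
    A behavioural sketch of the acquisition chain: the chipping rate, filter length, and rates are assumptions; the switch-based mixer is modelled as multiplication by ±1, and the CS reconstruction step that would recover frequency and amplitude is omitted.

    ```python
    import numpy as np
    from scipy.signal import firwin, lfilter

    rng = np.random.default_rng(10)
    fs_hi, fs_lo = 600e3, 50e3        # Nyquist-grid rate vs ADC rate (assumed)
    N = 6000
    t = np.arange(N) / fs_hi
    x = np.sin(2 * np.pi * 200e3 * t) # converter output at 200 kHz excitation

    chips = rng.choice([-1.0, 1.0], size=N)  # pseudo-random switch sequence
    mixed = x * chips                        # four-switch mixer as +/-1 gain
    lp = firwin(101, fs_lo / 2, fs=fs_hi)    # low-pass filter stand-in
    low = lfilter(lp, 1.0, mixed)
    samples = low[:: int(fs_hi / fs_lo)]     # ADC at 50 kS/s: CS measurements
    print(samples.size)                      # 500 measurements for 6000 points
    ```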

  17. Harmonic analysis in integrated energy system based on compressed sensing

    International Nuclear Information System (INIS)

    Yang, Ting; Pen, Haibo; Wang, Dan; Wang, Zhaoxia

    2016-01-01

    Highlights: • We propose a harmonic/inter-harmonic analysis scheme based on compressed sensing theory. • The sparseness of harmonic signals in electrical power systems is proved. • The ratio formula for the sparsity of fundamental and harmonic components is presented. • A Spectral Projected Gradient-Fundamental Filter reconstruction algorithm is proposed. • SPG-FF enhances the precision of harmonic detection and signal reconstruction. - Abstract: The advent of Integrated Energy Systems has enabled various distributed energy sources to access the system through different power electronic devices. This development has made the harmonic environment more complex, and low-complexity, high-precision harmonic detection and analysis methods are needed to improve power quality. To address the large data storage capacities and high compression complexity required for sampling under the Nyquist framework, this paper presents a harmonic analysis scheme based on compressed sensing theory. The proposed scheme performs the functions of compressive sampling, signal reconstruction and harmonic detection simultaneously. In the proposed scheme, the sparsity of the harmonic signals in the Discrete Fourier Transform (DFT) basis is numerically calculated first. This is followed by a proof that the necessary conditions for compressed sensing are satisfied. Binary sparse measurement is then leveraged to reduce the storage space of the sampling unit. In the recovery process, a novel reconstruction algorithm called the Spectral Projected Gradient with Fundamental Filter (SPG-FF) algorithm is proposed to enhance the reconstruction precision. An actual microgrid system is used as a simulation example. The experimental results show that the proposed scheme effectively enhances the precision of harmonic and inter-harmonic detection with low computing complexity, and has good

  18. Curvelet-based compressive sensing for InSAR raw data

    Science.gov (United States)

    Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David

    2015-10-01

    The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications, collected by an airborne platform from BRADAR (the Brazilian SAR system operating in X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. A real-time capability is desirable for this framework, where the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, sparse unknown signals can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms. Therefore, the original signal can be significantly reduced. To achieve a sparse representation of the SAR signal, a curvelet transform was applied. The curvelets constitute a directional frame, which allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 x 2048 samples in range and azimuth in X-band with 2 m resolution was made available. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband. An iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was then adjusted to recover the curvelet coefficients and, from them, the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed; and because interferometry applications require higher reconstruction accuracy, phase parameters such as the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects. All results were computed in terms of sparsity analysis to provide efficient compression and quality recovery appropriate for InSAR applications.

  19. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

    Full Text Available Compression of color images is now necessary for transmission and storage in databases, since color gives a pleasing and natural appearance to any object, so three composite-technique-based color image compression schemes are implemented to achieve high compression, no loss in the original image, better performance and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W) and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the lowest values of bits per pixel (bpp), time (T) and rate distortion R(D). The values of the compression parameters of the color image are also nearly the same as the average values of the compression parameters of the three bands of the same image.
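
    As a rough illustration of the subband energy measurements such composite wavelet techniques rely on, the sketch below computes the energy share of the level-3 approximation subband with PyWavelets; the synthetic band, wavelet choice (db4) and decomposition depth are assumptions, not the paper's settings.

        import numpy as np
        import pywt  # PyWavelets

        # Smooth synthetic stand-in for one colour band.
        u = np.linspace(0, 3, 256)
        band = (np.outer(np.sin(u), np.cos(2*u))
                + 0.01*np.random.default_rng(0).standard_normal((256, 256)))

        coeffs = pywt.wavedec2(band, 'db4', level=3)   # [cA3, (cH3,cV3,cD3), ..., (cH1,cV1,cD1)]
        arrays = [coeffs[0]] + [d for level in coeffs[1:] for d in level]
        total = sum(np.sum(np.square(a)) for a in arrays)
        print("energy share of level-3 approximation:",
              np.sum(np.square(coeffs[0])) / total)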

  20. Block-Based Compressed Sensing for Neutron Radiation Image Using WDFB

    Directory of Open Access Journals (Sweden)

    Wei Jin

    2015-01-01

    Full Text Available An ideal compression method for neutron radiation images should have a high compression ratio while preserving the details of the original image. Compressed sensing (CS), which can break through the restrictions of the sampling theorem, is likely to offer an efficient compression scheme for neutron radiation images. Combining the wavelet transform with directional filter banks, a novel non-redundant multiscale geometry analysis transform named Wavelet Directional Filter Banks (WDFB) is constructed and applied to represent neutron radiation images sparsely. The block-based CS technique is then introduced and a high-performance CS scheme for neutron radiation images is proposed. By performing a two-step iterative shrinkage algorithm, the L1-norm minimization problem is solved to reconstruct the neutron radiation image from random measurements. The experimental results demonstrate that the scheme not only noticeably improves the quality of the reconstructed image but also retains more details of the original image.

  1. Wavelet-based audio embedding and audio/video compression

    Science.gov (United States)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  2. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    Science.gov (United States)

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, the blend stage and the tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for the process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. The Bayes success run theorem appeared to be the most appropriate approach among the various methods considered in this work for computing the sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. The risk-based assignment of reliability levels was supported by the fact that at a low defect rate the confidence to detect out-of-specification units decreases, which must be compensated by an increase in sample size to enhance the confidence of the estimate. Based on the level of knowledge acquired during PPQ and the level of knowledge further required to comprehend the process, the sample size for CPV was calculated using Bayesian statistics to accomplish a reduced sampling design. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
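
    The quoted sample sizes are consistent with the success run theorem, n = ln(1 - C) / ln(R), assuming a confidence level C of 95% (the confidence value is our assumption; the abstract states only the reliability levels R):

        import math

        C = 0.95  # assumed confidence level
        for risk, R in [("high", 0.99), ("medium", 0.95), ("low", 0.90)]:
            n = math.ceil(math.log(1 - C) / math.log(R))
            print(f"{risk}-risk factors: reliability {R:.0%} -> n = {n}")
        # -> 299, 59, and 29, matching the figures quoted above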

  3. Wavelet-based compression of pathological images for telemedicine applications

    Science.gov (United States)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.

  4. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  5. Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.

    Science.gov (United States)

    Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua

    2018-03-01

    To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate in k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with the conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
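
    For orientation, the sketch below shows generic SVD-based coil compression, in which multi-coil data are projected onto a few dominant virtual coils; it is illustrative only and is not the paper's shot-coil compression method.

        import numpy as np

        rng = np.random.default_rng(1)
        n_coils, n_samples, n_virtual = 32, 4096, 8          # assumed dimensions
        data = (rng.standard_normal((n_coils, n_samples))
                + 1j*rng.standard_normal((n_coils, n_samples)))  # stand-in k-space data

        U, s, _ = np.linalg.svd(data, full_matrices=False)
        compressed = U[:, :n_virtual].conj().T @ data        # n_virtual x n_samples
        print("energy retained:", np.sum(s[:n_virtual]**2) / np.sum(s**2))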

  6. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of the DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
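
    The core intuition of assigning short binary codes to DNA bases can be sketched in a few lines; this toy packs one base into 2 bits and is far simpler than DNABIT Compress itself, which assigns codes to multi-base segments and repeats.

        # Toy 2-bits-per-base packer (illustrative, not the DNABIT code assignment).
        CODE = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}
        BASE = {v: k for k, v in CODE.items()}

        def pack(seq: str) -> bytes:
            bits = 0
            for ch in seq:
                bits = (bits << 2) | CODE[ch]
            return len(seq).to_bytes(4, 'big') + bits.to_bytes((2*len(seq) + 7) // 8, 'big')

        def unpack(blob: bytes) -> str:
            n = int.from_bytes(blob[:4], 'big')
            bits = int.from_bytes(blob[4:], 'big')
            return ''.join(BASE[(bits >> 2*(n - 1 - i)) & 0b11] for i in range(n))

        s = "ACGTACGGTTAC"
        assert unpack(pack(s)) == s
        print(len(pack(s)), "bytes for", len(s), "bases")  # 2 bits/base plus a length header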

  7. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    Science.gov (United States)

    Elahi, Sana; kaleem, Muhammad; Omer, Hammad

    2018-01-01

    Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space, which significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct MR images from under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced by the undersampling in k-space. This paper introduces an improved iterative algorithm based on the p-thresholding technique for CS-MRI image reconstruction. The use of a p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding-based iterative algorithm is a modification of ISTA and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover the fully sampled image from under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log-, soft- and hard-thresholding techniques at different reduction factors.
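
    The abstract does not give the exact p-thresholding function; one common generalized shrinkage operator of this family (sometimes called p-shrinkage) is sketched below, and it reduces to ordinary soft thresholding at p = 1.

        import numpy as np

        def p_shrink(g, lam, p):
            """Generalized shrinkage; p = 1 gives soft thresholding, p < 1 shrinks
            small coefficients harder while leaving large ones nearly intact."""
            mag = np.abs(g)
            with np.errstate(divide='ignore', invalid='ignore'):
                shrunk = np.maximum(mag - lam**(2 - p) * np.power(mag, p - 1), 0)
            return np.sign(g) * np.where(mag > 0, shrunk, 0.0)

        g = np.linspace(-2, 2, 9)
        print(p_shrink(g, lam=0.5, p=1.0))   # ordinary soft thresholding
        print(p_shrink(g, lam=0.5, p=0.5))   # non-convex p < 1 variant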

  8. n-Gram-Based Text Compression

    Science.gov (United States)

    Duong, Hieu N.; Snasel, Vaclav

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It achieves a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, the proposed method first splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five-gram to obtain the best encoding stream. Each n-gram is encoded by two to four bytes according to its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five-gram, yielding dictionaries with a total size of 12 GB. In order to evaluate our method, we collected a testing set of 10 text files of different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods. PMID:27965708

  9. n-Gram-Based Text Compression

    Directory of Open Access Journals (Sweden)

    Vu H. Nguyen

    2016-01-01

    Full Text Available We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It achieves a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, the proposed method first splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five-gram to obtain the best encoding stream. Each n-gram is encoded by two to four bytes according to its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five-gram, yielding dictionaries with a total size of 12 GB. In order to evaluate our method, we collected a testing set of 10 text files of different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods.
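
    A toy sketch of the dictionary lookup at the heart of such n-gram coders; the dictionary entries, greedy longest-match policy and integer codes are illustrative assumptions, and the paper's sliding-window search and two-to-four-byte layout are more elaborate.

        # Hypothetical n-gram dictionary mapping phrases to integer codes.
        dictionary = {"xin chao": 0, "cam on": 1, "hello": 2}
        inverse = {v: k for k, v in dictionary.items()}

        def encode(text: str) -> list[int]:
            words, out, i = text.split(), [], 0
            while i < len(words):
                for j in range(len(words), i, -1):      # try the longest n-gram first
                    gram = " ".join(words[i:j])
                    if gram in dictionary:
                        out.append(dictionary[gram])
                        i = j
                        break
                else:
                    raise KeyError(f"no dictionary entry covers {words[i]!r}")
            return out

        codes = encode("hello xin chao")
        print(codes, "->", " ".join(inverse[c] for c in codes))  # [2, 0] -> hello xin chao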

  10. Light-weight reference-based compression of FASTQ data.

    Science.gov (United States)

    Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan

    2015-06-09

    The exponential growth of next-generation sequencing (NGS) data has posed big challenges to data storage, management and archiving. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to those not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm, LW-FQZip, to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which redundant information is identified and eliminated independently. In particular, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to rapidly map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with some general-purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201, comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression and contributes to the state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.

  11. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    Science.gov (United States)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer from data expansion when adopting nonlinear transformations directly. To overcome these weaknesses and reduce the transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and simplifies key distribution as a nonlinear encryption system. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.

  12. EPC: A Provably Secure Permutation Based Compression Function

    DEFF Research Database (Denmark)

    Bagheri, Nasour; Gauravaram, Praveen; Naderi, Majid

    2010-01-01

    The security of permutation-based hash functions in the ideal permutation model has been studied when the input-length of compression function is larger than the input-length of the permutation function. In this paper, we consider permutation based compression functions that have input lengths sh...

  13. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical images has grown rapidly. However, the commonly used compression methods cannot achieve satisfactory results. Methods: In this paper, based on existing experimental results and conclusions, the lifting-step approach is used for wavelet decomposition. The physical and anatomic structure of human vision is taken into account, the contrast sensitivity function (CSF) is introduced as the main research issue in the human visual system (HVS), and the main design points of the HVS model are presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS, including the CSF characteristics, to the correlation-removing transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments are performed on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT, with respect to the PSNR metric, is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality, and performs better than SPIHT in terms of compression ratio and coding/decoding time.

  14. HVS-based medical image compression

    International Nuclear Information System (INIS)

    Kai Xie; Jie Yang; Min Zhuyue; Liang Lixiao

    2005-01-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical images has grown rapidly. However, the commonly used compression methods cannot achieve satisfactory results. Methods: In this paper, based on existing experimental results and conclusions, the lifting-step approach is used for wavelet decomposition. The physical and anatomic structure of human vision is taken into account, the contrast sensitivity function (CSF) is introduced as the main research issue in the human visual system (HVS), and the main design points of the HVS model are presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS, including the CSF characteristics, to the correlation-removing transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments are performed on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT, with respect to the PSNR metric, is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality, and performs better than SPIHT in terms of compression ratio and coding/decoding time.

  15. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal

    2010-09-01

    The size of embedded software is increasing at a rapid pace. It is often challenging and time consuming to fit an amount of required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique that is based on Huffman coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique to improve the final compression ratio by taking advantage of both previous techniques. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of re-encoding unused bits (we call them re-encodable bits) in the instruction format for a specific application to improve the compression ratio. Re-encoding those bits can reduce the size of the decoding tables by up to 40%. Using this technique, we improve the final compression ratios in comparison to the first technique to 46% and 45% for ARM and MIPS, respectively (including all incurred overhead). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and we have applied each technique to two major embedded processor architectures.
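
    For context, the basic Huffman-table construction that these techniques build on can be sketched as follows; the instruction splitting and re-encoding steps described above happen before this stage and are not reproduced here.

        import heapq
        from collections import Counter

        def huffman_codes(data):
            """Minimal prefix-code table (illustrative, not the article's pipeline)."""
            heap = [[freq, [sym, ""]] for sym, freq in Counter(data).items()]
            heapq.heapify(heap)
            if len(heap) == 1:                       # degenerate single-symbol input
                return {heap[0][1][0]: "0"}
            while len(heap) > 1:
                lo = heapq.heappop(heap)
                hi = heapq.heappop(heap)
                for pair in lo[1:]:
                    pair[1] = "0" + pair[1]          # prepend bit for the left branch
                for pair in hi[1:]:
                    pair[1] = "1" + pair[1]          # prepend bit for the right branch
                heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
            return dict(heapq.heappop(heap)[1:])

        text = "instruction pattern stream"
        table = huffman_codes(text)
        bits = "".join(table[c] for c in text)
        print(len(bits), "bits vs", 8 * len(text), "bits uncompressed")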

  16. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that “DNABIT Compress” is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of the DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923

  17. Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions

    Energy Technology Data Exchange (ETDEWEB)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Vane, Zachary Phillips; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2017-07-01

    Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which are often encountered in uncertainty quantification analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers of l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of the regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of a supersonic turbulent jet-in-crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance under structures induced by polynomial chaos, accuracy and computational tradeoffs between polynomial bases of different degrees, and the practicability of conducting compressive sensing for a realistic, high-dimensional physical application. Across the test cases studied in this paper, we find ADMM to demonstrate empirical advantages through consistently lower errors and faster computational times.

  18. The effects of aging on compressive strength of low-level radioactive waste form samples

    International Nuclear Information System (INIS)

    McConnell, J.W. Jr.; Neilson, R.M. Jr.

    1996-06-01

    The Field Lysimeter Investigations: Low-Level Waste Data Base Development Program, funded by the US Nuclear Regulatory Commission (NRC), is (a) studying the degradation effects in organic ion-exchange resins caused by radiation, (b) examining the adequacy of test procedures recommended in the Branch Technical Position on Waste Form to meet the requirements of 10 CFR 61 using solidified ion-exchange resins, (c) obtaining performance information on solidified ion-exchange resins in a disposal environment, and (d) determining the condition of liners used to dispose of ion-exchange resins. Compressive tests were performed periodically over a 12-year period as part of the Technical Position testing, and the results of that compressive testing are presented and discussed. During the study, both portland type I-II cement and Dow vinyl ester-styrene waste form samples were tested. This testing was designed to examine the effects of aging, caused by self-irradiation, on the compressive strength of the waste forms. Also presented is a brief summary of the results of waste form characterization, which was conducted in 1986 using tests recommended in the Technical Position on Waste Form. The aging test results are compared to the results of those earlier tests. 14 refs., 52 figs., 5 tabs

  19. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    Science.gov (United States)

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs in both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices, i.e., the deterministic quasi-cyclic array code (QCAC) matrix and the s-sparse random binary matrix (s-SRBM), are exploited. We demonstrate that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architecture designs are proposed for the QCAC-CS and s-SRBM encoders with reduced area and total power consumption.
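
    The sparse-matrix idea can be sketched directly: every column of the measurement matrix carries a fixed, small number of ones, so a hardware encoder needs only a few additions per input sample. The random construction below is illustrative; the paper's QCAC matrix is deterministic.

        import numpy as np

        def sparse_binary_matrix(m, n, s, seed=0):
            """Random m x n binary matrix with exactly s ones per column."""
            rng = np.random.default_rng(seed)
            Phi = np.zeros((m, n), dtype=np.int8)
            for col in range(n):
                Phi[rng.choice(m, size=s, replace=False), col] = 1
            return Phi

        Phi = sparse_binary_matrix(m=64, n=256, s=3)
        x = np.random.default_rng(1).standard_normal(256)   # stand-in neural samples
        y = Phi @ x                                         # m compressed measurements
        print(Phi.sum(axis=0)[:8])                          # every column sums to s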

  20. Disk-based compression of data from genome sequencing.

    Science.gov (United States)

    Grabowski, Szymon; Deorowicz, Sebastian; Roguski, Łukasz

    2015-05-01

    High-coverage sequencing data have significant, yet hard to exploit, redundancy. Most FASTQ compressors cannot efficiently compress the DNA stream of large datasets, since the redundancy between overlapping reads cannot be easily captured in the (relatively small) main memory. The more interesting solutions to this problem are disk-based; the better of these two, from Cox et al. (2012), is based on the Burrows-Wheeler transform (BWT) and achieves 0.518 bits per base for a 134.0 Gbp human genome sequencing collection with almost 45-fold coverage. We propose overlapping reads compression with minimizers, a compression algorithm dedicated to sequencing reads (DNA only). Our method makes use of the conceptually simple and easily parallelizable idea of minimizers to obtain a compression ratio of 0.317 bits per base, allowing the 134.0 Gbp dataset to fit into only 5.31 GB of space. Availability: http://sun.aei.polsl.pl/orcom under a free license. Contact: sebastian.deorowicz@polsl.pl. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
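
    The minimizer idea fits in a few lines: the minimizer of a read is its lexicographically smallest k-mer, and reads that overlap substantially tend to share one, so binning by minimizer groups similar reads for joint compression. The reads, k value and bin layout below are illustrative, not the tool's on-disk format.

        def minimizer(read: str, k: int = 8) -> str:
            return min(read[i:i + k] for i in range(len(read) - k + 1))

        reads = ["TTGGACGTACGTGG", "GGACGTACGTGGTT", "CCCCGGGGTTTTAA"]
        bins: dict[str, list[str]] = {}
        for r in reads:
            bins.setdefault(minimizer(r), []).append(r)
        print(bins)  # the two overlapping reads share the minimizer "ACGTACGT"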

  1. Near-field acoustic holography using sparse regularization and compressive sampling principles.

    Science.gov (United States)

    Chardon, Gilles; Daudet, Laurent; Peillot, Antoine; Ollivier, François; Bertin, Nancy; Gribonval, Rémi

    2012-09-01

    Regularization of the inverse problem is a complex issue when using near-field acoustic holography (NAH) techniques to identify vibrating sources. This paper shows that, for convex homogeneous plates with arbitrary boundary conditions, alternative regularization schemes can be developed based on the sparsity of the normal velocity of the plate in a well-designed basis, i.e., the possibility of approximating it as a weighted sum of a few elementary basis functions. In particular, these techniques can handle discontinuities of the velocity field at the boundaries, which can be problematic with standard techniques. This comes at the cost of a higher computational complexity to solve the associated optimization problem, though it remains easily tractable with out-of-the-box software. Furthermore, this sparsity framework allows us to take advantage of the concept of compressive sampling: under some conditions on the sampling process (here, the design of a random array, which can be numerically and experimentally validated), it is possible to reconstruct the sparse signals with significantly fewer measurements (i.e., microphones) than classically required. After introducing the different concepts, this paper presents numerical and experimental results of NAH with two plate geometries, and compares the advantages and limitations of these sparsity-based techniques over standard Tikhonov regularization.

  2. Hardware compression using common portions of data

    Science.gov (United States)

    Chang, Jichuan; Viswanathan, Krishnamurthy

    2015-03-24

    Methods and devices are provided for data compression. Data compression can include receiving a plurality of data chunks, sampling at least some of the plurality of data chunks, extracting a common portion from a number of the plurality of data chunks based on the sampling, and storing a remainder of the plurality of data chunks in memory.

  3. Identification of Coupled Map Lattice Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Dong Xie

    2016-01-01

    Full Text Available A novel approach for the parameter identification of coupled map lattices (CML) based on compressed sensing is presented in this paper. We establish a meaningful connection between these two seemingly unrelated study topics and identify the weighted parameters using the relevant recovery algorithms in compressed sensing. Specifically, we first transform the parameter identification problem of a CML into the sparse recovery problem of an underdetermined linear system. In fact, compressed sensing provides a feasible method to solve an underdetermined linear system if the sensing matrix satisfies some suitable conditions, such as the restricted isometry property (RIP) and mutual coherence. We then give a lower bound on the mutual coherence of the coefficient matrix generated by the observed values of the CML and also prove that it satisfies the RIP from a theoretical point of view. If the weight vector of each element is sparse in the CML system, our proposed approach can recover all the weighted parameters using only about M samplings, far fewer than the number of lattice elements N. Another significant advantage is that our approach remains effective even if the observed data are contaminated with certain types of noise. In the simulations, we mainly show the effects of the coupling parameter and noise on the recovery rate.

  4. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    Directory of Open Access Journals (Sweden)

    Jin Li

    2014-01-01

    Full Text Available A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on board a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the Slepian-Wolf (SW) DSC strategy based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches.

  5. Multispectral image compression based on DSC combined with CCSDS-IDC.

    Science.gov (United States)

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on board a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the Slepian-Wolf (SW) DSC strategy based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches.

  6. Wavelet transform and Huffman coding based electrocardiogram compression algorithm: Application to telecardiology

    International Nuclear Information System (INIS)

    Chouakri, S A; Djaafri, O; Taleb-Ahmed, A

    2013-01-01

    We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at its transmission via telecommunication channels. Basically, the proposed ECG compression algorithm is articulated around the use of the wavelet transform, leading to low/high frequency component separation, high-order-statistics-based thresholding using a level-adjusted kurtosis value to denoise the ECG signal, and then a linear predictive coding filter applied to the wavelet coefficients, producing a lower-variance signal. The latter is coded using Huffman encoding, yielding an optimal coding length in terms of the average number of bits per sample. At the receiver end, under the assumption of an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, inverse linear predictive coding filtering and the inverse discrete wavelet transform, leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database, including different cardiac anomalies as well as normal ECG signals. The obtained results are evaluated in terms of compression ratio and mean square error, which are around 1:8 and 7%, respectively. Besides the numerical evaluation, visual inspection demonstrates the high quality of the restored ECG signal, where the different ECG waves are recovered correctly.

  7. Hyperspectral image compressing using wavelet-based method

    Science.gov (United States)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, such imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years, and compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method mainly consists of three steps. First, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix of the hyperspectral images between different bands. Then a wavelet-based algorithm is applied to each subspace. Finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.

  8. Compressive sensing based wireless sensor for structural health monitoring

    Science.gov (United States)

    Bao, Yuequan; Zou, Zilong; Li, Hui

    2014-03-01

    Data loss is a common problem for monitoring systems based on wireless sensors. Reliable communication protocols, which enhance communication reliability by repetitively transmitting unreceived packets, are one approach to tackling the problem of data loss. An alternative approach allows data loss to some extent and seeks to recover the lost data from an algorithmic point of view. Compressive sensing (CS) provides such a data-loss recovery technique. This technique can be embedded into smart wireless sensors and effectively increases wireless communication reliability without retransmitting the data. The basic idea of the CS-based approach is that, instead of transmitting the raw signal acquired by the sensor, a transformed signal, generated by projecting the raw signal onto a random matrix, is transmitted. Some data loss may occur during the transmission of this transformed signal. However, according to CS theory, the raw signal can be effectively reconstructed from the received incomplete transformed signal, given that the raw signal is compressible in some basis and the data loss ratio is low. This CS-based technique is implemented on the Imote2 smart sensor platform using the foundation of the Illinois Structural Health Monitoring Project (ISHMP) Service Tool-suite. To overcome the constraints of the limited onboard resources of wireless sensor nodes, a method called the random demodulator (RD) is employed to provide a memory- and power-efficient construction of the random sampling matrix. The RD sampling matrix is adapted to accommodate data loss in wireless transmission and to meet the objectives of the data recovery. The embedded program is tested in a series of sensing and communication experiments. Examples and a parametric study are presented to demonstrate the applicability of the embedded program as well as the efficacy of CS-based data-loss recovery for real wireless SHM systems.

  9. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Zhouzhou Liu

    2015-01-01

    Full Text Available To address the problems of low compression ratio and high communication energy consumption in wireless-network microseismic monitoring, this paper proposes a segmented compression algorithm based on the characteristics of microseismic signals and compressive sensing (CS) theory, applied in the transmission process. The algorithm segments the collected data according to the number of nonzero elements and, by reducing the number of combinations of nonzero elements within each segment, improves the accuracy of signal reconstruction, while taking advantage of compressive sensing theory to achieve a high compression ratio for the signal. Experimental results show that, with the quantum chaos immune clone reconstruction (Q-CSDR) algorithm as the reconstruction algorithm, when the signal sparsity is higher than 40 the signal can be compressed at a compression ratio of more than 0.4 with a mean square error of less than 0.01, prolonging the network lifetime by a factor of two.

  10. Binaural model-based dynamic-range compression.

    Science.gov (United States)

    Ernst, Stephan M A; Kortlang, Steffen; Grimm, Giso; Bisitz, Thomas; Kollmeier, Birger; Ewert, Stephan D

    2018-01-26

    Binaural cues such as interaural level differences (ILDs) are used to organise auditory perception and to segregate sound sources in complex acoustical environments. In bilaterally fitted hearing aids, dynamic-range compression operating independently at each ear potentially alters these ILDs, thus distorting binaural perception and sound source segregation. A binaurally-linked, model-based, fast-acting dynamic compression algorithm designed to approximate the normal-hearing basilar membrane (BM) input-output function in hearing-impaired listeners is suggested. A multi-center evaluation in comparison with an alternative binaural and two bilateral fittings was performed to assess the effect of binaural synchronisation on (a) speech intelligibility and (b) perceived quality in realistic conditions. Thirty and 12 hearing-impaired (HI) listeners were individually fitted with the algorithms for the two experimental parts, respectively. A small preference towards the proposed model-based algorithm was found in the direct quality comparison. However, no binaural-synchronisation benefit was found for speech intelligibility, suggesting a dominant role of the better ear in all experimental conditions. The suggested binaural synchronisation of compression algorithms showed a limited effect on the tested outcome measures; however, linking could be situationally beneficial for preserving a natural binaural perception of the acoustical environment.

  11. A reweighted ℓ1-minimization based compressed sensing for the spectral estimation of heart rate variability using the unevenly sampled data.

    Directory of Open Access Journals (Sweden)

    Szi-Wen Chen

    Full Text Available In this paper, a reweighted ℓ1-minimization based Compressed Sensing (CS) algorithm incorporating the Integral Pulse Frequency Modulation (IPFM) model for spectral estimation of HRV is introduced. Known as a novel sensing/sampling paradigm, the theory of CS asserts that certain signals considered sparse or compressible can be reconstructed from substantially fewer measurements than those required by traditional methods. Our study aims to employ a novel reweighted ℓ1-minimization CS method for deriving the spectrum of the modulating signal of the IPFM model from incomplete RR measurements for HRV assessment. To evaluate the performance of HRV spectral estimation, a quantitative measure, referred to as the Percent Error Power (PEP), which measures the percentage difference between the true spectrum and the spectrum derived from the incomplete RR dataset, was used. We studied the performance of spectral reconstruction from incomplete simulated and real HRV signals by experimentally truncating a number of RR data in the top portion, in the bottom portion, and in a random order from the original RR column vector. As a result, for up to 20% data truncation/loss the proposed reweighted ℓ1-minimization CS method produced, on average, 2.34%, 2.27%, and 4.55% PEP in the top, bottom, and random data-truncation cases, respectively, on Autoregressive (AR) model derived simulated HRV signals. Similarly, for up to 20% data loss the proposed method produced 5.15%, 4.33%, and 0.39% PEP in the top, bottom, and random data-truncation cases, respectively, on a real HRV database drawn from PhysioNet. Moreover, results generated by a number of intensive numerical experiments all indicated that the reweighted ℓ1-minimization CS method always achieved the most accurate and highest-fidelity HRV spectral estimates in every aspect, compared with the ℓ1-minimization based method and Lomb's method used for estimating the spectrum of HRV from incomplete RR measurements.
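
    For orientation, a generic reweighted ℓ1 loop in the style of Candes, Wakin and Boyd is sketched below, using a weighted iterative soft-thresholding inner solver; the problem sizes, regularization and weight-update constants are assumptions and do not reproduce the paper's IPFM-based estimator.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, k = 200, 80, 5
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        y = A @ x_true

        w = np.ones(n)                        # start from plain l1 (uniform weights)
        L = np.linalg.norm(A, 2)**2
        x = np.zeros(n)
        for outer in range(4):                # reweighting rounds
            for _ in range(300):              # weighted ISTA inner iterations
                g = x + A.T @ (y - A @ x) / L
                x = np.sign(g) * np.maximum(np.abs(g) - 1e-3*w/L, 0)
            w = 1.0 / (np.abs(x) + 1e-2)      # penalize small coefficients harder
        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))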

  12. Implementation of a compressive sampling scheme for wireless sensors to achieve energy efficiency in a structural health monitoring system

    Science.gov (United States)

    O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.

    2013-04-01

    Wireless sensors have emerged to offer low-cost sensors with impressive functionality (e.g., data acquisition, computing, and communication) and modular installations. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost, as huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high-rate (above Nyquist) uniform sampling and storage of the entire target signal, followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stages, thus removing the need for storage and operation on the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, similarly exploited by several traditional compression applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random) consistent with the compressed sensing framework, often confining evaluation of CS techniques to simulation and post-processing. The research presented here describes the implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring that data analyses on the compressed data remain accurate.

  13. Compressed sensing along physically plausible sampling trajectories in MRI

    International Nuclear Information System (INIS)

    Chauffert, Nicolas

    2015-01-01

    Magnetic Resonance Imaging (MRI) is a non-invasive and non-ionizing imaging technique that provides images of body tissues, using the contrast sensitivity coming from the magnetic parameters (T_1, T_2 and proton density). Data are acquired in κ-space, corresponding to spatial Fourier frequencies. Because of physical constraints, the displacement in κ-space is subject to kinematic constraints: magnetic field gradients and their temporal derivatives are upper bounded. Hence, the scanning time increases with the image resolution. Decreasing scanning time is crucial to improve patient comfort, decrease exam costs, limit image distortions (e.g., created by patient movement), or decrease temporal resolution in functional MRI. Reducing scanning time can be addressed by Compressed Sensing (CS) theory, a technique that guarantees the perfect recovery of an image from undersampled data in κ-space, by assuming that the image is sparse in a wavelet basis. Unfortunately, CS theory cannot be directly cast to the MRI setting. The reasons are: (i) the acquisition (Fourier) and representation (wavelet) bases are coherent and (ii) the sampling schemes obtained using CS theorems are composed of isolated measurements and cannot be realistically implemented by magnetic field gradients, as the sampling is usually performed along continuous or more regular curves. In this thesis, we aim to develop theoretical tools to apply CS to MRI and other modalities. On the one hand, we propose a variable density sampling theory to answer the first impediment: the more information a sample contains, the more likely it is to be drawn. On the other hand, we propose sampling schemes and design sampling trajectories that fulfill acquisition constraints while traversing κ-space with the sampling density advocated by the theory. The second point is complex and is thus addressed step by step

  14. DNABIT Compress – Genome compression algorithm

    OpenAIRE

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress” for DNA sequences based on a novel algorithm of assigning binary bits for smaller segments of DNA bases to compress both repetitive and non repetitive DNA sequence. Our ...

  15. StirMark Benchmark: audio watermarking attacks based on lossy compression

    Science.gov (United States)

    Steinebach, Martin; Lang, Andreas; Dittmann, Jana

    2002-04-01

    StirMark Benchmark is a well-known evaluation tool for watermarking robustness. Additional attacks are added to it continuously. To enable application based evaluation, in our paper we address attacks against audio watermarks based on lossy audio compression algorithms to be included in the test environment. We discuss the effect of different lossy compression algorithms like MPEG-2 audio Layer 3, Ogg or VQF on a selection of audio test data. Our focus is on changes regarding the basic characteristics of the audio data like spectrum or average power and on removal of embedded watermarks. Furthermore we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psycho acoustic model has to be applied. Our model can be used for audio watermarking evaluation of numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.

  16. A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node

    Directory of Open Access Journals (Sweden)

    Kan Luo

    2018-01-01

    Full Text Available Energy efficiency remains the main obstacle to long-term real-time wireless ECG monitoring. In this paper, a digital compressed sensing- (CS-) based single-spot Bluetooth ECG node is proposed to deal with this challenge in wireless ECG applications. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node, which consists of an ultra-low-power analog front-end, a microcontroller, a Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node’s specifics are evidenced by experiments using ECG signals sampled by the proposed node during the daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially without distortion, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the energy consumption of the radio decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption.
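
    A minimal sketch of the node-side measurement step with a sparse binary matrix, plus the PRD metric quoted above (sizes and sparsity are illustrative; BSBL recovery happens on the receiver side and is omitted):

        import numpy as np

        rng = np.random.default_rng(1)
        N, M, d = 512, 256, 12   # frame length, measurements, ones per column

        # Sparse binary sensing matrix: y = Phi @ x reduces to additions,
        # which is what makes the scheme cheap on a microcontroller.
        Phi = np.zeros((M, N))
        for j in range(N):
            Phi[rng.choice(M, size=d, replace=False), j] = 1.0

        x = np.sin(2 * np.pi * np.arange(N) / 64)  # stand-in for an ECG frame
        y = Phi @ x                                # measurements sent over Bluetooth

        def prd(x, x_hat):
            """Percentage root-mean-square difference (smaller is better)."""
            return 100.0 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)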

  17. A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node.

    Science.gov (United States)

    Luo, Kan; Cai, Zhipeng; Du, Keqin; Zou, Fumin; Zhang, Xiangyu; Li, Jianqing

    2018-01-01

    Energy efficiency remains the main obstacle to long-term real-time wireless ECG monitoring. In this paper, a digital compressed sensing- (CS-) based single-spot Bluetooth ECG node is proposed to deal with this challenge in wireless ECG applications. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node, which consists of an ultra-low-power analog front-end, a microcontroller, a Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node's specifics are evidenced by experiments using ECG signals sampled by the proposed node during the daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially without distortion, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the energy consumption of the radio decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption.

  18. Research of Block-Based Motion Estimation Methods for Video Compression

    Directory of Open Access Journals (Sweden)

    Tropchenko Andrey

    2016-08-01

    Full Text Available This work is a review of the block-based algorithms used for motion estimation in video compression. It surveys different types of block-based algorithms, ranging from the simplest, Full Search, to fast adaptive algorithms such as Hierarchical Search. The algorithms evaluated in this paper are widely accepted by the video compression community and have been used in implementing various standards, such as MPEG-4 Visual and H.264. The work also presents a very brief introduction to the entire flow of video compression.
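
    As a compact version of the Full Search baseline reviewed above, using the sum of absolute differences (SAD) criterion (block size and search radius are illustrative):

        import numpy as np

        def full_search(ref, cur, bx, by, bsize=16, radius=7):
            """Exhaustive SAD search for the motion vector of one block."""
            block = cur[by:by + bsize, bx:bx + bsize].astype(np.int32)
            best_sad, best_mv = None, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + bsize > ref.shape[0] \
                            or x + bsize > ref.shape[1]:
                        continue
                    cand = ref[y:y + bsize, x:x + bsize].astype(np.int32)
                    sad = np.abs(cand - block).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best_mv = sad, (dx, dy)
            return best_mv, best_sad

    Faster methods such as Hierarchical Search visit only a subset of these candidate offsets, trading a small loss in match quality for a large reduction in comparisons.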

  19. A Compressed Sensing-Based Wearable Sensor Network for Quantitative Assessment of Stroke Patients

    Directory of Open Access Journals (Sweden)

    Lei Yu

    2016-02-01

    Full Text Available Clinical rehabilitation assessment is an important part of the therapy process because it is the premise for prescribing suitable rehabilitation interventions. However, the commonly used assessment scales have the following two drawbacks: (1) they are susceptible to subjective factors; (2) they only have several rating levels and are influenced by a ceiling effect, making it impossible to exactly detect any further improvement in the movement. Meanwhile, energy constraints are a primary design consideration in wearable sensor network systems since they are often battery-operated. Traditionally, for wearable sensor network systems that follow the Shannon/Nyquist sampling theorem, there are many data that need to be sampled and transmitted. This paper proposes a novel wearable sensor network system to monitor and quantitatively assess upper limb motion function, based on compressed sensing technology. With the sparse representation model, less data is transmitted to the computer than with traditional systems. The experimental results show that the accelerometer signals of Bobath handshake and shoulder touch exercises can be compressed, and the length of the compressed signal is less than 1/3 of the raw signal length. More importantly, the reconstruction errors have no influence on the predictive accuracy of the Brunnstrom stage classification model. The results also indicate that the proposed system can not only reduce the amount of data during the sampling and transmission processes, but also that the reconstructed accelerometer signals can be used for quantitative assessment without any loss of useful information.

  20. A Compressed Sensing-Based Wearable Sensor Network for Quantitative Assessment of Stroke Patients

    Science.gov (United States)

    Yu, Lei; Xiong, Daxi; Guo, Liquan; Wang, Jiping

    2016-01-01

    Clinical rehabilitation assessment is an important part of the therapy process because it is the premise for prescribing suitable rehabilitation interventions. However, the commonly used assessment scales have the following two drawbacks: (1) they are susceptible to subjective factors; (2) they only have several rating levels and are influenced by a ceiling effect, making it impossible to exactly detect any further improvement in the movement. Meanwhile, energy constraints are a primary design consideration in wearable sensor network systems since they are often battery-operated. Traditionally, for wearable sensor network systems that follow the Shannon/Nyquist sampling theorem, there are many data that need to be sampled and transmitted. This paper proposes a novel wearable sensor network system to monitor and quantitatively assess upper limb motion function, based on compressed sensing technology. With the sparse representation model, less data is transmitted to the computer than with traditional systems. The experimental results show that the accelerometer signals of Bobath handshake and shoulder touch exercises can be compressed, and the length of the compressed signal is less than 1/3 of the raw signal length. More importantly, the reconstruction errors have no influence on the predictive accuracy of the Brunnstrom stage classification model. The results also indicate that the proposed system can not only reduce the amount of data during the sampling and transmission processes, but also that the reconstructed accelerometer signals can be used for quantitative assessment without any loss of useful information. PMID:26861337

  1. Fractal Image Compression Based on High Entropy Values Technique

    Directory of Open Access Journals (Sweden)

    Douaa Younis Abbaas

    2018-04-01

    Full Text Available Many attempts have been made to improve the encoding stage of fractal image compression (FIC), because it is time-consuming. These attempts reduce the size of the search pool for range-domain matching, but most of them degrade the quality or lower the compression ratio of the reconstructed image. This paper presents a method to improve the performance of the full search algorithm by combining FIC (lossy compression) with a lossless technique (in this case, entropy coding). The entropy technique reduces the size of the domain pool (i.e., the number of domain blocks) based on the entropy values of the range and domain blocks. The results of the full search algorithm and of the proposed entropy-based algorithm are then compared to determine which gives the better results, such as reduced encoding time with acceptable values of both compression quality parameters: C.R (Compression Ratio) and PSNR (Image Quality). The experimental results prove that the proposed entropy technique reduces the encoding time while keeping the compression ratio and the reconstructed image quality as good as possible.
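
    The pruning criterion above is per-block entropy; a small sketch of computing it for 8-bit blocks and keeping only domain blocks whose entropy is close to that of a given range block (the tolerance is illustrative):

        import numpy as np

        def block_entropy(block):
            """Shannon entropy (bits/pixel) of an 8-bit image block."""
            hist = np.bincount(block.ravel(), minlength=256).astype(np.float64)
            p = hist[hist > 0] / hist.sum()
            return -np.sum(p * np.log2(p))

        def candidate_domains(range_block, domain_blocks, tol=0.25):
            """Shrink the domain pool to blocks with similar entropy."""
            e_r = block_entropy(range_block)
            return [d for d in domain_blocks
                    if abs(block_entropy(d) - e_r) < tol]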

  2. Compression-based inference on graph data

    NARCIS (Netherlands)

    Bloem, P.; van den Bosch, A.; Heskes, T.; van Leeuwen, D.

    2013-01-01

    We investigate the use of compression-based learning on graph data. General purpose compressors operate on bitstrings or other sequential representations. A single graph can be represented sequentially in many ways, which may influence the performance of sequential compressors. Using Normalized
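
    The abstract is cut off after "Using Normalized", but compression-based learning of this kind commonly relies on the Normalized Compression Distance (NCD); a minimal sketch with zlib over sequential graph representations (the edge-list serialization is an illustrative choice, and is exactly the kind of representational decision the abstract says can influence results):

        import zlib

        def clen(b: bytes) -> int:
            return len(zlib.compress(b, 9))

        def ncd(x: bytes, y: bytes) -> float:
            """Normalized Compression Distance: small when x, y share structure."""
            cx, cy, cxy = clen(x), clen(y), clen(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        # Two sequential (edge-list) representations of small graphs:
        g1 = b"0-1 1-2 2-3 3-0 0-2"
        g2 = b"0-1 1-2 2-3 3-0 1-3"
        print(ncd(g1, g2))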

  3. Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique

    Science.gov (United States)

    Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi

    2013-09-01

    Based on direct exposure measurements from a flash radiographic image, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse reconstruction technique orthogonal matching pursuit (OMP) is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. The solver has three features: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capabilities. (2) The forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator. (3) Fourier and Daubechies wavelet transforms are adopted to convert an underdetermined system into a well-posed system in the algorithm. Simulations are performed, and the numerical results for the pseudo-sine absorption, two-cube, and two-cylinder problems obtained with the compressive sensing-based solver agree well with the reference values.
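
    A bare-bones version of orthogonal matching pursuit, the sparse recovery routine named above (a generic implementation, not the solver's own code):

        import numpy as np

        def omp(A, y, k):
            """Recover a k-sparse x from y = A @ x by greedy column selection."""
            residual, support = y.astype(np.float64).copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching column
                support.append(j)
                As = A[:, support]
                coef, *_ = np.linalg.lstsq(As, y, rcond=None)
                residual = y - As @ coef
            x = np.zeros(A.shape[1])
            x[support] = coef
            return x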

  4. Study of key technology of ghost imaging via compressive sensing for a phase object based on phase-shifting digital holography

    International Nuclear Information System (INIS)

    Leihong, Zhang; Dong, Liang; Bei, Li; Zilan, Pan; Dawei, Zhang; Xiuhua, Ma

    2015-01-01

    In this article, a compressive sensing algorithm is used to improve the imaging resolution and realize ghost imaging of a phase object, based on a theoretical analysis of the lensless Fourier imaging underlying ghost imaging with phase-shifting digital holography. The algorithm uses a bucket detector to measure the total light intensity of the interference, and the four-step phase-shifting method is used to obtain the total light intensity of the differential interference light. An experimental platform was built based on software simulation, and the experimental results show that ghost imaging via compressive sensing based on phase-shifting digital holography can obtain a high-resolution phase distribution of the phase object. With the same number of samplings, the phase clarity of the phase distribution obtained via compressive sensing is higher than that obtained by ghost imaging based on phase-shifting digital holography alone. This study further extends the application range of ghost imaging to obtaining the phase distribution of a phase object. (letter)
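
    The four-step phase-shifting step named above combines four intensity measurements, taken at reference phase shifts of 0, π/2, π and 3π/2, into one complex field value; the standard combination (up to a sign convention) is:

        import numpy as np

        def four_step_field(I0, I1, I2, I3):
            """Complex object field from intensities at shifts 0, pi/2, pi, 3pi/2."""
            return (I0 - I2) + 1j * (I1 - I3)

        field = four_step_field(1.8, 1.2, 0.2, 0.8)
        print(np.abs(field), np.angle(field))  # amplitude and phase estimates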

  5. Artificial neural network does better spatiotemporal compressive sampling

    Science.gov (United States)

    Lee, Soo-Young; Hsu, Charles; Szu, Harold

    2012-06-01

    Spatiotemporal sparseness is generated naturally by the human visual system, modeled here as an artificial neural network implementing associative memory. Sparseness means nothing more and nothing less than that compressive sensing achieves information concentration. To concentrate information, one uses spatial correlation, the spatial FFT or DWT, or, best of all, an adaptive wavelet transform (cf. NUS, Shen Shawei). For higher-dimensional spatiotemporal information concentration, however, mathematics cannot be as flexible as a living human sensory system, presumably for survival reasons. The rest of the story is given in the paper.

  6. Compressive Sensing in Communication Systems

    DEFF Research Database (Denmark)

    Fyhn, Karsten

    2013-01-01

    The need for cheaper, smarter and more energy efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal...... acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what...... with using compressive sensing in communication systems. The main contribution of this thesis is two-fold: 1) a new compressive sensing hardware structure for spread spectrum signals, which is simpler than the current state-of-the-art, and 2) a range of algorithms for parameter estimation for the class...

  7. Compression of a Deep Competitive Network Based on Mutual Information for Underwater Acoustic Targets Recognition

    Directory of Open Access Journals (Sweden)

    Sheng Shen

    2018-04-01

    Full Text Available The accuracy of underwater acoustic target recognition from limited ship-radiated noise can be improved by a deep neural network trained with a large number of unlabeled samples. However, redundant features learned by a deep neural network have negative effects on recognition accuracy and efficiency. A compressed deep competitive network is proposed to learn and extract features from ship-radiated noise. The core ideas of the algorithm are: (1) competitive learning: by integrating competitive learning into the restricted Boltzmann machine learning algorithm, the hidden units share the weights within each predefined group; (2) network pruning: pruning based on mutual information is deployed to remove redundant parameters and further compress the network. Experiments based on real ship-radiated noise show that the network can increase recognition accuracy with fewer informative features. The compressed deep competitive network achieves a classification accuracy of 89.1%, which is 5.3% higher than the deep competitive network and 13.1% higher than state-of-the-art signal processing feature extraction methods.
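
    The abstract names mutual information as the pruning criterion but gives no algorithm; a minimal sketch of what such pruning could look like, assuming binarized hidden activations and dropping the lowest-MI units (all names here are illustrative, not taken from the paper):

        import numpy as np

        def mutual_information(h, y, eps=1e-12):
            """MI between a binary hidden activation h and integer labels y."""
            mi = 0.0
            for hv in (0, 1):
                for yv in np.unique(y):
                    p_joint = np.mean((h == hv) & (y == yv))
                    p_h, p_y = np.mean(h == hv), np.mean(y == yv)
                    if p_joint > eps:
                        mi += p_joint * np.log(p_joint / (p_h * p_y + eps))
            return mi

        def prune_hidden_units(H, y, W, keep_ratio=0.7):
            """Keep hidden units whose activations carry the most label MI.

            H: (n_samples, n_hidden) binary activations; W: (n_visible, n_hidden).
            """
            scores = np.array([mutual_information(H[:, j], y)
                               for j in range(H.shape[1])])
            keep = np.argsort(scores)[-int(keep_ratio * H.shape[1]):]
            return W[:, keep], keep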

  8. [Compression treatment for burned skin].

    Science.gov (United States)

    Jaafar, Fadhel; Lassoued, Mohamed A; Sahnoun, Mahdi; Sfar, Souad; Cheikhrouhou, Morched

    2012-02-01

    The regularity of a compressive knit is defined as its ability to perform its function on burned skin. This property is essential to avoid rejection of the material or toxicity problems. Objective: to make knits biocompatible with severely burned human skin. We fabricated knits of elastic material. To ensure good adhesion to the skin, the elastic material was knitted as a tight loop. The length of yarn absorbed by each stitch and the raw material were changed with each sample. The physical properties of each sample were measured and compared. Surface modifications were made to these samples by impregnation with microcapsules based on jojoba oil. The knits are compressive, elastic in all directions, light, thin, comfortable, and washable for hygiene reasons. In addition, they recover their compressive properties after washing. The jojoba oil microcapsules hydrate the burned human skin; this moisturizing contributes to the firmness of the wound and gives flexibility to the skin. Compressive knits are biocompatible with burned skin. The mixture of natural and synthetic fibers is irreplaceable in terms of comfort and regularity.

  9. Image Compression Based On Wavelet, Polynomial and Quadtree

    Directory of Open Access Journals (Sweden)

    Bushra A. SULTAN

    2011-01-01

    Full Text Available In this paper a simple and fast image compression scheme is proposed. It is based on using the wavelet transform to decompose the image signal and then using polynomial approximation to prune the smooth component of the image band. The architecture of the proposed coding scheme is highly synthetic: the error produced by the polynomial approximation, together with the detail sub-band data, is coded using both quantization and quadtree spatial coding. In the last stage of the encoding process, shift encoding is used as a simple and efficient entropy encoder to compress the outcome of the previous stage. The test results indicate that the proposed system can produce a promising compression performance while preserving the image quality level.

  10. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  11. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    Science.gov (United States)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  12. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    Science.gov (United States)

    Guo, Wei; Tse, Peter W.

    2013-01-01

    Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. The bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before being transmitted to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing a vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, in particular the appropriate level of the white noise added in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performance under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component leaves a much smaller proportion of data samples to be retained for transmission and later reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio while retaining the bearing defect signatures.
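
    As a rough illustration of the IMF-selection idea (not the authors' optimized EEMD), the sketch below decomposes a toy vibration signal and keeps only the IMF with the highest kurtosis, a common proxy for impulsive bearing-fault content; it assumes the third-party PyEMD package (pip install EMD-signal):

        import numpy as np
        from PyEMD import EEMD  # assumed third-party package

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 2048)
        # Toy signal: shaft component + periodic fault-like impulses + noise
        signal = np.sin(2 * np.pi * 30 * t)
        signal[::128] += 3.0
        signal += 0.2 * rng.standard_normal(t.size)

        imfs = EEMD(trials=50, noise_width=0.05).eemd(signal, t)

        def kurtosis(x):
            x = x - x.mean()
            return np.mean(x**4) / (np.mean(x**2) ** 2 + 1e-12)

        # Only this component would be compressed and transmitted:
        fault_imf = max(imfs, key=kurtosis)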

  13. Vibration-based monitoring and diagnostics using compressive sensing

    Science.gov (United States)

    Ganesan, Vaahini; Das, Tuhin; Rahnavard, Nazanin; Kauffman, Jeffrey L.

    2017-04-01

    Vibration data from mechanical systems carry important information that is useful for characterization and diagnosis. Standard approaches rely on continually streaming data at a fixed sampling frequency. For applications involving continuous monitoring, such as Structural Health Monitoring (SHM), such approaches result in high volume data and rely on sensors being powered for prolonged durations. Furthermore, for spatial resolution, structures are instrumented with a large array of sensors. This paper shows that both volume of data and number of sensors can be reduced significantly by applying Compressive Sensing (CS) in vibration monitoring applications. The reduction is achieved by using random sampling and capitalizing on the sparsity of vibration signals in the frequency domain. Preliminary experimental results validating CS-based frequency recovery are also provided. By exploiting the sparsity of mode shapes, CS can also enable efficient spatial reconstruction using fewer spatially distributed sensors. CS can thereby reduce the cost and power requirement of sensing as well as streamline data storage and processing in monitoring applications. In well-instrumented structures, CS can enable continued monitoring in case of sensor or computational failures.

  14. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    Science.gov (United States)

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for lossless compression of medical images, an efficient algorithm based on irregular segmentation and region-based prediction is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
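
    As a sketch of the region-based least-squares prediction idea (a generic illustration, not the paper's exact predictor), each pixel of a region can be predicted from its causal neighbors with weights fitted by LS over that region, and only the residuals are entropy coded:

        import numpy as np

        def ls_predict_region(img, mask):
            """Fit x[i,j] ~ w . (W, N, NW, NE) over a region; return residuals."""
            img = img.astype(np.float64)
            ys, xs = np.nonzero(mask)
            # keep pixels that have a full causal neighborhood
            ok = (ys > 0) & (xs > 0) & (xs < img.shape[1] - 1)
            ys, xs = ys[ok], xs[ok]
            A = np.stack([img[ys, xs - 1],       # west
                          img[ys - 1, xs],       # north
                          img[ys - 1, xs - 1],   # north-west
                          img[ys - 1, xs + 1]],  # north-east
                         axis=1)
            b = img[ys, xs]
            w, *_ = np.linalg.lstsq(A, b, rcond=None)
            return w, b - A @ w   # weights and prediction residuals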

  15. Simulation and experimental studies of three-dimensional (3D) image reconstruction from insufficient sampling data based on compressed-sensing theory for potential applications to dental cone-beam CT

    International Nuclear Information System (INIS)

    Je, U.K.; Lee, M.S.; Cho, H.S.; Hong, D.K.; Park, Y.O.; Park, C.K.; Cho, H.M.; Choi, S.I.; Woo, T.H.

    2015-01-01

    In practical applications of three-dimensional (3D) tomographic imaging, there are often challenges for image reconstruction from insufficient sampling data. In computed tomography (CT), for example, image reconstruction from sparse views and/or limited-angle (<360°) views would enable fast scanning with reduced imaging doses to the patient. In this study, we investigated and implemented a reconstruction algorithm based on compressed-sensing (CS) theory, which exploits the sparseness of the gradient image with substantially high accuracy, for potential applications to low-dose, highly accurate dental cone-beam CT (CBCT). We performed systematic simulation studies to investigate the image characteristics and also performed experimental studies by applying the algorithm to a commercially available dental CBCT system to demonstrate its effectiveness for image reconstruction in insufficient sampling problems. We successfully reconstructed CBCT images of superior accuracy from insufficient sampling data and evaluated the reconstruction quality quantitatively. Both the simulation and experimental demonstrations of CS-based reconstruction from insufficient data indicate that the CS-based algorithm can be applied directly to current dental CBCT systems for reducing the imaging doses and further improving the image quality.

  16. Compressive laser ranging.

    Science.gov (United States)

    Babbitt, Wm Randall; Barber, Zeb W; Renner, Christoffer

    2011-12-15

    Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.

  17. A new hyperspectral image compression paradigm based on fusion

    Science.gov (United States)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed in the satellite which carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained corroborate the benefits of the proposed methodology.
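
    A toy sketch of the two on-board degradation steps (shapes and reduction factors are illustrative; the ground-side fusion algorithm is the complex part and is omitted):

        import numpy as np

        def degrade(hsi, spatial_factor=4, n_ms_bands=8):
            """hsi: (rows, cols, bands) cube; dimensions assumed divisible."""
            r, c, b = hsi.shape
            # 1) low-resolution hyperspectral image: spatial block averaging
            lr_hsi = hsi.reshape(r // spatial_factor, spatial_factor,
                                 c // spatial_factor, spatial_factor,
                                 b).mean(axis=(1, 3))
            # 2) high-resolution multispectral image: average adjacent bands
            hr_msi = hsi.reshape(r, c, n_ms_bands,
                                 b // n_ms_bands).mean(axis=3)
            return lr_hsi, hr_msi  # both are downlinked and fused on the ground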

  18. Phase unwinding for dictionary compression with multiple channel transmission in magnetic resonance fingerprinting.

    Science.gov (United States)

    Lattanzi, Riccardo; Zhang, Bei; Knoll, Florian; Assländer, Jakob; Cloos, Martijn A

    2018-06-01

    Magnetic Resonance Fingerprinting reconstructions can become computationally intractable with multiple transmit channels if the B1+ phases are included in the dictionary. We describe a general method that makes it possible to omit the transmit phases. We show that this enables straightforward implementation of dictionary compression to further reduce the problem dimensionality. We merged the raw data of each RF source into a single k-space dataset, extracted the transceiver phases from the corresponding reconstructed images, and used them to unwind the phase in each time frame. All phase-unwound time frames were combined in a single set before performing SVD-based compression. We conducted synthetic, phantom and in-vivo experiments to demonstrate the feasibility of SVD-based compression in the case of two-channel transmission. Unwinding the phases before SVD-based compression yielded artifact-free parameter maps. For fully sampled acquisitions, parameters were accurate with as few as 6 compressed time frames. SVD-based compression performed well in-vivo with highly undersampled acquisitions using 16 compressed time frames, which reduced reconstruction time from 750 to 25 minutes. Our method reduces the dimensions of the dictionary atoms and makes it possible to implement any fingerprint compression strategy in the case of multiple transmit channels. Copyright © 2018 Elsevier Inc. All rights reserved.
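
    A minimal sketch of SVD-based compression of the temporal dimension (generic fingerprinting-style compression under the setup above, not the authors' full pipeline; sizes are illustrative):

        import numpy as np

        def compress_dictionary(D, rank=6):
            """D: (n_atoms, n_timeframes) complex dictionary.

            Returns compressed atoms and the temporal basis V_r, which is also
            applied to the phase-unwound measured time frames.
            """
            U, s, Vh = np.linalg.svd(D, full_matrices=False)
            Vr = Vh[:rank].conj().T   # (n_timeframes, rank)
            return D @ Vr, Vr

        def best_match(signal, D_c, Vr):
            """Project a measured time series and find the best-matching atom."""
            s_c = signal @ Vr
            corr = np.abs(D_c.conj() @ s_c) / (np.linalg.norm(D_c, axis=1)
                                               * np.linalg.norm(s_c) + 1e-12)
            return int(np.argmax(corr))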

  19. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from those of traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4–2 dB compared with the current state of the art, while maintaining a low computational complexity.

  20. Wireless Sensor Networks Data Processing Summary Based on Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Caiyun Huang

    2014-07-01

    Full Text Available As a newly proposed theory, compressive sensing (CS) is commonly used in signal processing. This paper investigates the applications of CS in wireless sensor networks (WSNs). First, the development and research status of compressive sensing technology and wireless sensor networks are described; then a detailed investigation of CS-based WSN research is conducted, covering data fusion, signal acquisition, signal routing and transmission, and signal reconstruction. At the end of the paper, we conclude our survey and point out possible future research directions.

  1. Compressed Sensing-Based Direct Conversion Receiver

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas; Larsen, Torben

    2012-01-01

    Due to the continuously increasing computational power of modern data receivers it is possible to move more and more processing from the analog to the digital domain. This paper presents a compressed sensing approach to relaxing the analog filtering requirements prior to the ADCs in a direct......-converted radio signals. As shown in an experiment presented in the article, when the proposed method is used, it is possible to relax the requirements for the quadrature down-converter filters. A random sampling device and an additional digital signal processing module is the price to pay for these relaxed...

  2. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for the distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.

  3. RNACompress: Grammar-based compression and informational complexity measurement of RNA secondary structure

    Directory of Open Access Journals (Sweden)

    Chen Chun

    2008-03-01

    Full Text Available Abstract Background With the rapid emergence of RNA databases and newly identified non-coding RNAs, an efficient compression algorithm for RNA sequence and structural information is needed for the storage and analysis of such data. Although several algorithms for compressing DNA sequences have been proposed, none of them are suitable for the compression of RNA sequences with their secondary structures simultaneously. This kind of compression not only facilitates the maintenance of RNA data, but also supplies a novel way to measure the informational complexity of RNA structural data, raising the possibility of studying the relationship between the functional activities of RNA structures and their complexities, as well as various structural properties of RNA based on compression. Results RNACompress employs an efficient grammar-based model to compress RNA sequences and their secondary structures. The main goals of this algorithm are twofold: (1) present a robust and effective way for RNA structural data compression; (2) design a suitable model to represent RNA secondary structure as well as derive the informational complexity of the structural data based on compression. Our extensive tests have shown that RNACompress achieves a universally better compression ratio compared with other sequence-specific or common text-specific compression algorithms, such as Gencompress, winrar and gzip. Moreover, a test of the activities of distinct GTP-binding RNAs (aptamers) compared with their structural complexity shows that our defined informational complexity can be used to describe how complexity varies with activity. These results lead to an objective means of comparing the functional properties of heteropolymers from the information perspective. Conclusion A universal algorithm for the compression of RNA secondary structure as well as the evaluation of its informational complexity is discussed in this paper. We have developed RNACompress, as a useful tool

  4. Memory hierarchy using row-based compression

    Science.gov (United States)

    Loh, Gabriel H.; O'Connor, James M.

    2016-10-25

    A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.

  5. The influence of kind of coating additive on the compressive strength of RCA-based concrete prepared by triple-mixing method

    Science.gov (United States)

    Urban, K.; Sicakova, A.

    2017-10-01

    The paper deals with the use of alternative powder additives (fly ash and a fine fraction of recycled concrete) to improve recycled concrete aggregate, directly in the concrete mixing process. A specific mixing process (the triple mixing method) is applied, as it is favourable for this goal. Results for compressive strength after 2 and 28 days of hardening are given. In general, using powder additives for coating the coarse recycled concrete aggregate in the first stage of triple mixing resulted in decreased compressive strength compared with cement. There is no significant difference between samples based on recycled concrete aggregate and those based on natural aggregate as long as cement is used for coating. When using either fly ash or recycled concrete powder, the kind of aggregate causes more significant differences in compressive strength, with the values for samples based on recycled concrete aggregate being worse.

  6. Highly compressible and all-solid-state supercapacitors based on nanostructured composite sponge.

    Science.gov (United States)

    Niu, Zhiqiang; Zhou, Weiya; Chen, Xiaodong; Chen, Jun; Xie, Sishen

    2015-10-21

    Based on polyaniline/single-walled carbon nanotube/sponge electrodes, highly compressible all-solid-state supercapacitors are prepared in an integrated configuration using a poly(vinyl alcohol) (PVA)/H2SO4 gel as the electrolyte. The unique configuration enables the resultant supercapacitors to be compressed arbitrarily as an integrated unit at up to 60% compressive strain. Furthermore, the performance of the resultant supercapacitors is nearly unchanged even under 60% compressive strain. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. On music genre classification via compressive sampling

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    Recent work \\cite{Chang2010} combines low-level acoustic features and random projection (referred to as ``compressed sensing'' in \\cite{Chang2010}) to create a music genre classification system showing an accuracy among the highest reported for a benchmark dataset. This not only contradicts previ...

  8. Cloud solution for histopathological image analysis using region of interest based compression.

    Science.gov (United States)

    Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana

    2017-07-01

    Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole-slide image contains many multi-resolution images stored in a pyramidal structure, with the highest-resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest-resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression on this region and lossy compression on the empty regions is proposed in this paper. The resulting compression ratio, along with lossless compression on the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the cloud.

  9. Dynamic restoration mechanism and physically based constitutive model of 2050 Al–Li alloy during hot compression

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Ruihua; Liu, Qing [School of Materials Science and Engineering, Central South University, Changsha 410083 (China); Li, Jinfeng, E-mail: lijinfeng@csu.edu.cn [School of Materials Science and Engineering, Central South University, Changsha 410083 (China); Xiang, Sheng [School of Materials Science and Engineering, Central South University, Changsha 410083 (China); Chen, Yonglai; Zhang, Xuhu [Aerospace Research Institute of Materials and Processing Technology, Beijing 100076 (China)

    2015-11-25

    The dynamic restoration mechanism of 2050 Al–Li alloy and its constitutive model were investigated by means of hot compression simulation at deformation temperatures ranging from 340 to 500 °C and strain rates of 0.001–10 s⁻¹. The microstructures of the compressed samples were observed using optical microscopy and transmission electron microscopy. On the basis of dislocation density theory and Avrami kinetics, a physically based constitutive model was established. The results show that dynamic recovery (DRV) and dynamic recrystallization (DRX) are co-responsible for the dynamic restoration during the hot compression process under all compression conditions. The dynamic precipitation (DPN) of T1 and σ phases was observed after deformation at 340 °C. This is the first experimental evidence for the DPN of the σ phase in Al–Cu–Li alloys. Particle-stimulated nucleation of DRX (PSN-DRX) due to large Al–Cu–Mn particles was also observed. The error analysis suggests that the established constitutive model can adequately describe the flow stress dependence on strain rate, temperature and strain during the hot deformation process. - Highlights: • The experimental evidence for the DPN of the σ phase in Al–Cu–Li alloys was found. • PSN-DRX due to large Al–Cu–Mn particles was observed. • A novel method was proposed to calculate the stress multiplier α.
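
    For context, the stress multiplier α mentioned in the highlights typically enters through the standard hyperbolic-sine (Sellars–Tegart) constitutive law used in hot-deformation modeling; the form below is general background, not necessarily the paper's exact model:

        Z = \dot{\varepsilon} \exp\!\left(\frac{Q}{RT}\right)
          = A \left[\sinh(\alpha\sigma)\right]^{n}

    where Z is the Zener–Hollomon parameter, \dot{\varepsilon} the strain rate, Q the deformation activation energy, R the gas constant, T the absolute temperature, σ the flow stress, and A, α and n material constants.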

  10. EP-based wavelet coefficient quantization for linear distortion ECG data compression.

    Science.gov (United States)

    Hung, King-Chu; Wu, Tsung-Ching; Lee, Hsieh-Wei; Liu, Tung-Kuan

    2014-07-01

    Reconstruction quality maintenance is of the essence for ECG data compression because of its use in diagnosis. Quantization schemes with non-linear distortion characteristics usually result in time-consuming quality control that blocks real-time application. In this paper, a new wavelet coefficient quantization scheme based on an evolution program (EP) is proposed for wavelet-based ECG data compression. The EP search can create a stationary relationship among the quantization scales of the multi-resolution levels. The stationary property implies that the multi-level quantization scales can be controlled with a single variable. This hypothesis leads to a simple design for linear distortion control with 3-D curve fitting technology. In addition, a competitive strategy is applied to alleviate the data dependency effect. Using the ECG signals stored in the MIT and PTB databases, many experiments were undertaken to evaluate compression performance, quality-control efficiency, and the influence of data dependency. The experimental results show that the new EP-based quantization scheme can obtain high compression performance and maintain linear distortion behavior efficiently. This characteristic guarantees fast quality control even when the prediction model mismatches the practical distortion curve. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  11. Fast Compressive Tracking.

    Science.gov (United States)

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn at the outset. Second, online tracking algorithms often encounter the drift problem: as a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real-time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.

  12. Real time network traffic monitoring for wireless local area networks based on compressed sensing

    Science.gov (United States)

    Balouchestani, Mohammadreza

    2017-05-01

    A wireless local area network (WLAN) is an important type of wireless network, connecting different wireless nodes in a local area. WLANs suffer from important problems such as network load balancing, high energy consumption, and heavy sampling load. This paper presents a new network traffic approach based on Compressed Sensing (CS) for improving the quality of WLANs. The proposed architecture reduces the Data Delay Probability (DDP) to 15%, which is a good record for WLANs. It also increases Data Throughput (DT) by 22% and the Signal-to-Noise (S/N) ratio by 17%, providing a good basis for establishing high-quality local area networks. The architecture enables continuous data acquisition and compression of WLAN signals, and is suitable for a variety of other wireless networking applications. At the transmitter side of each wireless node, an analog-CS framework is applied at the sensing step, before the analog-to-digital converter, in order to generate a compressed version of the input signal. At the receiver side, a reconstruction algorithm is applied in order to reconstruct the original signals from the compressed signals with high probability and sufficient accuracy. The proposed algorithm outperforms existing algorithms by achieving a good level of Quality of Service (QoS), reducing the Bit Error Rate (BER) by 15% at each wireless node.

  13. Correlation between k-space sampling pattern and MTF in compressed sensing MRSI.

    Science.gov (United States)

    Heikal, A A; Wachowicz, K; Fallone, B G

    2016-10-01

    To investigate the relationship between the k-space sampling patterns used for compressed sensing MR spectroscopic imaging (CS-MRSI) and the modulation transfer function (MTF) of the metabolite maps. This relationship may allow the desired frequency content of the metabolite maps to be quantitatively tailored when designing an undersampling pattern. Simulations of a phantom were used to calculate the MTF of Nyquist-sampled (NS) 32 × 32 MRSI and four-times undersampled CS-MRSI reconstructions. The dependence of the CS-MTF on the k-space sampling pattern was evaluated for three sets of k-space sampling patterns generated using different probability distribution functions (PDFs). CS-MTFs were also evaluated for three more sets of patterns generated using a modified algorithm in which the sampling ratios are constrained to adhere to the PDFs. Strong visual correlation as well as high R² was found between the MTF of CS-MRSI and the product of the frequency-dependent sampling ratio and the NS 32 × 32 MTF. Also, PDF-constrained sampling patterns led to higher reproducibility of the CS-MTF and stronger correlations to the above-mentioned product. The relationship established in this work provides the user with a theoretical solution for the MTF of CS-MRSI that is both predictable and customizable to the user's needs.

  14. Optical scanning holography based on compressive sensing using a digital micro-mirror device

    Science.gov (United States)

    A-qian, Sun; Ding-fu, Zhou; Sheng, Yuan; You-jun, Hu; Peng, Zhang; Jian-ming, Yue; xin, Zhou

    2017-02-01

    Optical scanning holography (OSH) is a distinct digital holography technique which uses a single two-dimensional (2D) scanning process to record the hologram of a three-dimensional (3D) object. Usually, these 2D scanning processes take the form of mechanical scanning, and the quality of the recorded hologram may be affected by the limited accuracy of mechanical scanning and the unavoidable vibration of the stepper motor's start-stop. In this paper, we propose a new framework which replaces the 2D mechanical scanning mirrors with a Digital Micro-mirror Device (DMD) to modulate the scanning light field; we call it OSH based on Compressive Sensing (CS) using a digital micro-mirror device (CS-OSH). CS-OSH can reconstruct the hologram of an object through the use of compressive sensing theory and then restore the image of the object itself. Numerical simulation results confirm that this new type of OSH can produce a reconstructed image with favorable visual quality even at a low sampling rate.

  15. Direction-of-Arrival Estimation for Coprime Array Using Compressive Sensing Based Array Interpolation

    Directory of Open Access Journals (Sweden)

    Aihua Liu

    2017-01-01

    Full Text Available A method of direction-of-arrival (DOA) estimation using array interpolation is proposed in this paper to increase the number of resolvable sources and improve the DOA estimation performance for a coprime array configuration with holes in its virtual array. The virtual symmetric nonuniform linear array (VSNLA) signal model of the coprime array is introduced; when the conventional MUSIC with spatial smoothing algorithm (SS-MUSIC) is applied only to the continuous lags in the VSNLA, the degrees of freedom (DoFs) for DOA estimation are clearly not fully exploited. To effectively utilize the extent of DoFs offered by the coarray configuration, a compressive sensing based array interpolation algorithm is proposed. The compressive sensing technique is used to obtain a coarse initial DOA estimate, and a modified iterative initial-DOA-estimation-based interpolation algorithm (IMCA-AI) is then utilized to obtain the final DOA estimate, which maps the sample covariance matrix of the VSNLA to the covariance matrix of a filled virtual symmetric uniform linear array (VSULA) with the same aperture size. The proposed method can efficiently improve the DOA estimation performance. Numerical simulations are provided to demonstrate the effectiveness of the proposed method.

  16. ROI-based DICOM image compression for telemedicine

    Indian Academy of Sciences (India)

    ground and reconstruct the image portions losslessly. The compressed image can ... If the image is compressed by 8:1 without any perceptual distortion, the ... Figure 2. Cross-sectional view of medical image (statistical representation). ... The Integer Wavelet Transform (IWT) is used to allow lossless processing.

  17. Respiratory Motion Correction for Compressively Sampled Free Breathing Cardiac MRI Using Smooth l1-Norm Approximation

    Directory of Open Access Journals (Sweden)

    Muhammad Bilal

    2018-01-01

    Full Text Available The transform-domain sparsity of Magnetic Resonance Imaging (MRI) data has recently been used to reduce acquisition time in conjunction with compressed sensing (CS) theory. Respiratory motion during an MR scan results in strong blurring and ghosting artifacts in the recovered MR images. To improve the quality of the recovered images, the motion needs to be estimated and corrected. In this article, a two-step approach is proposed for the recovery of cardiac MR images in the presence of free-breathing motion. In the first step, compressively sampled MR images are recovered by solving an optimization problem using a gradient descent algorithm; the l1-norm based regularizer in the optimization problem is approximated by a hyperbolic tangent function. In the second step, a block matching algorithm known as Adaptive Rood Pattern Search (ARPS) is exploited to estimate and correct the respiratory motion among the recovered images. The framework is tested on free-breathing simulated and in vivo 2D cardiac cine MRI data. Simulation results show improved structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and mean square error (MSE) at different acceleration factors for the proposed method. Experimental results also provide a comparison between k-t FOCUSS with MEMC and the proposed method.
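
    The key trick above is replacing the non-differentiable l1 norm with a smooth surrogate so that plain gradient descent applies. A minimal sketch of that idea (generic, with an illustrative smoothing parameter beta, not the paper's exact formulation):

        import numpy as np

        def smooth_l1(x, beta=100.0):
            """Smooth surrogate of ||x||_1: |x_i| ~ x_i * tanh(beta * x_i)."""
            return np.sum(x * np.tanh(beta * x))

        def grad_smooth_l1(x, beta=100.0):
            """Gradient of the surrogate; tends to sign(x) as beta grows."""
            t = np.tanh(beta * x)
            return t + beta * x * (1.0 - t**2)

        def recover(A, y, lam=0.05, beta=100.0, step=0.01, iters=500):
            """Gradient descent on 0.5*||A x - y||^2 + lam * smooth_l1(x)."""
            x = A.T @ y
            for _ in range(iters):
                x -= step * (A.T @ (A @ x - y) + lam * grad_smooth_l1(x, beta))
            return x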

  18. Effect of compression stockings on cutaneous microcirculation: Evaluation based on measurements of the skin thermal conductivity.

    Science.gov (United States)

    Grenier, E; Gehin, C; McAdams, E; Lun, B; Gobin, J-P; Uhl, J-F

    2016-03-01

    To study the microcirculatory effects of elastic compression stockings. In phlebology, laser Doppler techniques (flux or imaging) are widely used to investigate cutaneous microcirculation; they explore microcirculation by detecting blood flow in skin capillaries, and flux and imaging instruments evaluate, non-invasively and in real time, the perfusion of cutaneous micro vessels. Such tools, well known to the vascular community, are not really suitable for our protocol, which requires evaluation through the fabric of the elastic compression stockings. Therefore, we used another instrument, called the Hematron (developed by INSA-Lyon, Biomedical Sensor Group, Nanotechnologies Institute of Lyon), to investigate the relationship between skin microcirculatory activities and the external compression provided by elastic compression stockings. The Hematron measurement principle is based on the monitoring of the skin's thermal conductivity. This clinical study examined a group of 30 female subjects, aged 42 ± 2 years, who suffer from minor symptoms of chronic venous disease, classified as C0s and C1s (CEAP). The resulting figures show, subsequent to the pressure exerted by elastic compression stockings, an improvement of microcirculatory activities in 83% of the subjects, while a decreased effect was detected in the remaining 17%. Among the total population, the global average increase of the skin's microcirculatory activities is evaluated at 7.63% ± 1.80%. These results indicate that the pressure exerted by elastic compression stockings has a direct influence on the skin's microcirculation within this female sample group with minor chronic venous insufficiency signs. Further investigations are required for a deeper understanding of the effects of elastic compression stockings on microcirculatory activity in venous disease at other stages of pathology. © The Author(s) 2014.

  19. Learning-based compressed sensing for infrared image super resolution

    Science.gov (United States)

    Zhao, Yao; Sui, Xiubao; Chen, Qian; Wu, Shaochi

    2016-05-01

    This paper presents an infrared image super-resolution method based on compressed sensing (CS). First, the reconstruction model under the CS framework is established and a Toeplitz matrix is selected as the sensing matrix. Compared with traditional learning-based methods, the proposed method uses a set of sub-dictionaries instead of two coupled dictionaries to recover high resolution (HR) images, and the Toeplitz sensing matrix makes the proposed method time-efficient. Second, all training samples are divided into several feature spaces by using the proposed adaptive k-means classification method, which is more accurate than the standard k-means method. On the basis of this approach, a complex nonlinear mapping from the HR space to the low resolution (LR) space can be converted into several compact linear mappings. Finally, the relationships between HR and LR image patches can be obtained from the multiple sub-dictionaries, and HR infrared images are reconstructed from the input LR images and the multiple sub-dictionaries. The experimental results show that the proposed method is quantitatively and qualitatively more effective than other state-of-the-art methods.
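
    A hedged sketch of the general sub-dictionary idea (standard k-means and ridge regression stand in for the paper's adaptive k-means and learned mappings; all data here are synthetic): cluster LR features into feature spaces, then fit one compact linear mapping per cluster:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(1)

        # Synthetic stand-ins for LR feature vectors and corresponding HR patches.
        n_samples, lr_dim, hr_dim, n_clusters = 5000, 16, 64, 8
        X_lr = rng.standard_normal((n_samples, lr_dim))
        W_true = rng.standard_normal((n_clusters, lr_dim, hr_dim))
        labels_true = rng.integers(0, n_clusters, n_samples)
        Y_hr = np.einsum('ij,ijk->ik', X_lr, W_true[labels_true])

        # Step 1: partition the LR feature space (plain k-means here).
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_lr)

        # Step 2: one compact linear mapping per cluster instead of a single
        # global nonlinear mapping.
        maps = [Ridge(alpha=1e-3).fit(X_lr[km.labels_ == c], Y_hr[km.labels_ == c])
                for c in range(n_clusters)]

        # Reconstruction: route each LR patch to its cluster's mapping.
        test = rng.standard_normal((5, lr_dim))
        pred = np.stack([maps[c].predict(t[None])[0]
                         for t, c in zip(test, km.predict(test))])
        print(pred.shape)  # (5, 64)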

  20. Subsampling-based compression and flow visualization

    Energy Technology Data Exchange (ETDEWEB)

    Agranovsky, Alexy; Camp, David; Joy, I; Childs, Hank

    2016-01-19

    As computational capabilities increasingly outpace disk speeds on leading supercomputers, scientists will, in turn, be increasingly unable to save their simulation data at its native resolution. One solution to this problem is to compress these data sets as they are generated and visualize the compressed results afterwards. We explore this approach, specifically subsampling velocity data and the resulting errors for particle advection-based flow visualization. We compare three techniques: random selection of subsamples, selection at regular locations corresponding to multi-resolution reduction, and a novel technique we introduce for informed selection of subsamples. Furthermore, we explore an adaptive system which exchanges the subsampling budget over parallel tasks, to ensure that subsampling occurs at the highest rate in the areas that need it most. We perform supercomputing runs to measure the effectiveness of the selection and adaptation techniques. Overall, we find that adaptation is very effective and, among selection techniques, our informed selection provides the most accurate results, followed by the multi-resolution selection, with the worst accuracy coming from random subsamples.

  1. A novel ECG data compression method based on adaptive Fourier decomposition

    Science.gov (United States)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence and hence reconstruct ECG signals with high fidelity. Unlike most high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of the MIT-BIH arrhythmia database, the proposed method achieves a compression ratio (CR) of 35.53 and a percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition steps and a robust PRD-CR relationship. The results demonstrate that the proposed method performs well compared with state-of-the-art ECG compressors.
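
    For reference, the two reported figures of merit can be computed as below; this is a generic sketch of one common CR/PRD definition, not code from the paper:

        import numpy as np

        def compression_ratio(original_bits: int, compressed_bits: int) -> float:
            # CR = original size / compressed size
            return original_bits / compressed_bits

        def prd(x: np.ndarray, x_rec: np.ndarray) -> float:
            # Percentage root-mean-square difference between the original and
            # reconstructed signal (one common, non-normalized definition).
            return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

        x = np.sin(np.linspace(0, 8 * np.pi, 1000))
        x_rec = x + 0.01 * np.random.default_rng(0).standard_normal(x.size)
        print(prd(x, x_rec))  # ~1.4 for this noise level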

  2. Efficient Imaging and Real-Time Display of Scanning Ion Conductance Microscopy Based on Block Compressive Sensing

    Science.gov (United States)

    Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing

    2014-07-01

    Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), and it is widely used for imaging soft samples owing to many distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve scanning speed tremendously by breaking through the Shannon sampling theorem, but it still requires too much time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it also uniquely enables real-time image display during SICM imaging. In this article, a new method of dividing blocks and a new matrix arithmetic operation were proposed to build the block compressive sensing model, and several experiments were carried out to verify the superiority of block compressive sensing for reducing imaging time and enabling real-time display in SICM imaging.
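
    A generic block-CS sketch (block size, measurement count, and sparsity are assumed; the paper's block-division scheme and matrix arithmetic are not reproduced): every block is measured with the same small matrix and reconstructed independently, which is what allows block-by-block display:

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(2)

        B, m, k = 8, 24, 4                    # block size, measurements, sparsity
        Phi = rng.standard_normal((m, B * B)) / np.sqrt(m)

        def measure(block):
            # Same sensing matrix applied to every vectorized B x B block.
            return Phi @ block.ravel()

        def reconstruct(y):
            # Per-block sparse recovery; each block is a small, fast problem.
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k,
                                            fit_intercept=False).fit(Phi, y)
            return omp.coef_.reshape(B, B)

        block = np.zeros((B, B))
        idx = rng.choice(B * B, k, replace=False)
        block.ravel()[idx] = rng.standard_normal(k)   # a k-sparse test block

        rec = reconstruct(measure(block))
        print("block error:", np.linalg.norm(rec - block))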

  3. An Image Compression Scheme in Wireless Multimedia Sensor Networks Based on NMF

    Directory of Open Access Journals (Sweden)

    Shikang Kong

    2017-02-01

    Full Text Available With the goal of addressing the issue of image compression in wireless multimedia sensor networks (WMSNs) with high recovered quality and low energy consumption, an image compression and transmission scheme based on non-negative matrix factorization (NMF) is proposed in this paper. First, the NMF algorithm theory is studied. Then, a collaborative mechanism of image capture, blocking, compression and transmission is completed. Camera nodes capture images and send them to ordinary nodes, which compress them with an NMF algorithm. The cluster head node collects the compressed images from the ordinary nodes and transmits them to the station, which performs the image restoration. Simulation results show that, compared with the JPEG2000 and singular value decomposition (SVD) compression schemes, the proposed scheme yields higher quality recovered images and lower total node energy consumption. It thus reduces the energy burden and prolongs the life of the whole network system, which is of great significance for practical applications of WMSNs.
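
    The compression step can be sketched with scikit-learn's NMF (rank and sizes are assumed; a random matrix stands in for an image block): transmitting the factors W and H instead of the image V is what saves transmission energy when the rank is small:

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(3)

        # Factor a non-negative image matrix V (h x w) into W (h x r) and
        # H (r x w); sending W and H reduces data volume when r*(h+w) << h*w.
        h, w, r = 64, 64, 8
        V = rng.random((h, w))                 # stand-in for an image block

        model = NMF(n_components=r, init='nndsvda', max_iter=500, random_state=0)
        W = model.fit_transform(V)
        H = model.components_

        V_rec = W @ H                           # restoration at the station
        ratio = (h * w) / (r * (h + w))
        print("compression ratio ~", round(ratio, 2),
              " relative error:", np.linalg.norm(V - V_rec) / np.linalg.norm(V))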

  4. Compressive sensing for urban radar

    CERN Document Server

    Amin, Moeness

    2014-01-01

    With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space and toward effectively addressing logistic difficulties in data acquisition. Traditionally, these challenges have hindered high resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates. Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracking…

  5. Facial Image Compression Based on Structured Codebooks in Overcomplete Domain

    Directory of Open Access Journals (Sweden)

    Vila-Forcén JE

    2006-01-01

    Full Text Available We advocate a facial image compression technique in the scope of the distributed source coding framework. The novelty of the proposed approach is twofold: image compression is considered from the position of source coding with side information and, contrary to existing scenarios where the side information is given explicitly, the side information is created based on a deterministic approximation of the local image features. We consider an image in the overcomplete transform domain as a realization of a random source with a structured codebook of symbols where each symbol represents a particular edge shape. Due to the partial availability of the side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate a possible gain over the solutions when side information is either unavailable or available only at the decoder. Finally, the paper presents a practical image compression algorithm for facial images based on our concept that demonstrates superior performance in the very-low-bit-rate regime.

  6. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression-based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and…

  7. Optically compressed sensing by under sampling the polar Fourier plane

    International Nuclear Information System (INIS)

    Stern, A; Levi, O; Rivenson, Y

    2010-01-01

    In a previous work we presented a compressed imaging approach that uses a row of rotating sensors to indirectly capture polar strips of the Fourier transform of the image. Here we present further developments of this technique and new results. The advantages of our technique, compared to other optically compressed imaging techniques, are that its optical implementation is relatively easy, it does not require complicated calibrations, and it can be implemented in near-real time.

  8. Lossless real-time data compression based on LZO for steady-state Tokamak DAS

    International Nuclear Information System (INIS)

    Pujara, H.D.; Sharma, Manika

    2008-01-01

    The evolution of the data acquisition system (DAS) for steady-state operation of a Tokamak has been technology driven. A steady-state Tokamak demands a data acquisition system capable of acquiring data losslessly from diagnostics. The need for lossless continuous acquisition has a significant effect on data storage, which takes up a greater portion of any data acquisition system. The steady-state nature of operation also demands online viewing of data, which loads the LAN significantly. So there is a strong demand for a way to control the expansion of both these portions by employing a compression technique in real time. This paper presents a data acquisition system employing a real-time data compression technique based on LZO, a data compression library suitable for compression and decompression in real time, whose algorithm favours speed over compression ratio. The system has been rigged up based on the PXI bus, and a dual buffer mode architecture is implemented for lossless acquisition. Each acquired buffer is compressed in real time and streamed to the network and to hard disk for storage. The observed performance on various data types (binary, integer, float, and different types of waveforms), as well as the compression timing overheads, is presented in the paper. Various software modules for real-time acquisition and online viewing of data on network nodes have been developed in LabWindows/CVI based on a client-server architecture.
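
    A minimal sketch of the acquire-compress-stream loop described here; zlib at its fastest level stands in for LZO (which would require the third-party python-lzo binding), and the buffer size is an assumption:

        import zlib
        import numpy as np

        BUFFER_SAMPLES = 4096   # assumed acquisition buffer size

        def acquire_buffer():
            # Stand-in for the PXI digitizer read: a slowly varying waveform.
            t = np.arange(BUFFER_SAMPLES)
            return (1000 * np.sin(2 * np.pi * t / 512)).astype(np.int16)

        for _ in range(3):                        # three acquisition cycles
            raw = acquire_buffer().tobytes()
            packed = zlib.compress(raw, level=1)  # fast, lossless
            # ...stream `packed` to the LAN / disk here...
            assert zlib.decompress(packed) == raw  # lossless round trip
            print(f"{len(raw)} -> {len(packed)} bytes")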

  9. Stress relaxation in vanadium under shock and shockless dynamic compression

    International Nuclear Information System (INIS)

    Kanel, G. I.; Razorenov, S. V.; Garkushin, G. V.; Savinykh, A. S.; Zaretsky, E. B.

    2015-01-01

    Evolutions of elastic-plastic waves have been recorded in three series of plate impact experiments with annealed vanadium samples under conditions of shockless and combined ramp and shock dynamic compression. The shaping of the incident wave profiles was realized using intermediate base plates made of different silicate glasses, through which the compression waves entered the samples. Measurements of the free surface velocity histories revealed an apparent growth of the Hugoniot elastic limit with decreasing average rate of compression. The growth is explained by “freezing” of the elastic precursor decay in the area of interaction of the incident and reflected waves. The set of obtained data shows that the current value of the Hugoniot elastic limit and the plastic strain rate are associated with the rate of the elastic precursor decay rather than with the local rate of compression. The study has revealed the contributions of dislocation multiplication in elastic waves. It has been shown that, independently of the compression history, the material arrives at the minimum point between the elastic and plastic waves with the same density of mobile dislocations.

  10. Understanding compressive deformation behavior of porous Ti using finite element analysis

    Energy Technology Data Exchange (ETDEWEB)

    Roy, Sandipan; Khutia, Niloy [Department of Aerospace Engineering and Applied Mechanics, Indian Institute of Engineering Science and Technology, Shibpur (India); Das, Debdulal [Department of Metallurgy and Materials Engineering, Indian Institute of Engineering Science and Technology, Shibpur (India); Das, Mitun, E-mail: mitun@cgcri.res.in [Bioceramics and Coating Division, CSIR-Central Glass and Ceramic Research Institute, Kolkata (India); Balla, Vamsi Krishna [Bioceramics and Coating Division, CSIR-Central Glass and Ceramic Research Institute, Kolkata (India); Bandyopadhyay, Amit [W. M. Keck Biomedical Materials Research Laboratory, School of Mechanical and Materials Engineering, Washington State University, Pullman, WA 99164 (United States); Chowdhury, Amit Roy, E-mail: arcbesu@gmail.com [Department of Aerospace Engineering and Applied Mechanics, Indian Institute of Engineering Science and Technology, Shibpur (India)

    2016-07-01

    In the present study, porous commercially pure (CP) Ti samples with different volume fractions of porosity were fabricated using a commercial additive manufacturing technique, namely laser engineered net shaping (LENS™). The mechanical behavior of solid and porous samples was evaluated at room temperature under quasi-static compressive loading. Fracture surfaces of the failed samples were analyzed to determine the failure modes. Finite Element (FE) analyses using a representative volume element (RVE) model and a micro-computed tomography (micro-CT) based model have been performed to understand the deformation behavior of the laser deposited solid and porous CP-Ti samples. In vitro cell culture on the laser processed porous CP-Ti surfaces showed normal cell proliferation with time and confirmed the non-toxic nature of these samples. - Highlights: • Porous CP-Ti samples fabricated using an additive manufacturing technique • Compressive deformation behavior of porous samples closely matches micro-CT and RVE based analyses • In vitro studies showed better cell proliferation with time on porous CP-Ti surfaces.

  11. Understanding compressive deformation behavior of porous Ti using finite element analysis

    International Nuclear Information System (INIS)

    Roy, Sandipan; Khutia, Niloy; Das, Debdulal; Das, Mitun; Balla, Vamsi Krishna; Bandyopadhyay, Amit; Chowdhury, Amit Roy

    2016-01-01

    In the present study, porous commercially pure (CP) Ti samples with different volume fractions of porosity were fabricated using a commercial additive manufacturing technique, namely laser engineered net shaping (LENS™). The mechanical behavior of solid and porous samples was evaluated at room temperature under quasi-static compressive loading. Fracture surfaces of the failed samples were analyzed to determine the failure modes. Finite Element (FE) analyses using a representative volume element (RVE) model and a micro-computed tomography (micro-CT) based model have been performed to understand the deformation behavior of the laser deposited solid and porous CP-Ti samples. In vitro cell culture on the laser processed porous CP-Ti surfaces showed normal cell proliferation with time and confirmed the non-toxic nature of these samples. - Highlights: • Porous CP-Ti samples fabricated using an additive manufacturing technique • Compressive deformation behavior of porous samples closely matches micro-CT and RVE based analyses • In vitro studies showed better cell proliferation with time on porous CP-Ti surfaces

  12. Beam steering performance of compressed Luneburg lens based on transformation optics

    Science.gov (United States)

    Gao, Ju; Wang, Cong; Zhang, Kuang; Hao, Yang; Wu, Qun

    2018-06-01

    In this paper, two types of compressed Luneburg lenses based on transformation optics are investigated and simulated using two different sources, namely, waveguides and dipoles, which represent plane and spherical wave sources, respectively. We determined that the largest beam steering angle and the related feed point are intrinsic characteristics of a certain type of compressed Luneburg lens, and that the optimized distance between the feed and lens, gain enhancement, and side-lobe suppression are related to the type of source. Based on our results, we anticipate that these lenses will prove useful in various future antenna applications.

  13. Compressive strength and microstructural analysis of fly ash/palm oil fuel ash based geopolymer mortar

    International Nuclear Information System (INIS)

    Ranjbar, Navid; Mehrali, Mehdi; Behnia, Arash; Alengaram, U. Johnson; Jumaat, Mohd Zamin

    2014-01-01

    Highlights: • Results show POFA is adaptable as a replacement in FA based geopolymer mortar. • An increase in the POFA/FA ratio delays the compressive strength development of the geopolymer. • The density of POFA based geopolymer is lower than that of FA based geopolymer mortar. - Abstract: This paper presents the effects and adaptability of palm oil fuel ash (POFA) as a replacement material in fly ash (FA) based geopolymer mortar from the aspects of microstructure and compressive strength. The geopolymers developed were synthesized with a combination of sodium hydroxide and sodium silicate as activator and POFA and FA as high silica-alumina resources. The development of compressive strength of POFA/FA based geopolymers was investigated using X-ray fluorescence (XRF), X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, and field emission scanning electron microscopy (FESEM). It was observed that the particle shapes and surface areas of POFA and FA, as well as their chemical composition, affect the density and compressive strength of the mortars. Increasing the percentage of POFA increased the silica/alumina (SiO2/Al2O3) ratio, which reduced the early compressive strength of the geopolymer and delayed the geopolymerization process

  14. The statistical evaluation of the uniaxial compressive strength of the Ruskov andesite

    Directory of Open Access Journals (Sweden)

    Krepelka František

    2002-03-01

    Full Text Available The selection of a suitable model for the statistical distribution of the uniaxial compressive strength is discussed in the paper. The uniaxial compressive strength was studied on 180 specimens of the Ruskov andesite. The rate of loading was 1 MPa.s-1. The experimental specimens had a prismatic form with a square base; the slenderness ratio of the specimens was 2:1. Three sets of specimens with different lengths of the base edge were studied, namely 50, 30 and 10 mm. The results of the measurements were three sets of 60 values of the uniaxial compressive strength. The basic statistical parameters (the sample mean, the sample standard deviation, the variational interval, the minimum and maximum values, the sample skewness coefficient and the kurtosis coefficient) were evaluated for each collection. Two types of distribution that can be connected with the real physical fundamentals of the disintegration of rocks, the normal and the two-parametric Weibull distribution, were tested. The basic characteristics of both distributions were evaluated for each set, and the accordance of each model distribution with the experimental distribution was tested using the χ2-test. Following the comparison of the test results of both model distributions, the two-parametric Weibull distribution was selected as a suitable distribution model for characterizing the uniaxial compressive strength of the Ruskov andesite, as it showed better results in the goodness-of-fit test. The normal distribution was suitable for two of the sets; one set showed a negative result of the goodness-of-fit testing. For the uniaxial compressive strength of the Ruskov andesite, a scale effect was registered: the mean value of the uniaxial compressive strength decreases with increasing specimen base edge length. This is another argument for using the Weibull distribution as a suitable statistical model of the uniaxial compressive strength.
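
    A sketch of the described procedure with synthetic data standing in for the 60 measured strengths per specimen size: fit a two-parameter Weibull and run a χ2 goodness-of-fit test (bin count and parameter values are assumptions):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)

        # Synthetic strengths stand in for one measured set of 60 values.
        strengths = stats.weibull_min.rvs(c=6.0, scale=120.0, size=60,
                                          random_state=rng)

        c, loc, scale = stats.weibull_min.fit(strengths, floc=0)  # 2-parameter fit

        # Chi-square test against the fitted model (equal-probability bins).
        n_bins = 6
        edges = stats.weibull_min.ppf(np.linspace(0, 1, n_bins + 1), c, loc, scale)
        observed, _ = np.histogram(strengths, bins=edges)
        expected = np.full(n_bins, len(strengths) / n_bins)
        chi2, p = stats.chisquare(observed, expected, ddof=2)  # 2 fitted parameters

        print(f"shape={c:.2f} scale={scale:.1f}  chi2={chi2:.2f}  p={p:.3f}")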

  15. Toward topology-based characterization of small-scale mixing in compressible turbulence

    Science.gov (United States)

    Suman, Sawan; Girimaji, Sharath

    2011-11-01

    Turbulent mixing rate at small scales of motion (molecular mixing) is governed by the steepness of the scalar-gradient field which in turn is dependent upon the prevailing velocity gradients. Thus motivated, we propose a velocity-gradient topology-based approach for characterizing small-scale mixing in compressible turbulence. We define a mixing efficiency metric that is dependent upon the topology of the solenoidal and dilatational deformation rates of a fluid element. The mixing characteristics of solenoidal and dilatational velocity fluctuations are clearly delineated. We validate this new approach by employing mixing data from direct numerical simulations (DNS) of compressible decaying turbulence with passive scalar. For each velocity-gradient topology, we compare the mixing efficiency predicted by the topology-based model with the corresponding conditional scalar variance obtained from DNS. The new mixing metric accurately distinguishes good and poor mixing topologies and indeed reasonably captures the numerical values. The results clearly demonstrate the viability of the proposed approach for characterizing and predicting mixing in compressible flows.

  16. Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery

    Science.gov (United States)

    Xie, Hua; Klimesh, Matthew A.

    2009-01-01

    This work extends the lossless data compression technique described in “Fast Lossless Compression of Multispectral-Image Data” (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26. The original technique was extended to include a near-lossless compression option, allowing substantially smaller compressed file sizes when a small amount of distortion can be tolerated. Near-lossless compression is obtained by including a quantization step prior to encoding of prediction residuals. The original technique uses lossless predictive compression and is designed for use on multispectral imagery. A lossless predictive data compression algorithm compresses a digitized signal one sample at a time as follows: First, a sample value is predicted from previously encoded samples. The difference between the actual sample value and the prediction is called the prediction residual. The prediction residual is encoded into the compressed file. The decompressor can form the same predicted sample and can decode the prediction residual from the compressed file, and so can reconstruct the original sample. A lossless predictive compression algorithm can generally be converted to a near-lossless compression algorithm by quantizing the prediction residuals prior to encoding them. In this case, since the reconstructed sample values will not be identical to the original sample values, the encoder must determine the values that will be reconstructed and use these values for predicting later sample values. The technique described here uses this method, starting with the original technique, to allow near-lossless compression. The extension to allow near-lossless compression adds the ability to achieve much more compression when small amounts of distortion are tolerable, while retaining the low complexity and good overall compression effectiveness of the original algorithm.
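
    The encoder/decoder lockstep described above can be sketched in 1-D as follows (an illustration, not the NASA implementation; a previous-sample predictor and a uniform quantizer with error bound delta are assumed):

        import numpy as np

        def near_lossless_encode(x, delta):
            # Previous-sample predictor; residuals quantized with step (2*delta+1)
            # so the reconstruction error is bounded by +/- delta.
            residuals, prev = [], 0
            for sample in x:
                r = int(sample) - prev
                q = int(np.round(r / (2 * delta + 1)))  # quantized residual -> entropy coder
                residuals.append(q)
                prev = prev + q * (2 * delta + 1)       # track the DECODER's value
            return residuals

        def near_lossless_decode(residuals, delta):
            out, prev = [], 0
            for q in residuals:
                prev = prev + q * (2 * delta + 1)
                out.append(prev)
            return np.array(out)

        x = (100 * np.sin(np.arange(50) / 5)).astype(int)
        for delta in (0, 2):                             # delta=0 -> lossless
            rec = near_lossless_decode(near_lossless_encode(x, delta), delta)
            print(f"delta={delta}  max error={np.max(np.abs(rec - x))}")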

  17. Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    Science.gov (United States)

    Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi

    Reducing power dissipation in an LDPC code decoder is a major challenge in applying LDPC codes to practical digital communication systems. In this paper, we propose a low power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) the intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated shift-register based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques enables the decoder to reduce power dissipation while maintaining decoding throughput. Simulation results show that the proposed architecture improves the power efficiency by up to 52% and 18% compared to decoders based on the overlapped schedule and the rapid convergence schedule without the proposed techniques, respectively.

  18. A design approach for systems based on magnetic pulse compression

    International Nuclear Information System (INIS)

    Praveen Kumar, D. Durga; Mitra, S.; Senthil, K.; Sharma, D. K.; Rajan, Rehim N.; Sharma, Archana; Nagesh, K. V.; Chakravarthy, D. P.

    2008-01-01

    A design approach that gives the optimum number of stages in a magnetic pulse compression circuit and the gain per stage is presented. The limitation on the maximum gain per stage is discussed. Total system volume is minimized by considering the energy storage capacitor volume and the magnetic core volume at each stage. At the end of the paper, the design of a magnetic pulse compression based linear induction accelerator of 200 kV, 5 kA, and 100 ns with a repetition rate of 100 Hz is discussed along with its experimental results.

  19. Optical identity authentication technique based on compressive ghost imaging with QR code

    Science.gov (United States)

    Wenjie, Zhan; Leihong, Zhang; Xi, Zeng; Yi, Kang

    2018-04-01

    With the rapid development of computer technology, information security has attracted more and more attention. It relates not only to the information and property security of individuals and enterprises, but also to the security and social stability of a country. Identity authentication is the first line of defense in information security. In authentication systems, response time and security are the most important factors. An optical authentication technology based on compressive ghost imaging with QR codes is proposed in this paper. Authentication can be performed with a small number of samples, so the response time of the algorithm is short. At the same time, the algorithm can resist certain noise attacks, so it offers good security.

  20. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the image reconstructed from a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images were measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.

  1. Computational simulation of breast compression based on segmented breast and fibroglandular tissues on magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Shih, Tzu-Ching [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, 40402, Taiwan (China); Chen, Jeon-Hor; Nie Ke; Lin Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying [Tu and Yuen Center for Functional Onco-Imaging and Radiological Sciences, University of California, Irvine, CA 92697 (United States); Liu Dongxu; Sun Lizhi, E-mail: shih@mail.cmu.edu.t [Department of Civil and Environmental Engineering, University of California, Irvine, CA 92697 (United States)

    2010-07-21

    This study presents a finite element-based computational model to simulate the three-dimensional deformation of a breast and its fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and craniocaudal and mediolateral oblique compression, as used in mammography, was applied. The geometry of the whole breast and of the segmented fibroglandular tissues within the breast was reconstructed using triangular meshes with the Avizo® 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the nonlinear elastic tissue deformation under compression, using the MSC.Marc® software package. The model was tested in four cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these four cases at a compression ratio of 60% was in the range of 5-7 cm, which is a typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at a compression ratio of 60% was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on magnetic resonance imaging (MRI), which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to clinical quality. Whether this model may be used to understand the technical factors that impact the variations in breast density needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities, such as MRI, mammography, whole breast ultrasound and molecular imaging, that are performed using different body positions and under…

  2. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of total cost in data generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and data volumes may go beyond the limits of storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
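
    As a much simpler illustration of why specialized DNA compressors outperform general-purpose tools (a baseline sketch, not the SeqCompress algorithm), the four-letter alphabet already packs to 2 bits per base:

        # Pack the DNA alphabet at 2 bits per base: a 4:1 ratio versus ASCII.
        CODE = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
        BASE = 'ACGT'

        def pack(seq: str) -> bytes:
            bits = 0
            for ch in seq:
                bits = (bits << 2) | CODE[ch]
            # Prepend a sentinel 1-bit so leading 'A's survive round-tripping.
            bits |= 1 << (2 * len(seq))
            return bits.to_bytes((2 * len(seq)) // 8 + 1, 'big')

        def unpack(data: bytes) -> str:
            bits = int.from_bytes(data, 'big')
            out = []
            while bits > 1:                      # stop at the sentinel bit
                out.append(BASE[bits & 3])
                bits >>= 2
            return ''.join(reversed(out))

        seq = "ACGTTGCAACGT"
        assert unpack(pack(seq)) == seq
        print(len(seq), "bases ->", len(pack(seq)), "bytes")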

  3. Photonic compressive sensing with a micro-ring-resonator-based microwave photonic filter

    DEFF Research Database (Denmark)

    Chen, Ying; Ding, Yunhong; Zhu, Zhijing

    2015-01-01

    A novel approach to realize photonic compressive sensing (CS) with a multi-tap microwave photonic filter is proposed and demonstrated. The system takes advantage of both CS and photonics to capture wideband sparse signals with a sub-Nyquist sampling rate. The low-pass filtering function required…

  4. Simulation-Based Stochastic Sensitivity Analysis of a Mach 4.5 Mixed-Compression Intake Performance

    Science.gov (United States)

    Kato, H.; Ito, K.

    2009-01-01

    A sensitivity analysis of a supersonic mixed-compression intake of a variable-cycle turbine-based combined cycle (TBCC) engine is presented. The TBCC engine is designed to power a long-range Mach 4.5 transport capable of antipodal missions studied in the framework of an EU FP6 project, LAPCAT. The nominal intake geometry was designed using the DLR abpi cycle analysis program by taking into account various operating requirements of a typical mission profile. The intake consists of two movable external compression ramps followed by an isolator section with a bleed channel. The compressed air is then diffused through a rectangular-to-circular subsonic diffuser. A multi-block Reynolds-averaged Navier-Stokes (RANS) solver with the Srinivasan-Tannehill equilibrium air model was used to compute the total pressure recovery and mass capture fraction. While RANS simulation of the nominal intake configuration provides more realistic performance characteristics of the intake than the cycle analysis program, the intake design must also take into account in-flight uncertainties for robust intake performance. In this study, we focus on the effects of the geometric uncertainties on pressure recovery and mass capture fraction, and propose a practical approach to simulation-based sensitivity analysis. The method begins by constructing a light-weight analytical model, a radial-basis function (RBF) network, trained via adaptively sampled RANS simulation results. Using the RBF network as the response surface approximation, stochastic sensitivity analysis is performed using the analysis of variance (ANOVA) technique of Sobol. This approach makes it possible to perform a generalized multi-input multi-output sensitivity analysis based on high-fidelity RANS simulation. The resulting Sobol influence indices allow the engineer to identify dominant parameters as well as the degree of interaction among multiple parameters, which can then be fed back into the design cycle.
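
    The surrogate-plus-ANOVA workflow can be sketched as follows, with a cheap toy function standing in for the RANS solver and a pick-freeze Monte Carlo estimator for the first-order Sobol indices (all sizes, the kernel choice, and the toy function are assumptions):

        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from scipy.stats import qmc

        rng = np.random.default_rng(6)

        def expensive_model(x):        # stand-in for the CFD solver
            return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

        d, n_train = 3, 256
        X = qmc.Sobol(d, seed=0).random(n_train)        # space-filling design
        y = expensive_model(X)
        surrogate = RBFInterpolator(X, y, kernel='thin_plate_spline')

        # Pick-freeze estimate of first-order indices S_i on the surrogate.
        N = 4096
        A, B = rng.random((N, d)), rng.random((N, d))
        fA, fB = surrogate(A), surrogate(B)
        var = fA.var()
        for i in range(d):
            ABi = B.copy()
            ABi[:, i] = A[:, i]                          # freeze coordinate i
            S_i = np.mean(fA * (surrogate(ABi) - fB)) / var
            print(f"S_{i + 1} ~ {S_i:.2f}")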

  5. Efficient two-dimensional compressive sensing in MIMO radar

    Science.gov (United States)

    Shahbazi, Nafiseh; Abbasfar, Aliazam; Jabbarian-Jahromi, Mohammad

    2017-12-01

    Compressive sensing (CS) has been used to lower the sampling rate, leading to data reduction for processing in multiple-input multiple-output (MIMO) radar systems. In this paper, we further reduce the computational complexity of a pulse-Doppler collocated MIMO radar by introducing two-dimensional (2D) compressive sensing. To do so, we first introduce a new 2D formulation for the compressed received signals, and then we propose a new measurement matrix design for our 2D compressive sensing model that is based on minimizing the coherence of the sensing matrix using a gradient descent algorithm. The simulation results show that our proposed 2D measurement matrix design using gradient descent (2D-MMDGD) has much lower computational complexity than one-dimensional (1D) methods while performing better than conventional methods such as a Gaussian random measurement matrix.
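
    A generic 1D sketch of coherence-driven measurement matrix design (the paper's 2D formulation is not reproduced; sizes, step size, and the Gram-identity objective are assumptions): gradient descent pushes the normalized Gram matrix toward the identity, which lowers mutual coherence:

        import numpy as np

        rng = np.random.default_rng(7)

        m, n, steps, lr = 20, 60, 2000, 0.01
        Phi = rng.standard_normal((m, n))

        def coherence(Phi):
            D = Phi / np.linalg.norm(Phi, axis=0)   # unit-norm columns
            G = D.T @ D
            return np.max(np.abs(G - np.eye(n)))

        print("initial coherence:", round(coherence(Phi), 3))
        for _ in range(steps):
            D = Phi / np.linalg.norm(Phi, axis=0)
            G = D.T @ D
            # Approximate gradient of 0.5*||G - I||_F^2 w.r.t. D
            # (column normalization treated as constant within a step).
            grad = D @ (G - np.eye(n))
            Phi -= lr * grad
        print("final coherence  :", round(coherence(Phi), 3))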

  6. Assessment of compressive failure process of cortical bone materials using damage-based model.

    Science.gov (United States)

    Ng, Theng Pin; R Koloor, S S; Djuansjah, J R P; Abdul Kadir, M R

    2017-02-01

    The main failure factors of cortical bone are aging or osteoporosis, accidents and high energy trauma, and physiological activities. However, the mechanism of damage evolution coupled with a yield criterion is considered one of the unclear subjects in failure analysis of cortical bone materials. Therefore, this study attempts to assess the structural response and progressive failure process of cortical bone using a brittle damaged plasticity model. For this reason, several compressive tests were performed on cortical bone specimens made of bovine femur in order to obtain the structural response and mechanical properties of the material. A complementary finite element (FE) model of the sample and test was prepared to simulate the elastic-to-damage behavior of the cortical bone using the brittle damaged plasticity model. The FE model was validated comparatively using the predicted and measured structural responses, as load-compressive displacement curves, through simulation and experiment. FE results indicated that the compressive damage initiated and propagated at the central region, where the maximum equivalent plastic strain is computed, which coincided with the degradation of structural compressive stiffness followed by a vast amount of strain energy dissipation. The compressive damage rate parameter, which is a function of the damage parameter and the plastic strain, was examined for different rates. Results show that setting a rate similar to the initial slope of the damage parameter in the experiment gives a better prediction of compressive failure. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. OTDM-WDM Conversion Based on Time-Domain Optical Fourier Transformation with Spectral Compression

    DEFF Research Database (Denmark)

    Mulvad, Hans Christian Hansen; Palushani, Evarist; Galili, Michael

    2011-01-01

    We propose a scheme enabling direct serial-to-parallel conversion of OTDM data tributaries onto a WDM grid, based on optical Fourier transformation with spectral compression. Demonstrations on 320 Gbit/s and 640 Gbit/s OTDM data are shown.

  8. Biomedical sensor design using analog compressed sensing

    Science.gov (United States)

    Balouchestani, Mohammadreza; Krishnan, Sridhar

    2015-05-01

    The main drawback of current healthcare systems is the location-specific nature of the system due to the use of fixed/wired biomedical sensors. Since biomedical sensors are usually driven by a battery, power consumption is the most important factor determining the life of a biomedical sensor. They are also restricted by size, cost, and transmission capacity. Therefore, it is important to reduce the load of sampling by merging the sampling and compression steps, thereby reducing storage usage, transmission times, and power consumption, in order to expand current healthcare systems to Wireless Healthcare Systems (WHSs). In this work, we present an implementation of a low-power biomedical sensor using an analog Compressed Sensing (CS) framework for sparse biomedical signals that addresses both the energy and telemetry bandwidth constraints of wearable and wireless Body-Area Networks (BANs). This architecture enables continuous data acquisition and compression of biomedical signals that are suitable for a variety of diagnostic and treatment purposes. At the transmitter side, an analog-CS framework is applied at the sensing step, before the Analog to Digital Converter (ADC), in order to generate the compressed version of the input analog bio-signal. At the receiver side, a reconstruction algorithm based on the Restricted Isometry Property (RIP) condition is applied in order to reconstruct the original bio-signals from the compressed bio-signals with high probability and sufficient accuracy. We examine the proposed algorithm with healthy and neuropathy surface Electromyography (sEMG) signals. The proposed algorithm achieves an Average Recognition Rate (ARR) of 93% and a reconstruction accuracy of 98.9%. In addition, the proposed architecture reduces the total computation time from 32 to 11.5 seconds at a sampling rate of 29% of the Nyquist rate, with Percentage Residual Difference (PRD) = 26% and Root Mean Squared Error (RMSE) = 3%.

  9. Compression force behaviours: An exploration of the beliefs and values influencing the application of breast compression during screening mammography

    International Nuclear Information System (INIS)

    Murphy, Fred; Nightingale, Julie; Hogg, Peter; Robinson, Leslie; Seddon, Doreen; Mackay, Stuart

    2015-01-01

    This research project investigated the compression behaviours of practitioners during screening mammography, seeking a qualitative understanding of ‘how’ and ‘why’ practitioners apply compression force. With a clear conflict in the existing literature and little scientific evidence to support the reasoning behind the application of compression force, this project investigated the application of compression using a phenomenological approach. Following ethical approval, six focus group interviews were conducted at six different breast screening centres in England. A sample of 41 practitioners was interviewed within the focus groups, together with six one-to-one interviews of mammography educators or clinical placement co-ordinators. The findings revealed two broad categories, humanistic and technological, consisting of 10 themes: client empowerment, white lies, time for interactions, uncertainty of own practice, culture, power, compression controls, digital technology, dose audit safety nets, and numerical scales. All of these themes were derived from 28 units of significant meaning (USM). The results demonstrate a wide variation in the application of compression force, thus offering a possible explanation for the difference between practitioner compression forces found in quantitative studies. Compression force was applied in many different ways due to individual practitioner experiences and behaviour. Furthermore, the culture and practice of the units themselves influenced practitioners' beliefs and attitudes regarding the application of compression force. The strongest recommendation to emerge from this study is the need for peer observation, to enable practitioners to observe and compare their own compression force practice with that of their colleagues. The findings are significant for clinical practice in order to understand how and why compression force is applied

  10. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    OpenAIRE

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnosis-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block sizes…

  11. Compressive Sampling for Non-Imaging Remote Classification

    Science.gov (United States)

    2013-10-22

    …a compressive spectro-polarization imager, a compressive coherence imager to resolve objects through turbulence… The relay lens for UV-CASSI, which focuses the aperture code onto the monochrome detector… shown below in Fig. 3, with a silicon UV-sensitive detector on the left, and a UV…

  12. Microbiological contamination of compressed air used in dentistry: an investigation.

    Science.gov (United States)

    Conte, M; Lynch, R M; Robson, M G

    2001-11-01

    The purpose of this preliminary investigation was twofold: 1) to examine the possibility of cross-contamination between a dental-evacuation system and the compressed air used in dental operatories and 2) to capture and identify the most common microflora in the compressed-air supply. The investigation used swab, water, and air sampling that was designed to track microorganisms from the evacuation system, through the air of the mechanical room, into the compressed-air system, and back to the patient. Samples taken in the vacuum system, the air space in the mechanical room, and the compressed-air storage tank had significantly higher total concentrations of bacteria than the outside air sampled. Samples of the compressed air returning to the operatory were found to match the outside air sample in total bacteria. It was concluded that the air dryer may have played a significant role in the elimination of microorganisms from the dental compressed-air supply.

  13. Evaluation of onboard hyperspectral-image compression techniques for a parallel push-broom sensor

    Energy Technology Data Exchange (ETDEWEB)

    Briles, S.

    1996-04-01

    A single hyperspectral imaging sensor can produce frames with spatially continuous rows of differing, but adjacent, spectral wavelengths. If the frame sample rate of the sensor is such that subsequent hyperspectral frames are spatially shifted by one row, then the sensor can be thought of as a parallel (in wavelength) push-broom sensor. An examination of data compression techniques for such a sensor is presented. The compression techniques are intended to be implemented onboard a space-based platform and to have implementation speeds that match the data rate of the sensor. The data partitions examined extend from individually operating on a single hyperspectral frame to operating on a data cube comprising the two spatial axes and the spectral axis. The compression algorithms investigated utilize JPEG-based image compression, wavelet-based compression, and differential pulse code modulation. Algorithm performance is quantitatively presented in terms of root-mean-squared error and root-mean-squared correlation coefficient error. Implementation issues are considered in algorithm development.

  14. Mammographic compression in Asian women.

    Science.gov (United States)

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area in Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, increased MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.

  15. A compressed sensing based reconstruction algorithm for synchrotron source propagation-based X-ray phase contrast computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Melli, Seyed Ali, E-mail: sem649@mail.usask.ca [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Wahid, Khan A. [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Babyn, Paul [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada); Montgomery, James [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Snead, Elisabeth [Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, SK (Canada); El-Gayed, Ali [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Pettitt, Murray; Wolkowski, Bailey [College of Agriculture and Bioresources, University of Saskatchewan, Saskatoon, SK (Canada); Wesolowski, Michal [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada)

    2016-01-11

    Synchrotron source propagation-based X-ray phase contrast computed tomography is increasingly used in pre-clinical imaging. However, it typically requires a large number of projections, and subsequently a large radiation dose, to produce high quality images. To improve the applicability of this imaging technique, reconstruction algorithms that can reduce the radiation dose and acquisition time without degrading image quality are needed. The proposed research focused on using a novel combination of Douglas–Rachford splitting and randomized Kaczmarz algorithms to solve large-scale total variation based optimization in a compressed sensing framework to reconstruct 2D images from a reduced number of projections. Visual assessment and quantitative performance evaluation of a synthetic abdomen phantom and a real reconstructed image of an ex-vivo slice of canine prostate tissue demonstrate that the proposed algorithm is competitive in the reconstruction process compared with other well-known algorithms. An additional potential benefit of reducing the number of projections would be a reduction in the time available for motion artifacts to occur if the sample moves during image acquisition. Use of this reconstruction algorithm to reduce the required number of projections in synchrotron source propagation-based X-ray phase contrast computed tomography is an effective form of dose reduction that may pave the way for imaging of in-vivo samples.
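
    One ingredient of the proposed combination, randomized Kaczmarz, is easy to sketch on a dense toy system (the Douglas–Rachford/TV part and the CT projection operator are omitted); each step projects the iterate onto a single randomly chosen row's hyperplane, touching one row at a time, which is attractive at CT scale:

        import numpy as np

        rng = np.random.default_rng(8)

        m, n = 300, 100
        A = rng.standard_normal((m, n))
        x_true = rng.standard_normal(n)
        b = A @ x_true                           # consistent toy system

        row_norms2 = np.sum(A ** 2, axis=1)
        probs = row_norms2 / row_norms2.sum()    # sample rows ~ squared norm

        x = np.zeros(n)
        for _ in range(20000):
            i = rng.choice(m, p=probs)
            x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]   # row projection

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))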

  16. Hot-compress: A new postdeposition treatment for ZnO-based flexible dye-sensitized solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Haque Choudhury, Mohammad Shamimul, E-mail: shamimul129@gmail.com [Department of Frontier Material, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, Aichi 466-8555 (Japan); Department of Electrical and Electronic Engineering, International Islamic University Chittagong, b154/a, College Road, Chittagong 4203 (Bangladesh); Kishi, Naoki; Soga, Tetsuo [Department of Frontier Material, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, Aichi 466-8555 (Japan)

    2016-08-15

    Highlights: • A new postdeposition treatment named hot-compress is introduced. • Hot-compression gives a homogeneous, compact-layer ZnO photoanode. • I-V and EIS analysis data confirm the efficacy of this method. • Charge transport resistance was reduced by the application of hot-compression. - Abstract: This article introduces a new postdeposition treatment, named hot-compress, for flexible zinc oxide-based dye-sensitized solar cells. The treatment consists of applying compression pressure at an elevated temperature. The optimum compression pressure of 130 MPa at an optimum compression temperature of 70 °C gives better photovoltaic performance compared to conventional cells. The aptness of this method was confirmed by investigating scanning electron microscopy images, X-ray diffraction, current-voltage and electrochemical impedance spectroscopy analyses of the prepared cells. Proper heating during compression lowers the charge transport resistance and lengthens the electron lifetime of the device. As a result, the overall power conversion efficiency of the device was improved by about 45% compared to the conventional room temperature compressed cell.

  17. File compression and encryption based on LLS and arithmetic coding

    Science.gov (United States)

    Yu, Changzhi; Li, Hengjian; Wang, Xiyu

    2018-03-01

    We propose a file compression model based on arithmetic coding. First, the original symbols to be encoded are input to the encoder one by one; a set of chaotic sequences is produced using the logistic and sine chaos system (LLS), and the values of these chaotic sequences randomly modify the upper and lower limits of the current symbol's probability. To achieve encryption, we modify the upper and lower limits of all character probabilities when encoding each symbol. Experimental results show that the proposed model achieves data encryption while achieving almost the same compression efficiency as plain arithmetic coding.
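
    The key-stream generation step can be sketched as below (map parameters, seeds, and the mixing rule are assumptions; in the paper the resulting values perturb the symbol-probability bounds inside the arithmetic coder):

        import numpy as np

        def lls_sequence(n, x0=0.37, y0=0.62, r=3.99):
            # Iterate logistic and sine maps and mix them into one key stream.
            xs = np.empty(n)
            x, y = x0, y0
            for i in range(n):
                x = r * x * (1.0 - x)            # logistic map
                y = np.sin(np.pi * y)            # sine map
                xs[i] = (x + y) % 1.0            # simple mixing (assumed)
            return xs

        keystream = lls_sequence(10)
        print(np.round(keystream, 4))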

  18. Prediction of compressibility parameters of the soils using artificial neural network.

    Science.gov (United States)

    Kurnaz, T Fikret; Dagdeviren, Ugur; Yildiz, Murat; Ozkan, Ozhan

    2016-01-01

    The compression index and recompression index are among the most important compressibility parameters for settlement calculations in fine-grained soil layers. These parameters can be determined by carrying out a laboratory oedometer test on undisturbed samples; however, the test is quite time-consuming and expensive. Therefore, many empirical formulas based on regression analysis have been presented to estimate the compressibility parameters from soil index properties. In this paper, an artificial neural network (ANN) model is suggested for the prediction of compressibility parameters from basic soil properties. For this purpose, the input parameters are selected as the natural water content, initial void ratio, liquid limit and plasticity index. In this model, two output parameters, the compression index and the recompression index, are predicted in a combined network structure. As a result of the study, the proposed ANN model is successful for the prediction of the compression index; however, the predicted recompression index values are less satisfactory than those of the compression index.
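
    The described network structure is easy to sketch with scikit-learn (synthetic data loosely shaped like a familiar Cc-LL correlation stand in for the oedometer database; the hidden-layer size is an assumption): four index-property inputs and two outputs in one combined network:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(9)

        n = 400
        w_n = rng.uniform(20, 60, n)        # natural water content (%)
        e0 = rng.uniform(0.5, 1.6, n)       # initial void ratio
        LL = rng.uniform(30, 80, n)         # liquid limit (%)
        PI = rng.uniform(10, 45, n)         # plasticity index (%)
        X = np.column_stack([w_n, e0, LL, PI])

        # Toy targets loosely shaped like a familiar Cc correlation (assumed).
        Cc = 0.009 * (LL - 10) + 0.01 * rng.standard_normal(n)
        Cr = 0.15 * Cc + 0.005 * rng.standard_normal(n)
        Y = np.column_stack([Cc, Cr])

        model = make_pipeline(
            StandardScaler(),
            MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
        )
        model.fit(X[:300], Y[:300])
        print("R^2 on held-out samples:", round(model.score(X[300:], Y[300:]), 3))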

  19. Optimisation algorithms for ECG data compression.

    Science.gov (United States)

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a dynamic programming algorithm of cubic time complexity. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
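
    The exact network formulation is not reproduced in this record; as a rough sketch of the underlying idea, the dynamic program below selects k of n samples so that piecewise-linear interpolation through the kept samples minimises total squared error. The cost table, sizes, and test signal are illustrative assumptions.

        # Sketch: optimal sample-subset selection by dynamic programming.
        import numpy as np

        def best_subset(sig, k):
            n = len(sig)
            cost = np.full((n, n), np.inf)   # cost[i, j]: keep i and j, drop between
            for i in range(n):
                for j in range(i + 1, n):
                    t = np.linspace(0.0, 1.0, j - i + 1)
                    interp = sig[i] + t * (sig[j] - sig[i])
                    cost[i, j] = np.sum((sig[i:j + 1] - interp) ** 2)
            dp = np.full((n, k), np.inf)     # dp[j, m]: best error, m+1 kept, ending at j
            dp[0, 0] = 0.0
            back = np.zeros((n, k), dtype=int)
            for m in range(1, k):
                for j in range(1, n):
                    errs = dp[:j, m - 1] + cost[:j, j]
                    back[j, m] = int(np.argmin(errs))
                    dp[j, m] = errs[back[j, m]]
            idx, j = [n - 1], n - 1          # backtrack the kept-sample indices
            for m in range(k - 1, 0, -1):
                j = back[j, m]
                idx.append(j)
            return idx[::-1], dp[n - 1, k - 1]

        sig = np.sin(np.linspace(0, 3 * np.pi, 60))
        print(best_subset(sig, k=8))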

  20. The possibilities of compressed-sensing-based Kirchhoff prestack migration

    KAUST Repository

    Aldawood, Ali; Hoteit, Ibrahim; Alkhalifah, Tariq Ali

    2014-01-01

    An approximate subsurface reflectivity distribution of the earth is usually obtained through the migration process. However, conventional migration algorithms, including those based on the least-squares approach, yield structure descriptions that are slightly smeared and of low resolution, caused by the common migration artifacts due to limited aperture, coarse sampling, band-limited source, and low subsurface illumination. To alleviate this problem, we use the fact that minimizing the L1-norm of a signal promotes its sparsity. Thus, we formulated the Kirchhoff migration problem as a compressed-sensing (CS) basis pursuit denoise problem to solve for highly focused migrated images compared with those obtained by standard and least-squares migration algorithms. The results for various subsurface reflectivity models revealed that solutions computed using the CS-based migration provide more accurate subsurface reflectivity locations and amplitudes. We applied the CS algorithm to image synthetic data from a fault model using dense and sparse acquisition geometries. Our results suggest that the proposed approach may still provide highly resolved images with a relatively small number of measurements. We also evaluated the robustness of the basis pursuit denoise algorithm in the presence of Gaussian random observational noise and in the case of imaging the recorded data with inaccurate migration velocities.
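
    For reference, the basis pursuit denoise formulation invoked here has the standard form below; the symbol names (reflectivity model m, Kirchhoff modelling operator L, recorded data d, noise level sigma) are our annotation, not necessarily the authors' notation.

        \min_{\mathbf{m}} \; \|\mathbf{m}\|_{1}
        \quad \text{subject to} \quad
        \|\mathbf{L}\,\mathbf{m} - \mathbf{d}\|_{2} \leq \sigma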

  1. The possibilities of compressed-sensing-based Kirchhoff prestack migration

    KAUST Repository

    Aldawood, Ali

    2014-05-08

    An approximate subsurface reflectivity distribution of the earth is usually obtained through the migration process. However, conventional migration algorithms, including those based on the least-squares approach, yield structure descriptions that are slightly smeared and of low resolution, caused by the common migration artifacts due to limited aperture, coarse sampling, band-limited source, and low subsurface illumination. To alleviate this problem, we use the fact that minimizing the L1-norm of a signal promotes its sparsity. Thus, we formulated the Kirchhoff migration problem as a compressed-sensing (CS) basis pursuit denoise problem to solve for highly focused migrated images compared with those obtained by standard and least-squares migration algorithms. The results for various subsurface reflectivity models revealed that solutions computed using the CS-based migration provide more accurate subsurface reflectivity locations and amplitudes. We applied the CS algorithm to image synthetic data from a fault model using dense and sparse acquisition geometries. Our results suggest that the proposed approach may still provide highly resolved images with a relatively small number of measurements. We also evaluated the robustness of the basis pursuit denoise algorithm in the presence of Gaussian random observational noise and in the case of imaging the recorded data with inaccurate migration velocities.

  2. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    Science.gov (United States)

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.
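
    As a toy illustration of the codebook-training stage only, the snippet below trains a vector-quantization codebook on 4x4 image blocks with standard K-means rather than the paper's energy-based modification; the block size, codebook size, and random test image are assumptions.

        # Sketch: VQ codebook training and coding of 4x4 blocks via K-means.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)

        # Cut the image into non-overlapping 4x4 blocks -> 16-D vectors.
        blocks = img.reshape(16, 4, 16, 4).swapaxes(1, 2).reshape(-1, 16)

        codebook = KMeans(n_clusters=32, n_init=4, random_state=0).fit(blocks)
        indices = codebook.predict(blocks)             # transmitted symbols
        decoded = codebook.cluster_centers_[indices]   # decoder table lookup
        print(float(np.mean((blocks - decoded) ** 2)))  # per-block distortion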

  3. Micro-Doppler Ambiguity Resolution Based on Short-Time Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Jing-bo Zhuang

    2015-01-01

    Full Text Available When using a long-range radar (LRR) to track a target with micromotion, the micro-Doppler embodied in the radar echoes may suffer from an ambiguity problem. In this paper, we propose a novel method based on compressed sensing (CS) to solve micro-Doppler ambiguity. According to the restricted isometry property (RIP) requirement, a sparse probing pulse train with random transmitting times is designed. After matched filtering, the slow-time echo signals of the micromotion target can be viewed as a randomly sparse sampling of the Doppler spectrum. Several successive pulses are selected to form a short-time window, and the CS sensing matrix can be built according to the time stamps of these pulses. Then, by performing Orthogonal Matching Pursuit (OMP), the unambiguous micro-Doppler spectrum can be obtained. The proposed algorithm is verified using echo signals generated according to the theoretical model and signals with a micro-Doppler signature produced using the commercial electromagnetic simulation software FEKO.
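
    A minimal sketch of the OMP recovery step is given below; the dictionary built from random pulse time stamps, the Doppler grid size, and the two-component test spectrum are illustrative assumptions, not the paper's setup.

        # Sketch: OMP recovering a sparse spectrum x from y = A x.
        import numpy as np

        def omp(A, y, k):
            """Orthogonal Matching Pursuit: greedily pick k atoms of A."""
            residual, support = y.copy(), []
            for _ in range(k):
                support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
                sub = A[:, support]
                coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
                residual = y - sub @ coef
            x = np.zeros(A.shape[1], dtype=complex)
            x[support] = coef
            return x

        rng = np.random.default_rng(2)
        t = np.sort(rng.uniform(0, 1, 40))      # random pulse time stamps
        f = np.arange(128)                      # Doppler bins (assumed grid)
        A = np.exp(2j * np.pi * np.outer(t, f)) / np.sqrt(40)
        x_true = np.zeros(128, dtype=complex)
        x_true[[17, 90]] = [1.0, 0.6]
        x_hat = omp(A, A @ x_true, k=2)
        print(np.flatnonzero(np.abs(x_hat) > 0.1))   # expected support: 17, 90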

  4. CEPRAM: Compression for Endurance in PCM RAM

    OpenAIRE

    González Alberquilla, Rodrigo; Castro Rodríguez, Fernando; Piñuel Moreno, Luis; Tirado Fernández, Francisco

    2017-01-01

    We deal with the endurance problem of Phase Change Memories (PCM) by proposing Compression for Endurance in PCM RAM (CEPRAM), a technique to extend the lifespan of PCM-based main memory through compression. We introduce a total of three compression schemes based on already existing schemes, but targeting compression for PCM-based systems. We do a two-level evaluation. First, we quantify the performance of the compression, in terms of compressed size, bit-flips and how they are affected by e...

  5. Influence of chest compression artefact on capnogram-based ventilation detection during out-of-hospital cardiopulmonary resuscitation.

    Science.gov (United States)

    Leturiondo, Mikel; Ruiz de Gauna, Sofía; Ruiz, Jesus M; Julio Gutiérrez, J; Leturiondo, Luis A; González-Otero, Digna M; Russell, James K; Zive, Dana; Daya, Mohamud

    2018-03-01

    Capnography has been proposed as a method for monitoring the ventilation rate during cardiopulmonary resuscitation (CPR). A high incidence (above 70%) of capnograms distorted by chest-compression-induced oscillations has previously been reported in out-of-hospital (OOH) CPR. The aim of the study was to better characterize the chest compression artefact and to evaluate its influence on the performance of a capnogram-based ventilation detector during OOH CPR. Data from the MRx monitor-defibrillator were extracted from OOH cardiac arrest episodes. For each episode, the presence of chest compression artefact was annotated in the capnogram. Concurrent compression depth and transthoracic impedance signals were used to identify chest compressions and to annotate ventilations, respectively. We designed a capnogram-based ventilation detection algorithm and tested its performance with clean and distorted episodes. Data were collected from 232 episodes comprising 52 654 ventilations, with a mean (±SD) of 227 (±118) per episode. Overall, 42% of the capnograms were distorted. The presence of chest compression artefact degraded algorithm performance in terms of ventilation detection, estimation of ventilation rate, and the ability to detect hyperventilation. Capnogram-based ventilation detection during CPR using our algorithm was compromised by the presence of chest compression artefact. In particular, artefact spanning from the plateau to the baseline strongly degraded ventilation detection and caused a high number of false hyperventilation alarms. Further research is needed to reduce the impact of chest compression artefact on capnographic ventilation monitoring. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Blind Compressed Sensing Parameter Estimation of Non-cooperative Frequency Hopping Signal

    Directory of Open Access Journals (Sweden)

    Chen Ying

    2016-10-01

    Full Text Available To overcome the disadvantages of a non-cooperative frequency hopping communication system, such as a high sampling rate and inadequate prior information, parameter estimation based on Blind Compressed Sensing (BCS) is proposed. The signal is precisely reconstructed by alternating iterations of sparse coding and basis updating, and the hopping frequencies are directly estimated from the results. Compared with conventional compressive sensing, blind compressed sensing does not require prior information about the frequency hopping signals; hence, it offers an effective solution to the inadequate-prior-information problem. In the proposed method, the signal is first modeled and then reconstructed by Orthonormal Block Diagonal Blind Compressed Sensing (OBD-BCS), and the hopping frequencies and hop period are finally estimated. The simulation results suggest that the proposed method can reconstruct and estimate the parameters of non-cooperative frequency hopping signals with a low signal-to-noise ratio.

  7. A blended pressure/density based method for the computation of incompressible and compressible flows

    International Nuclear Information System (INIS)

    Rossow, C.-C.

    2003-01-01

    An alternative method to low-speed preconditioning for the computation of nearly incompressible flows with compressible methods is developed. For this approach, the leading terms of the flux difference splitting (FDS) approximate Riemann solver are analyzed in the incompressible limit. In combination with the requirement that the velocity field be divergence-free, an elliptic equation for a pressure correction enforcing the divergence-free velocity field on the discrete level is derived. The pressure correction equation established is shown to be equivalent to classical methods for incompressible flows. In order to allow the computation of flows at all speeds, a blending technique for the transition from the incompressible, pressure-based formulation to the compressible, density-based formulation is established. It is found necessary to use preconditioning with this blending technique to account for a remaining 'compressible' contribution in the incompressible limit, and a suitable matrix directly applicable to conservative residuals is derived. Thus, a coherent framework is established to cover the discretization of both incompressible and compressible flows. Compared with standard preconditioning techniques, the blended pressure/density based approach showed improved robustness for high-lift flows close to separation.

  8. Geothermally Coupled Well-Based Compressed Air Energy Storage

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, C L [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bearden, Mark D [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Horner, Jacob A [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Appriou, Delphine [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McGrail, B Peter [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-12-01

    This project assessed the technical and economic feasibility of implementing geothermally coupled well-based CAES for grid-scale energy storage. Based on an evaluation of design specifications for a range of casing grades common in U.S. oil and gas fields, a 5-MW CAES project could be supported by twenty to twenty-five 5,000-foot, 7-inch wells using lower-grade casing, and as few as eight such wells for higher-end casing grades. Using this information, along with data on geothermal resources, well density, and potential future markets for energy storage systems, The Geysers geothermal field was selected to parameterize a case study to evaluate the potential match between the proven geothermal resource present at The Geysers and the field’s existing well infrastructure. Based on calculated wellbore compressed air mass, the study shows that a single average geothermal production well could provide enough geothermal energy to support a 15.4-MW (gross) power generation facility using 34 to 35 geothermal wells repurposed for compressed air storage, resulting in a simplified levelized cost of electricity (sLCOE) estimated at 11.2 ¢/kWh (Table S.1). Accounting for the power loss to the geothermal power project associated with diverting geothermal resources for air heating results in a net 2-MW decrease in generation capacity, increasing the CAES project’s sLCOE by 1.8 ¢/kWh.

  9. Geothermally Coupled Well-Based Compressed Air Energy Storage

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, Casie L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bearden, Mark D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Horner, Jacob A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cabe, James E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Appriou, Delphine [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McGrail, B. Peter [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-12-20

    This project assessed the technical and economic feasibility of implementing geothermally coupled well-based CAES for grid-scale energy storage. Based on an evaluation of design specifications for a range of casing grades common in U.S. oil and gas fields, a 5-MW CAES project could be supported by twenty to twenty-five 5,000-foot, 7-inch wells using lower-grade casing, and as few as eight such wells for higher-end casing grades. Using this information, along with data on geothermal resources, well density, and potential future markets for energy storage systems, The Geysers geothermal field was selected to parameterize a case study to evaluate the potential match between the proven geothermal resource present at The Geysers and the field’s existing well infrastructure. Based on calculated wellbore compressed air mass, the study shows that a single average geothermal production well could provide enough geothermal energy to support a 15.4-MW (gross) power generation facility using 34 to 35 geothermal wells repurposed for compressed air storage, resulting in a simplified levelized cost of electricity (sLCOE) estimated at 11.2 ¢/kWh (Table S.1). Accounting for the power loss to the geothermal power project associated with diverting geothermal resources for air heating results in a net 2-MW decrease in generation capacity, increasing the CAES project’s sLCOE by 1.8 ¢/kWh.

  10. Chloride transport under compressive load in bacteria-based self-healing concrete

    NARCIS (Netherlands)

    Binti Md Yunus, B.; Schlangen, E.; Jonkers, H.M.

    2015-01-01

    An experiment was carried out in this study to investigate the effect of compressive load on chloride penetration in self-healing concrete containing a bacteria-based healing agent. A bacteria-based healing agent with a particle size fraction of 2 mm – 4 mm was used in this contribution. ESEM

  11. Development and evaluation of a device for simultaneous uniaxial compression and optical imaging of cartilage samples in vitro

    Energy Technology Data Exchange (ETDEWEB)

    Steinert, Marian; Kratz, Marita; Jones, David B. [Department of Experimental Orthopaedics and Biomechanics, Philipps University Marburg, Baldingerstr., 35043 Marburg (Germany); Jaedicke, Volker; Hofmann, Martin R. [Photonics and Terahertz Technology, Ruhr University Bochum, Universitätsstr. 150, 44801 Bochum (Germany)

    2014-10-15

    In this paper, we present a system that allows imaging of cartilage tissue via optical coherence tomography (OCT) during controlled uniaxial unconfined compression of cylindrical osteochondral cores in vitro. We describe the system design and conduct a static and dynamic performance analysis. While reference measurements yield a full-scale maximum deviation of 0.14% in displacement, force can be measured with a full-scale standard deviation of 1.4%. The dynamic performance evaluation indicates high accuracy in force-controlled mode up to 25 Hz, but it also reveals a strong effect of the variance of sample mechanical properties on the tracking performance under displacement control. To counterbalance these disturbances, an adaptive feed-forward approach was applied, which finally resulted in an improved displacement tracking accuracy up to 3 Hz. A built-in imaging probe allows on-line monitoring of the sample via OCT while it is being loaded in the cultivation chamber. We show that cartilage topology and defects in the tissue can be observed and demonstrate the visualization of the compression process during static mechanical loading.

  12. Intelligent condition monitoring method for bearing faults from highly compressed measurements using sparse over-complete features

    Science.gov (United States)

    Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.

    2018-01-01

    Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a rolling element bearing. Of these studies, methods based on compressed sensing (CS) have received some attention recently due to their ability to allow one to sample below the Nyquist sampling rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS-based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse-autoencoder-based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on a sparse autoencoder is used to learn over-complete sparse representations of these compressed datasets. Finally, fault classification is achieved in two stages: a pre-training stage based on a stacked autoencoder and a softmax regression layer, which forms the deep net (the first stage), and a re-training stage based on the backpropagation (BP) algorithm, which performs the fine-tuning (the second stage). The experimental results show that the proposed method is able to achieve high levels of accuracy even with extremely compressed measurements compared with the existing techniques.
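
    As a minimal illustration of the measurement step only (the autoencoder and classifier stages are beyond a short sketch), the snippet below compresses a stand-in vibration frame with a random Gaussian sensing matrix; the frame, the matrix, and the compression ratio are assumptions.

        # Sketch: producing highly compressed measurements y = Phi @ x.
        import numpy as np

        rng = np.random.default_rng(3)
        x = np.sin(2 * np.pi * 37 * np.linspace(0, 1, 1024))  # stand-in frame
        m = 64                                                # 16x compression (assumed)
        Phi = rng.normal(0, 1 / np.sqrt(m), size=(m, x.size))  # Gaussian sensing matrix
        y = Phi @ x                                           # compressed measurements
        print(x.size, "->", y.size)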

  13. A joint image encryption and watermarking algorithm based on compressive sensing and chaotic map

    International Nuclear Information System (INIS)

    Xiao Di; Cai Hong-Kun; Zheng Hong-Ying

    2015-01-01

    In this paper, a compressive sensing (CS) and chaotic map-based joint image encryption and watermarking algorithm is proposed. The transform-domain coefficients of the original image are first scrambled by the Arnold map. Then the watermark is attached to the scrambled data. By compressive sensing, a set of watermarked measurements is obtained as the watermarked cipher image. In this algorithm, watermark embedding and data compression can be performed without knowing the original image; similarly, watermark extraction does not interfere with decryption. Due to the characteristics of CS, this algorithm features a compressible cipher image size, flexible watermark capacity, and lossless watermark extraction from the compressed cipher image, as well as robustness against packet loss. Simulation results and analyses show that the algorithm achieves good performance in terms of security, watermark capacity, extraction accuracy, reconstruction, robustness, etc. (paper)
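
    The scrambling step can be sketched with the classic Arnold cat map on a square coefficient array; the array size and round count below are arbitrary choices for illustration.

        # Sketch: Arnold-map scrambling of an N x N array (classic parameters).
        import numpy as np

        def arnold(img, rounds=1):
            n = img.shape[0]                  # requires a square array
            out = img
            for _ in range(rounds):
                x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
                nx, ny = (x + y) % n, (x + 2 * y) % n   # cat-map coordinates
                scr = np.empty_like(out)
                scr[nx, ny] = out[x, y]
                out = scr
            return out

        a = np.arange(16).reshape(4, 4)
        print(arnold(a, rounds=2))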

  14. Edge-based compression of cartoon-like images with homogeneous diffusion

    DEFF Research Database (Denmark)

    Mainberger, Markus; Bruhn, Andrés; Weickert, Joachim

    2011-01-01

    Edges provide semantically important image features. In this paper a lossy compression method for cartoon-like images is presented, which is based on edge information. Edges together with some adjacent grey/colour values are extracted and encoded using a classical edge detector, binary compressio...

  15. Adaptive bit plane quadtree-based block truncation coding for image compression

    Science.gov (United States)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low-bit-rate compression, at the cost of lower decoded image quality, especially for images with rich texture. To solve this problem, a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed in this paper. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on the MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with some other state-of-the-art BTC variants, so it is desirable for real-time image compression applications.
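
    For reference, plain AMBTC on a single block looks as follows; the quadtree partitioning and adaptive bit-plane selection of the paper are omitted, and the random test block is an assumption.

        # Sketch: AMBTC on one 4x4 block (bit plane + two reconstruction levels).
        import numpy as np

        def ambtc_block(block):
            mean = block.mean()
            plane = block >= mean                       # transmitted bit plane
            high = block[plane].mean() if plane.any() else mean
            low = block[~plane].mean() if (~plane).any() else mean
            recon = np.where(plane, high, low)
            return plane, low, high, recon

        rng = np.random.default_rng(4)
        blk = rng.integers(0, 256, (4, 4)).astype(float)
        plane, low, high, recon = ambtc_block(blk)
        print(float(np.mean((blk - recon) ** 2)))       # block distortion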

  16. Compression Characteristics of Solid Wastes as Backfill Materials

    OpenAIRE

    Meng Li; Jixiong Zhang; Rui Gao

    2016-01-01

    A self-made large-diameter compression steel chamber and a SANS material testing machine were chosen to perform a series of compression tests in order to fully understand the compression characteristics of differently graded filling gangue samples. The stress-deformation modulus and stress-compression degree relationships were analyzed comparatively. The results showed that, during compression, the deformation modulus of gangue grew linearly with stress, the overall relationship bet...

  17. Adaptive Binary Arithmetic Coder-Based Image Feature and Segmentation in the Compressed Domain

    Directory of Open Access Journals (Sweden)

    Hsi-Chin Hsin

    2012-01-01

    Full Text Available Image compression is necessary in various applications, especially for efficient transmission over a band-limited channel. It is thus desirable to be able to segment an image in the compressed domain directly, such that the burden of decompression computation can be avoided. Motivated by the adaptive binary arithmetic coder (MQ coder) of JPEG2000, we propose an efficient scheme to segment the feature vectors that are extracted from the code stream of an image. We modify the Compression-based Texture Merging (CTM) algorithm to alleviate the influence of the over-merging problem by making use of the rate-distortion information. Experimental results show that the MQ coder-based image segmentation is preferable in terms of the boundary displacement error (BDE) measure. It has the advantage of saving computational cost, as the segmentation results even at low rates of bits per pixel (bpp) are satisfactory.

  18. Mechanical properties of tannin-based rigid foams undergoing compression

    Energy Technology Data Exchange (ETDEWEB)

    Celzard, A., E-mail: Alain.Celzard@enstib.uhp-nancy.fr [Institut Jean Lamour - UMR CNRS 7198, CNRS - Nancy-Universite - UPV-Metz, Departement Chimie et Physique des Solides et des Surfaces, ENSTIB, 27 rue du Merle Blanc, BP 1041, 88051 Epinal cedex 9 (France); Zhao, W. [Institut Jean Lamour - UMR CNRS 7198, CNRS - Nancy-Universite - UPV-Metz, Departement Chimie et Physique des Solides et des Surfaces, ENSTIB, 27 rue du Merle Blanc, BP 1041, 88051 Epinal cedex 9 (France); Pizzi, A. [ENSTIB-LERMAB, Nancy-University, 27 rue du Merle Blanc, BP 1041, 88051 Epinal cedex 9 (France); Fierro, V. [Institut Jean Lamour - UMR CNRS 7198, CNRS - Nancy-Universite - UPV-Metz, Departement Chimie et Physique des Solides et des Surfaces, ENSTIB, 27 rue du Merle Blanc, BP 1041, 88051 Epinal cedex 9 (France)

    2010-06-25

    The mechanical properties of a new class of extremely lightweight tannin-based materials, namely organic foams and their carbonaceous counterparts, are detailed. Scaling laws are shown to correctly describe the observed behaviour. Information about the mechanical characteristics of the elementary forces acting within these solids is derived. It is suggested that the organic materials present a rather bending-dominated behaviour and are partly plastic. On the contrary, carbon foams obtained by pyrolysis of the former present a fracture-dominated behaviour and are purely brittle. These conclusions are supported by the differences in the exponent describing the change of Young's modulus as a function of relative density, while the exponent describing compressive strength is unchanged. Features of the densification strain also support these conclusions. Carbon foams of very low density may absorb high energy when compressed, making them valuable materials for crash protection.
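
    The scaling laws in question are presumably of the classical Gibson-Ashby form for cellular solids, written generically below; the exponents n and m are the material-dependent quantities discussed above, and the subscript s denotes the solid cell-wall material (this generic statement is our annotation, not the paper's equations).

        \frac{E}{E_s} \propto \left(\frac{\rho}{\rho_s}\right)^{n},
        \qquad
        \frac{\sigma^{*}}{\sigma_s} \propto \left(\frac{\rho}{\rho_s}\right)^{m}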

  19. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2012-01-01

    Full Text Available An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.

  20. Compressive sensing based algorithms for electronic defence

    CERN Document Server

    Mishra, Amit Kumar

    2017-01-01

    This book details some of the major developments in the implementation of compressive sensing in radio applications for electronic defense and warfare communication use. It provides a comprehensive background to the subject and at the same time describes some novel algorithms. It also investigates application value and performance-related parameters of compressive sensing in scenarios such as direction finding, spectrum monitoring, detection, and classification.

  1. Compressed normalized block difference for object tracking

    Science.gov (United States)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust and real-time tracking. Compressive sensing has provided technical support for real-time feature extraction. However, all existing compressive trackers are based on the compressed Haar-like feature, and how to compress many more excellent high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature can be obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on the compressed Haar-like feature, in terms of AUC, SR, and precision.
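
    A rough sketch of the feature construction is given below: block-pair normalized differences form a high-dimensional pool, which a sparse random Gaussian matrix then compresses. The block size, pair count, output dimension, and sparsity pattern are assumptions for illustration.

        # Sketch: normalized block difference (NBD) pool + random projection.
        import numpy as np

        def nbd(img, a, b, s=4):
            """Normalized difference of two s x s block means, analogous to
            NPD's (p1 - p2) / (p1 + p2) with blocks instead of pixels."""
            m1 = img[a[0]:a[0] + s, a[1]:a[1] + s].mean()
            m2 = img[b[0]:b[0] + s, b[1]:b[1] + s].mean()
            return 0.0 if m1 + m2 == 0 else (m1 - m2) / (m1 + m2)

        rng = np.random.default_rng(5)
        patch = rng.integers(0, 256, (32, 32)).astype(float)
        pairs = [(tuple(rng.integers(0, 28, 2)), tuple(rng.integers(0, 28, 2)))
                 for _ in range(500)]                    # high-dimensional pool
        feats = np.array([nbd(patch, a, b) for a, b in pairs])
        # Sparse random Gaussian measurement matrix (90% of entries zeroed).
        R = rng.normal(0, 1, (30, feats.size)) * (rng.random((30, feats.size)) < 0.1)
        print((R @ feats).shape)                         # compressed 30-D feature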

  2. Influence of breast compression pressure on the performance of population-based mammography screening.

    Science.gov (United States)

    Holland, Katharina; Sechopoulos, Ioannis; Mann, Ritse M; den Heeten, Gerard J; van Gils, Carla H; Karssemeijer, Nico

    2017-11-28

    In mammography, breast compression is applied to reduce the thickness of the breast. While it is widely accepted that firm breast compression is needed to ensure acceptable image quality, guidelines remain vague about how much compression should be applied during mammogram acquisition. A quantitative parameter indicating the desirable amount of compression is not available. Consequently, little is known about the relationship between the amount of breast compression and breast cancer detectability. The purpose of this study is to determine the effect of breast compression pressure in mammography on breast cancer screening outcomes. We used digital image analysis methods to determine breast volume, percent dense volume, and pressure from 132,776 examinations of 57,179 women participating in the Dutch population-based biennial breast cancer screening program. Pressure was estimated by dividing the compression force by the area of the contact surface between breast and compression paddle. The data was subdivided into quintiles of pressure and the number of screen-detected cancers, interval cancers, false positives, and true negatives were determined for each group. Generalized estimating equations were used to account for correlation between examinations of the same woman and for the effect of breast density and volume when estimating sensitivity, specificity, and other performance measures. Sensitivity was computed using interval cancers occurring between two screening rounds and using interval cancers within 12 months after screening. Pair-wise testing for significant differences was performed. Percent dense volume increased with increasing pressure, while breast volume decreased. Sensitivity in quintiles with increasing pressure was 82.0%, 77.1%, 79.8%, 71.1%, and 70.8%. Sensitivity based on interval cancers within 12 months was significantly lower in the highest pressure quintile compared to the third (84.3% vs 93.9%, p = 0.034). Specificity was lower in the

  3. An Improved Fast Compressive Tracking Algorithm Based on Online Random Forest Classifier

    Directory of Open Access Journals (Sweden)

    Xiong Jintao

    2016-01-01

    Full Text Available The fast compressive tracking (FCT) algorithm is a simple and efficient algorithm proposed in recent years. However, it has difficulty dealing with factors such as occlusion, appearance changes, and pose variation during processing. The reasons are twofold. First, although the naive Bayes classifier is fast to train, it is not robust to noise. Second, the parameters must be varied to match each unique environment for accurate tracking. In this paper, we propose an improved fast compressive tracking algorithm based on an online random forest (FCT-ORF) for robust visual tracking. First, we combine ideas from adaptive compressive sensing theory regarding the weighted random projection to exploit both local and discriminative information of the object. Second, we use an online random forest classifier for online tracking, which is demonstrated to be more robust to noise and to have high computational efficiency. The experimental results show that the proposed algorithm performs better than the fast compressive tracking algorithm in the presence of occlusion, appearance changes, and pose variation.

  4. The Effect of Alkaline Activator Ratio on the Compressive Strength of Fly Ash-Based Geopolymer Paste

    Science.gov (United States)

    Lăzărescu, A. V.; Szilagyi, H.; Baeră, C.; Ioani, A.

    2017-06-01

    Alkaline activation of fly ash is a particular procedure in which ash resulting from a power plant, combined with a specific alkaline activator, creates a solid material when dried at a certain temperature. In order to obtain desirable compressive strengths, the mix design of fly ash based geopolymer pastes should be explored comprehensively. To determine the preliminary compressive strength of fly ash based geopolymer paste using a Romanian material source, various ratios of Na2SiO3 solution to NaOH solution were produced, keeping the fly ash/alkaline activator ratio constant. All the mixes were then cured at 70 °C for 24 hours and tested at 2 and 7 days, respectively. The aim of this paper is to present the preliminary compressive strength results for fly ash based geopolymer paste produced using Romanian material sources, the effect of the alkaline activator ratio on the compressive strength, and directions for future research.

  5. An Adaptive Joint Sparsity Recovery for Compressive Sensing Based EEG System

    Directory of Open Access Journals (Sweden)

    Hamza Djelouat

    2017-01-01

    Full Text Available The last decade has witnessed tremendous efforts to shape Internet of things (IoT) platforms to be well suited for healthcare applications. These platforms comprise a network of wireless sensors that monitor several physical and physiological quantities. For instance, long-term monitoring of brain activity using wearable electroencephalogram (EEG) sensors is widely exploited in the clinical diagnosis of epileptic seizures and sleeping disorders. However, the deployment of such platforms is challenged by high power consumption and system complexity. Energy efficiency can be achieved by exploring efficient compression techniques such as compressive sensing (CS). CS is an emerging theory that enables compressed acquisition using well-designed sensing matrices. Moreover, system complexity can be optimized by using hardware-friendly structured sensing matrices. This paper quantifies the performance of CS-based multichannel EEG monitoring. In addition, the paper exploits the joint sparsity of multichannel EEG using the subspace pursuit (SP) algorithm as well as a designed sparsifying basis in order to improve the reconstruction quality. Furthermore, the paper proposes a modification to the SP algorithm based on an adaptive selection approach to further improve the performance in terms of reconstruction quality, execution time, and the robustness of the recovery process.

  6. The Physics of Compressive Sensing and the Gradient-Based Recovery Algorithms

    OpenAIRE

    Dai, Qi; Sha, Wei

    2009-01-01

    The physics of compressive sensing (CS) and the gradient-based recovery algorithms are presented. First, the different forms of CS are summarized. Second, the physical meanings of coherence and measurement are given. Third, the gradient-based recovery algorithms and their geometric explanations are provided. Finally, we conclude the report and give some suggestions for future work.
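
    The report's algorithms are not reproduced in this record; a representative gradient-based recovery method is ISTA (proximal gradient descent on the lasso objective), sketched below with assumed problem sizes and regularization.

        # Sketch: ISTA minimizing 0.5*||Ax - y||^2 + lam*||x||_1.
        import numpy as np

        def ista(A, y, lam=0.05, iters=300):
            L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                g = A.T @ (A @ x - y)            # gradient of the smooth part
                z = x - g / L                    # gradient step
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
            return x

        rng = np.random.default_rng(6)
        A = rng.normal(size=(40, 120)) / np.sqrt(40)
        x0 = np.zeros(120)
        x0[[5, 77]] = [1.0, -0.8]
        print(np.flatnonzero(np.abs(ista(A, A @ x0)) > 0.1))  # expected: 5, 77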

  7. Compressive strength and hydrolytic stability of fly ash based geopolymers

    Directory of Open Access Journals (Sweden)

    Nikolić Irena

    2013-01-01

    Full Text Available The process of geopolymerization involves the reaction of solid aluminosilicate materials with a highly alkaline silicate solution, yielding an aluminosilicate inorganic polymer named geopolymer, which may be successfully applied in civil engineering as a replacement for cement. In this paper we have investigated the influence of the synthesis parameters: solid-to-liquid ratio, NaOH concentration, and the Na2SiO3/NaOH ratio, on the mechanical properties and hydrolytic stability of fly ash based geopolymers in distilled water, sea water, and simulated acid rain. The highest value of compressive strength was obtained using 10 mol dm-3 NaOH and a Na2SiO3/NaOH ratio of 1.5. Moreover, the results have shown that the mechanical properties of fly ash based geopolymers are correlated with their hydrolytic stability: factors that increase the compressive strength also increase the hydrolytic stability. The best hydrolytic stability of fly ash based geopolymers was observed in sea water, while the lowest stability was recorded in simulated acid rain. [Project of the Ministry of Science of the Republic of Serbia, No. 172054, and the Nanotechnology and Functional Materials Center, funded by the European FP7 project No. 245916]

  8. Spectral Interpolation on 3 x 3 Stencils for Prediction and Compression

    Energy Technology Data Exchange (ETDEWEB)

    Ibarria, L; Lindstrom, P; Rossignac, J

    2007-06-25

    Many scientific, imaging, and geospatial applications produce large high-precision scalar fields sampled on a regular grid. Lossless compression of such data is commonly done using predictive coding, in which weighted combinations of previously coded samples known to both encoder and decoder are used to predict subsequent nearby samples. In hierarchical, incremental, or selective transmission, the spatial pattern of the known neighbors is often irregular and varies from one sample to the next, which precludes prediction based on a single stencil and fixed set of weights. To handle such situations and make the best use of available neighboring samples, we propose a local spectral predictor that offers optimal prediction by tailoring the weights to each configuration of known nearby samples. These weights may be precomputed and stored in a small lookup table. We show through several applications that predictive coding using our spectral predictor improves compression for various sources of high-precision data.
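
    A simplified sketch of the idea: for each configuration (mask) of known 3x3 neighbors, prediction weights can be fitted and stored in a small lookup table. Here the weights come from least squares on a synthetic field rather than from the paper's spectral construction; the field, the neighbor set, and the mask are assumptions.

        # Sketch: per-configuration linear prediction weights on a 3x3 stencil.
        import numpy as np

        rng = np.random.default_rng(7)
        grid = np.cumsum(rng.normal(size=(64, 64)), axis=1)   # smooth-ish field

        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]       # causal neighbors
        mask = (True, True, False, True)                      # which are known

        # Gather (known neighbors -> center) pairs and fit weights for this mask.
        X, t = [], []
        for i in range(1, 63):
            for j in range(1, 63):
                X.append([grid[i + di, j + dj]
                          for (di, dj), m in zip(offsets, mask) if m])
                t.append(grid[i, j])
        w, *_ = np.linalg.lstsq(np.array(X), np.array(t), rcond=None)
        table = {mask: w}                                     # small lookup table
        print(table[mask])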

  9. Compression for radiological images

    Science.gov (United States)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.

  10. PET image reconstruction with rotationally symmetric polygonal pixel grid based highly compressible system matrix

    International Nuclear Information System (INIS)

    Yu Yunhan; Xia Yan; Liu Yaqiang; Wang Shi; Ma Tianyu; Chen Jing; Hong Baoyu

    2013-01-01

    To achieve maximum compression of the system matrix in positron emission tomography (PET) image reconstruction, we proposed a polygonal image pixel division strategy in accordance with the rotationally symmetric PET geometry. A geometrical definition and an indexing rule for the polygonal pixels were established. Image conversion from the polygonal pixel structure to the conventional rectangular pixel structure was implemented using a conversion matrix. A set of test images was analytically defined in the polygonal pixel structure, converted to conventional rectangular pixel based images, and correctly displayed, which verified the correctness of the image definition and of the conversion of the polygonal pixel structure. A compressed system matrix for PET image reconstruction was generated by the tap model and tested by forward-projecting three different distributions of radioactive sources to the sinogram domain and comparing them with theoretical predictions. On a practical small animal PET scanner, a compression ratio of 12.6:1 of the system matrix size was achieved with the polygonal pixel structure, compared with the conventional rectangular pixel based tap-mode one. OS-EM iterative image reconstruction algorithms with the polygonal and conventional Cartesian pixel grids were developed. A hot rod phantom was detected and reconstructed based on these two grids with reasonable time cost. The image resolution of the reconstructed images was 1.35 mm in both cases. We conclude that it is feasible to reconstruct and display images in a polygonal image pixel structure based on a compressed system matrix in PET image reconstruction. (authors)

  11. Compressed Sensing with Rank Deficient Dictionaries

    DEFF Research Database (Denmark)

    Hansen, Thomas Lundgaard; Johansen, Daniel Højrup; Jørgensen, Peter Bjørn

    2012-01-01

    In compressed sensing it is generally assumed that the dictionary matrix constitutes a (possibly overcomplete) basis of the signal space. In this paper we consider dictionaries that do not span the signal space, i.e. rank deficient dictionaries. We show that in this case the signal-to-noise ratio (SNR) in the compressed samples can be increased by selecting the rows of the measurement matrix from the column space of the dictionary. As an example application of compressed sensing with a rank deficient dictionary, we present a case study of compressed sensing applied to the Coarse Acquisition (C

  12. The Formation and Evolution of Shear Bands in Plane Strain Compressed Nickel-Base Superalloy

    Directory of Open Access Journals (Sweden)

    Bin Tang

    2018-02-01

    Full Text Available The formation and evolution of shear bands in the Inconel 718 nickel-base superalloy under plane strain compression were investigated in the present work. It is found that the propagation of shear bands under plane strain compression is more intense in comparison with conventional uniaxial compression. The morphology of the shear bands generally falls into two categories: an "S" shape at severe conditions (low temperatures and high strain rates) and an "X" shape at mild conditions (high temperatures and low strain rates). However, uniform deformation at the mesoscale without shear bands was also obtained by compressing at 1050 °C/0.001 s−1. By using the finite element method (FEM), the formation mechanism of the shear bands in the present study was explored for the special deformation mode of plane strain compression. Furthermore, the effect of processing parameters, i.e., strain rate and temperature, on the morphology and evolution of shear bands was discussed following a phenomenological approach. The plane strain compression attempt in the present work yields important information for processing parameter optimization and failure prediction under plane strain loading conditions of the Inconel 718 superalloy.

  13. Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics

    Science.gov (United States)

    Kohira, K.; Masuda, H.

    2017-09-01

    A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormously large. A large storage capacity is required to store such point-clouds, and heavy loads will be placed on the network if point-clouds are transferred through it. Therefore, it is desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then, the images are encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.
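
    A minimal sketch of the mapping-plus-PNG idea follows: scanner points are placed on a 2D grid (scan line from GPS time by angular step) and the range channel is stored as a 16-bit PNG, whose lossless DEFLATE coding does the compression. The grid geometry, quantization step, and random stand-in ranges are assumptions.

        # Sketch: range image -> 16-bit PNG (lossless apart from quantization).
        import numpy as np
        from PIL import Image

        rng = np.random.default_rng(8)
        rows, cols = 200, 360                         # scan lines x angle steps (assumed)
        ranges_m = rng.uniform(1.0, 80.0, (rows, cols))  # stand-in ranges [m]

        depth = np.round(ranges_m / 0.005).astype(np.uint16)  # 5 mm quantization
        Image.fromarray(depth).save("ranges.png")

        back = np.asarray(Image.open("ranges.png"), dtype=np.uint16) * 0.005
        print(float(np.abs(back - ranges_m).max()))   # bounded by half the step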

  14. POINT-CLOUD COMPRESSION FOR VEHICLE-BASED MOBILE MAPPING SYSTEMS USING PORTABLE NETWORK GRAPHICS

    Directory of Open Access Journals (Sweden)

    K. Kohira

    2017-09-01

    Full Text Available A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormously large. A large storage capacity is required to store such point-clouds, and heavy loads will be placed on the network if point-clouds are transferred through it. Therefore, it is desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. Then, the images are encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method could efficiently compress point-clouds without deteriorating the quality.

  15. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    Energy Technology Data Exchange (ETDEWEB)

    Di, Sheng; Cappello, Franck

    2018-01-01

    Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime to maximize the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
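
    The XOR-leading-zero idea can be sketched as follows: after subtracting a shared offset, two consecutive doubles are XORed bit-wise, and the offset is chosen so that the run of leading zero bits (which codes compactly) is as long as possible. The candidate offsets and sample values below are assumptions, not the paper's search procedure.

        # Sketch: choosing a shift that maximizes the XOR-leading-zero length.
        import struct

        def bits(x):
            return struct.unpack(">Q", struct.pack(">d", x))[0]

        def lz(a, b):
            """Leading-zero length of the XOR of two doubles' bit patterns."""
            x = bits(a) ^ bits(b)
            return 64 if x == 0 else 64 - x.bit_length()

        a, b = 102.341, 102.347
        best = max((lz(a - s, b - s), s)
                   for s in [0.0, 0.25, 0.5, 1.0, 2.0, 100.0])
        print(best)   # (leading-zero bits, chosen offset)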

  16. Configuring and Characterizing X-Rays for Laser-Driven Compression Experiments at the Dynamic Compression Sector

    Science.gov (United States)

    Li, Y.; Capatina, D.; D'Amico, K.; Eng, P.; Hawreliak, J.; Graber, T.; Rickerson, D.; Klug, J.; Rigg, P. A.; Gupta, Y. M.

    2017-06-01

    Coupling laser-driven compression experiments to the x-ray beam at the Dynamic Compression Sector (DCS) at the Advanced Photon Source (APS) of Argonne National Laboratory requires state-of-the-art x-ray focusing, pulse isolation, and diagnostics capabilities. The 100 J UV pulsed laser system can be fired once every 20 minutes, so precise alignment and focusing of the x-rays on each new sample must be fast and reproducible. Multiple Kirkpatrick-Baez (KB) mirrors are used to achieve a focal spot size as small as 50 μm at the target, while the strategic placement of scintillating screens, cameras, and detectors allows for fast diagnosis of the beam shape, intensity, and alignment of the sample to the x-ray beam. In addition, a series of x-ray choppers and shutters is used to ensure that the sample is exposed to only a single x-ray pulse (~80 ps) during the dynamic compression event, which requires highly precise synchronization. Details of the technical requirements, layout, and performance of these instruments will be presented. Work supported by DOE/NNSA.

  17. Statistical Analysis of Compression Methods for Storing Binary Image for Low-Memory Systems

    Directory of Open Access Journals (Sweden)

    Roman Slaby

    2013-01-01

    Full Text Available The paper is focused on the statistical comparison of selected compression methods used for the compression of binary images. The aim is to assess which of the presented compression methods for low-memory systems requires the smallest number of bytes of memory. Correlation functions are used to assess the success rate of converting the input image to a binary image; the correlation function is one of the methods of the OCR algorithm used for the digitization of printed symbols. The use of compression methods is necessary for systems based on low-power microcontrollers. Saving the data stream is very important for such systems with limited memory, as is the time required for decoding the compressed data. The success rate of the selected compression algorithms is evaluated using the basic characteristics of exploratory analysis. The examined samples represent the number of bytes needed to compress the test images, which represent alphanumeric characters.

  18. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Science.gov (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data are very sensitive and cannot tolerate any illegal change; an analysis based on an illegally changed image could result in a wrong medical decision. Digital watermarking techniques can be used to authenticate images and to detect, as well as recover, illegal changes made to teleradiology images. Watermarking of medical images with heavy-payload watermarks causes image perceptual degradation, which directly affects medical diagnosis. To maintain standard image perceptual and diagnostic quality during watermarking, the watermark should be losslessly compressed. This paper focuses on the watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression of the watermark reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and the image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio. LZW was found better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery.
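
    A compact sketch of the LZW compression step applied to a watermark payload is given below; the payload bytes are an arbitrary stand-in, and code-width packing and the decoder are omitted.

        # Sketch: classic LZW compression of a byte payload.
        def lzw_compress(data: bytes):
            table = {bytes([i]): i for i in range(256)}
            w, out = b"", []
            for c in data:
                wc = w + bytes([c])
                if wc in table:
                    w = wc
                else:
                    out.append(table[w])
                    table[wc] = len(table)    # grow the dictionary
                    w = bytes([c])
            if w:
                out.append(table[w])
            return out

        payload = b"ROI-bytes-ROI-bytes-ROI-bytes" * 4
        codes = lzw_compress(payload)
        print(len(payload), "bytes ->", len(codes), "codes")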

  19. An efficient algorithm for MR image reconstruction and compression

    International Nuclear Information System (INIS)

    Wang, Hang; Rosenfeld, D.; Braun, M.; Yan, Hong

    1992-01-01

    In magnetic resonance imaging (MRI), the original data are sampled in the spatial frequency domain. The sampled data thus constitute a set of discrete Fourier transform (DFT) coefficients. The image is usually reconstructed by taking the inverse DFT. The image data may then be efficiently compressed using the discrete cosine transform (DCT). A method of using the DCT to treat the sampled data is presented, which combines the two procedures of image reconstruction and data compression. This method may be particularly useful in medical picture archiving and communication systems, where both image reconstruction and compression are important issues. 11 refs., 3 figs
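
    The two combined steps can be sketched as follows: inverse-DFT reconstruction of the k-space samples, then DCT-domain compression by keeping only the largest coefficients. The toy phantom and the 10% retention rate are assumptions; the paper's combined transform is not reproduced here.

        # Sketch: inverse-DFT reconstruction followed by DCT compression.
        import numpy as np
        from scipy.fft import ifft2, dctn, idctn

        img = np.zeros((64, 64))
        img[20:44, 24:40] = 1.0                     # toy object
        kspace = np.fft.fft2(img)                   # "sampled" DFT data

        recon = np.real(ifft2(kspace))              # reconstruction step
        coef = dctn(recon, norm="ortho")            # compression step
        thresh = np.quantile(np.abs(coef), 0.90)    # keep the top 10%
        approx = idctn(np.where(np.abs(coef) >= thresh, coef, 0), norm="ortho")
        print(float(np.max(np.abs(approx - recon))))  # approximation error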

  20. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2013-01-01

    We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  1. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2015-01-01

    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  2. OpenCL-based vicinity computation for 3D multiresolution mesh compression

    Science.gov (United States)

    Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri

    2017-03-01

    3D multiresolution mesh compression systems are still widely addressed in many domains. These systems are more and more requiring volumetric data to be processed in real-time. Therefore, the performance is becoming constrained by material resources usage and an overall reduction in the computational time. In this paper, our contribution entirely lies on computing, in real-time, triangles neighborhood of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform(WT) technique. The originality of this latter algorithm is to compute the WT with minimum memory usage by processing data as they are acquired. However, with large data, this technique is considered poor in term of computational complexity. For that, this work exploits the GPU to accelerate the computation using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method can improve performance gain in speedup factor of 5 compared to the sequential CPU implementation.

  3. Shock compression of synthetic opal

    International Nuclear Information System (INIS)

    Inoue, A; Okuno, M; Okudera, H; Mashimo, T; Omurzak, E; Katayama, S; Koyano, M

    2010-01-01

    Structural changes in synthetic opal under shock-wave compression up to 38.1 GPa have been investigated using SEM, X-ray diffraction (XRD), and infrared (IR) and Raman spectroscopies. The information obtained may indicate that dehydration and polymerization of surface silanol, caused by the high shock and residual temperatures, are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses its opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in the as-synthesized opal may be relaxed into larger rings, such as 6-membered rings, by the high residual temperature; the residual temperature may therefore be significant even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence, which may originate from a layered structure produced by the shock compression. Finally, at 38.1 GPa the sample fuses due to the very high residual temperature and its structure approaches that of fused SiO2 glass; however, internal silanol groups still remain even at 38.1 GPa.

  4. Shock compression of synthetic opal

    Science.gov (United States)

    Inoue, A.; Okuno, M.; Okudera, H.; Mashimo, T.; Omurzak, E.; Katayama, S.; Koyano, M.

    2010-03-01

    Structural changes in synthetic opal under shock-wave compression up to 38.1 GPa have been investigated using SEM, X-ray diffraction (XRD), and infrared (IR) and Raman spectroscopies. The information obtained may indicate that dehydration and polymerization of surface silanol, caused by the high shock and residual temperatures, are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses its opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in the as-synthesized opal may be relaxed into larger rings, such as 6-membered rings, by the high residual temperature; the residual temperature may therefore be significant even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence, which may originate from a layered structure produced by the shock compression. Finally, at 38.1 GPa the sample fuses due to the very high residual temperature and its structure approaches that of fused SiO2 glass; however, internal silanol groups still remain even at 38.1 GPa.

  5. Shock compression of synthetic opal

    Energy Technology Data Exchange (ETDEWEB)

    Inoue, A; Okuno, M; Okudera, H [Department of Earth Sciences, Kanazawa University Kanazawa, Ishikawa, 920-1192 (Japan); Mashimo, T; Omurzak, E [Shock Wave and Condensed Matter Research Center, Kumamoto University, Kumamoto, 860-8555 (Japan); Katayama, S; Koyano, M, E-mail: okuno@kenroku.kanazawa-u.ac.j [JAIST, Nomi, Ishikawa, 923-1297 (Japan)

    2010-03-01

    Structural changes in synthetic opal under shock-wave compression up to 38.1 GPa have been investigated using SEM, X-ray diffraction (XRD), and infrared (IR) and Raman spectroscopies. The information obtained may indicate that dehydration and polymerization of surface silanol, caused by the high shock and residual temperatures, are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses its opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in the as-synthesized opal may be relaxed into larger rings, such as 6-membered rings, by the high residual temperature; the residual temperature may therefore be significant even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence, which may originate from a layered structure produced by the shock compression. Finally, at 38.1 GPa the sample fuses due to the very high residual temperature and its structure approaches that of fused SiO2 glass; however, internal silanol groups still remain even at 38.1 GPa.

  6. Quasi Gradient Projection Algorithm for Sparse Reconstruction in Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Xin Meng

    2014-02-01

    Full Text Available Compressed sensing is a novel signal-sampling theory for signals that are sparse or compressible. Existing recovery algorithms based on gradient projection either need prior knowledge or recover the signal poorly. In this paper, a new algorithm based on gradient projection, referred to as Quasi Gradient Projection, is proposed. The algorithm uses a quasi-gradient direction and two step-size schemes along this direction, and it does not need any prior knowledge of the original signal. Simulation results demonstrate that the presented algorithm recovers the signal more accurately than GPSR, which also needs no prior knowledge, while having lower computational complexity.
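
    For readers unfamiliar with this family of solvers, the sketch below shows a generic proximal-gradient (ISTA-style) method for the l1-regularized least-squares problem. It is not the authors' Quasi Gradient Projection algorithm (the quasi-gradient direction and two step-size schemes are not reproduced here), and the problem sizes are illustrative.

    ```python
    # Generic ISTA-style sparse recovery sketch (not the paper's algorithm).
    import numpy as np

    def ista(A, y, lam=0.05, n_iter=500):
        """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient steps."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, L = Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - step * (A.T @ (A @ x - y))          # gradient step on data term
            x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(1)
    n, m, k = 256, 100, 8                               # length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    x_hat = ista(A, A @ x_true)
    print("recovery error:", np.linalg.norm(x_hat - x_true))
    ```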

  7. A checkpoint compression study for high-performance computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Ibtesham, Dewan [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science; Ferreira, Kurt B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Scalable System Software Dept.; Arnold, Dorian [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science

    2015-02-17

    As high-performance computing systems continue to increase in size and complexity, higher failure rates and increased overheads for checkpoint/restart (CR) protocols have raised concerns about the practical viability of CR protocols for future systems. Previously, compression has proven to be a viable approach for reducing checkpoint data volumes and, thereby, reducing CR protocol overhead, leading to improved application performance. In this article, we further explore compression-based CR optimization by examining its baseline performance and scaling properties, evaluating whether improved compression algorithms might lead to even better application performance, and comparing checkpoint compression against and alongside other software- and hardware-based optimizations. Our key results are: (1) compression is a very viable CR optimization; (2) generic, text-based compression algorithms appear to perform near-optimally for checkpoint data compression, and faster compression algorithms will not lead to better application performance; (3) compression-based optimizations fare well against and alongside other software-based optimizations; and (4) while hardware-based optimizations outperform software-based ones, they are not as cost-effective.

  8. Web-based tool for subjective observer ranking of compressed medical images

    Science.gov (United States)

    Langer, Steven G.; Stewart, Brent K.; Andrew, Rex K.

    1999-05-01

    In the course of evaluating various compression schemes for ultrasound teleradiology applications, it became obvious that paper-based methods of data collection were time consuming and error prone. A method was sought that allowed participating radiologists to view the ultrasound video clips (compressed to varying degrees) at their desks. Furthermore, the method should allow observers to enter their evaluations and, when finished, automatically submit the data to our statistical analysis engine. We found that the World Wide Web offered a ready solution. A web page was constructed that contains 18 embedded AVI video clips. The 18 clips represent 6 distinct anatomical areas, compressed by various methods and amounts, and randomly distributed through the web page. To the right of each video, a series of questions asks the observer to rank (1-5) his/her ability to answer diagnostically relevant questions. When completed, the observer presses 'Submit' and a file of tab-delimited text is created, which can then be imported into an Excel workbook. Kappa analysis is then performed, and the resulting plots demonstrate observer preferences.

  9. Photonic compressive sensing enabled data efficient time stretch optical coherence tomography

    Science.gov (United States)

    Mididoddi, Chaitanya K.; Wang, Chao

    2018-03-01

    Photonic time stretch (PTS) has enabled real-time spectral-domain optical coherence tomography (OCT). However, this method generates a torrent of massive data at GHz stream rates, which must be captured at the Nyquist rate. If the OCT interferogram signal is sparse in the Fourier domain, which is always true for samples with a limited number of layers, it can be captured at a lower (sub-Nyquist) acquisition rate using the compressive sensing method. In this work we report a data-compressed PTS-OCT system based on photonic compressive sensing, achieving 66% compression with a low acquisition rate of 50 MHz and a measurement speed of 1.51 MHz per depth profile. A new method is also proposed to improve the system with all-optical random pattern generation, which completely avoids the electronic bottleneck of traditional pseudorandom binary sequence (PRBS) generators.

  10. Strength and deformability of compressed concrete elements with various types of non-metallic fiber and rods reinforcement under static loading

    Science.gov (United States)

    Nevskii, A. V.; Baldin, I. V.; Kudyakov, K. L.

    2015-01-01

    The adoption of modern building materials based on non-metallic fibers and their application in concrete structures represent an important issue in the construction industry. This paper presents the results of an investigation of several types of raw materials: basalt fiber, carbon fiber, and composite fiber rods based on glass and carbon. Preliminary testing showed that these raw materials can be used effectively in compressed concrete elements. The experimental program for determining the strength and deformability of compressed concrete elements with non-metallic fiber reinforcement and composite rod reinforcement included the design, manufacture, and testing of several types of concrete samples with different types of fiber and longitudinal rod reinforcement. The samples were tested under compressive static load. The results demonstrated that fiber reinforcement allows the carrying capacity of compressed concrete elements to be increased and their deformability to be reduced. Using composite instead of steel longitudinal reinforcement in compressed concrete elements has an insignificant influence on bearing capacity. The combined use of composite rod reinforcement and fiber reinforcement in compressed concrete elements achieves maximum strength and minimum deformability.

  11. A Review On Segmentation Based Image Compression Techniques

    Directory of Open Access Journals (Sweden)

    S.Thayammal

    2013-11-01

    Full Text Available Abstract - The storage and transmission of imagery have become more challenging tasks in the current scenario of multimedia applications. Hence, an efficient compression scheme is highly essential for imagery, one which reduces the requirements on storage media and transmission bandwidth. Besides improving performance, compression techniques must also converge quickly in order to be applicable to real-time applications. Various algorithms have been developed for image compression, but each has its own pros and cons. Here, an extensive analysis of existing methods is performed, and their use in developing novel techniques that address the challenging tasks of image storage and transmission in multimedia applications is highlighted.

  12. On the implicit density based OpenFOAM solver for turbulent compressible flows

    Science.gov (United States)

    Fürst, Jiří

    The contribution deals with the development of a coupled implicit density-based solver for compressible flows in the framework of the open source package OpenFOAM. Although the standard distribution of OpenFOAM contains several ready-made segregated solvers for compressible flows, the performance of those solvers is rather weak in the case of transonic flows. We therefore extend the work of Shen [15] and develop an implicit semi-coupled solver. The main flow field variables are updated using the lower-upper symmetric Gauss-Seidel (LU-SGS) method, whereas the turbulence model variables are updated using the implicit Euler method.

  13. Pan-sharpening via compressed superresolution reconstruction and multidictionary learning

    Science.gov (United States)

    Shi, Cheng; Liu, Fang; Li, Lingling; Jiao, Licheng; Hao, Hongxia; Shang, Ronghua; Li, Yangyang

    2018-01-01

    In recent compressed sensing (CS)-based pan-sharpening algorithms, performance is affected by two key problems. One is that there are always errors between the high-resolution panchromatic (HRP) image and the linearly weighted high-resolution multispectral (HRM) image, resulting in a loss of spatial and spectral information. The other is that the dictionary construction process depends on non-truth training samples. These problems have limited the application of CS-based pan-sharpening algorithms. To solve them, we propose a pan-sharpening algorithm via compressed superresolution reconstruction and multidictionary learning. Through a two-stage implementation, the compressed superresolution reconstruction model effectively reduces the error between the HRP and linearly weighted HRM images. Meanwhile, a multidictionary based on ridgelets and curvelets is learned for both stages of the superresolution reconstruction process. Since ridgelets and curvelets can better capture structural and directional characteristics, a better reconstruction result can be obtained. Experiments are performed on QuickBird and IKONOS satellite images. The results indicate that the proposed algorithm is competitive with recent CS-based pan-sharpening methods and other well-known methods.

  14. Study and analysis of wavelet based image compression techniques

    African Journals Online (AJOL)

    user

    Discrete Wavelet Transform (DWT) is a recently developed compression ... serve emerging areas of mobile multimedia and internet communication, ..... In global thresholding the best trade-off between PSNR and compression is provided by.

  15. Astronomical Image Compression Techniques Based on ACC and KLT Coder

    Directory of Open Access Journals (Sweden)

    J. Schindler

    2011-01-01

    Full Text Available This paper deals with the compression of image data in astronomy applications. Astronomical images have typical specific properties: high grayscale bit depth, large size, noise occurrence, and special processing algorithms. They belong to the class of scientific images, and their processing and compression are quite different from the classical approach of multimedia image processing. The database of images from BOOTES (Burst Observer and Optical Transient Exploring System), a Czech-Spanish robotic telescope for observing AGN (active galactic nuclei) and searching for the optical transients of GRBs (gamma-ray bursts), has been chosen as the source of test signals. This paper discusses an approach based on an analysis of the statistical properties of image data. A comparison of two irrelevancy-reduction methods is presented from a scientific (astrometric and photometric) point of view: the first is based on a statistical approach, using the Karhunen-Loeve transform (KLT) with uniform quantization in the spectral domain; the second is derived from wavelet decomposition with adaptive selection of the prediction coefficients used. Finally, a comparison of three redundancy-reduction methods is discussed: the multimedia format JPEG2000 and HCOMPRESS, designed especially for astronomical images, are compared with the new Astronomical Context Coder (ACC), based on adaptive median regression.
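
    The first irrelevancy-reduction method can be sketched compactly: the KLT basis is the set of eigenvectors of the data covariance matrix, and uniform quantization is applied in that spectral domain. The random stand-in blocks and quantization step below are illustrative, not the paper's coder.

    ```python
    # KLT (PCA) + uniform quantization sketch on flattened 8x8 image blocks.
    import numpy as np

    rng = np.random.default_rng(2)
    blocks = rng.random((1000, 64))            # stand-in training blocks

    mean = blocks.mean(axis=0)
    centered = blocks - mean
    cov = centered.T @ centered / len(blocks)
    eigvals, eigvecs = np.linalg.eigh(cov)     # KLT basis = covariance eigenvectors
    basis = eigvecs[:, ::-1]                   # order by decreasing variance

    spectra = centered @ basis                 # transform to the KLT domain
    q = 0.05                                   # hypothetical quantization step
    quantized = np.round(spectra / q) * q      # uniform quantization
    restored = quantized @ basis.T + mean      # inverse KLT (basis is orthonormal)
    print("RMSE:", np.sqrt(np.mean((restored - blocks) ** 2)))
    ```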

  16. Evaluation of simulation-based training on the ability of birth attendants to correctly perform bimanual compression as obstetric first aid.

    Science.gov (United States)

    Andreatta, Pamela; Gans-Larty, Florence; Debpuur, Domitilla; Ofosu, Anthony; Perosky, Joseph

    2011-10-01

    Maternal mortality from postpartum hemorrhage remains high globally, in large part because women give birth in rural communities where unskilled providers (traditional birth attendants) care for delivering mothers. Traditional attendants are neither trained nor equipped to recognize or manage postpartum hemorrhage as a life-threatening emergent condition. Recommended treatment includes using uterotonic agents and physical manipulation to aid uterine contraction. In resource-limited areas where obstetric first aid may be the only care option, physical methods such as bimanual uterine compression are easily taught, highly practical and, if performed correctly, highly effective. A simulator with objective performance feedback was designed to teach skilled and unskilled birth attendants to perform the technique. The aim was to evaluate the impact of simulation-based training on the ability of birth attendants to correctly perform bimanual compression in response to postpartum hemorrhage from uterine atony. Simulation-based training was conducted for skilled (N=111) and unskilled (N=14) birth attendants at two regional (Kumasi, Tamale) and two district (Savelugu, Sene) medical centers in Ghana. Training was evaluated using Kirkpatrick's 4-level model. All participants significantly increased their bimanual uterine compression skills after training (p=0.000). There were no significant differences between 2-week delayed post-test performances, indicating retention (p=0.52). Applied behavioral and clinical outcomes were reported for 9 months from a subset of birth attendants in Sene District: 425 births and 13 postpartum hemorrhages were reported, without concomitant maternal mortality. The results of this study suggest that simulation-based training for skilled and unskilled birth attendants to perform bimanual uterine compression as postpartum hemorrhage obstetric first aid leads to improved applied procedural skills. Results from a smaller subset of the sample suggest that these skills

  17. Compressed Sensing, Pseudodictionary-Based, Superresolution Reconstruction

    Directory of Open Access Journals (Sweden)

    Chun-mei Li

    2016-01-01

    Full Text Available The spatial resolution of digital images is the critical factor that affects photogrammetry precision. Single-frame superresolution image reconstruction is a typical underdetermined inverse problem. To solve this type of problem, a compressive sensing, pseudodictionary-based superresolution reconstruction method is proposed in this study. The proposed method achieves pseudodictionary learning with an available low-resolution image using the K-SVD algorithm, which is based on the sparse characteristics of the digital image. The sparse representation coefficients of the low-resolution image are then obtained by solving the l0-norm minimization problem, and the sparse coefficients and the high-resolution pseudodictionary are used to reconstruct image tiles with high resolution. Finally, single-frame-image superresolution reconstruction is achieved. The proposed method is applied to photogrammetric images, and the experimental results indicate that it effectively increases image resolution, increases image information content, and achieves superresolution reconstruction. The reconstructed results are better than those obtained from traditional interpolation methods in terms of visual effects and quantitative indicators.

  18. Compression-based geometric pattern discovery in music

    DEFF Research Database (Denmark)

    Meredith, David

    2014-01-01

    The purpose of musical analysis is to find the best possible explanations for musical objects, where such objects may range from single chords or phrases to entire musical corpora. Kolmogorov complexity theory suggests that the best possible explanation for an object is represented by the shortest possible description of it. Two compression algorithms, COSIATEC and SIATECCompress, are described that take point-set representations of musical objects as input and generate compressed encodings of these point sets as output. The algorithms were evaluated on a task in which 360 folk songs were classified...

  19. Dynamic failure of dry and fully saturated limestone samples based on incubation time concept

    Directory of Open Access Journals (Sweden)

    Yuri V. Petrov

    2017-02-01

    Full Text Available This paper outlines the results of an experimental study of dynamic rock failure, based on a comparison of dry and saturated limestone samples in dynamic compression and split tests. The tests were performed using the Kolsky method and its modifications for dynamic splitting. The mechanical data (e.g., strength, time, and energy characteristics) of this material at high strain rates are obtained, and it is shown that these characteristics are sensitive to the strain rate. A unified interpretation of these rate effects, based on the structural-temporal approach, is presented. It is demonstrated that the temporal dependence of the dynamic compressive and split tensile strengths of dry and saturated limestone samples can be predicted by the incubation time criterion. Previously discovered possibilities to optimize (minimize) the energy input for the failure process are discussed in connection with industrial rock-failure processes. It is shown that the optimal energy input value associated with the critical load required to initialize failure in the rock medium strongly depends on the incubation time and the impact duration. The optimal load shapes, which minimize the momentum for a single failure impact, are demonstrated. A possible approach to reducing the specific energy required for rock cutting by means of high-frequency vibrations is also discussed.

  20. A lossless multichannel bio-signal compression based on low-complexity joint coding scheme for portable medical devices.

    Science.gov (United States)

    Kim, Dong-Sun; Kwon, Jin-San

    2014-09-18

    Research on real-time health systems has received great attention in recent years, and the need for high-quality personal multichannel medical-signal compression for personal medical product applications is increasing. The international MPEG-4 Audio Lossless Coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared-multiplier scheme for portable devices. A joint-coding decision method and a reference-channel selection scheme are modified for the low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross-correlation of residual signals and the compression ratio. The reference-channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72%, compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to a single-channel-based biosignal lossless data compressor.

  1. Compression-based aggregation model for medical web services.

    Science.gov (United States)

    Al-Shammary, Dhiah; Khalil, Ibrahim

    2010-01-01

    Many organizations such as hospitals have adopted Cloud Web services for their network services, to avoid investing heavily in computing infrastructure. SOAP (Simple Object Access Protocol), the basic communication protocol of Cloud Web services, is an XML-based protocol. Web services often suffer congestion and bottlenecks as a result of the high network traffic caused by the large XML overhead; at the same time, the massive load on Cloud Web services, in terms of the large volume of client requests, results in the same problem. In this paper, two XML-aware aggregation techniques based on compression concepts are proposed to aggregate medical Web messages and achieve greater message-size reduction.

  2. Adaptive compressive learning for prediction of protein-protein interactions from primary sequence.

    Science.gov (United States)

    Zhang, Ya-Nan; Pan, Xiao-Yong; Huang, Yan; Shen, Hong-Bin

    2011-08-21

    Protein-protein interactions (PPIs) play an important role in biological processes. Although much effort has been devoted to the identification of novel PPIs by integrating experimental biological knowledge, many difficulties remain because sufficient protein structural and functional information is lacking. It is therefore highly desirable to develop methods for predicting PPIs based only on amino acid sequences. However, sequence-based predictors often struggle with high dimensionality, which causes over-fitting and high computational complexity, as well as with the redundancy of sequential feature vectors. In this paper, a novel computational approach based on compressed sensing theory is proposed to predict yeast Saccharomyces cerevisiae PPIs from primary sequence, and it has achieved promising results. The key advantage of the proposed compressed sensing algorithm is that it can compress the original high-dimensional protein sequential feature vector into a much lower-dimensional but more condensed space, taking the sparsity property of the original signal into account. What makes compressed sensing even more attractive in protein sequence analysis is that the compressed signal can be reconstructed from far fewer measurements than are usually considered necessary in traditional Nyquist sampling theory. Experimental results demonstrate that the proposed compressed sensing method is powerful for analyzing noisy biological data and reducing redundancy in feature vectors. The proposed method represents a new strategy for dealing with high-dimensional protein discrete models and has great potential to be extended to many other complicated biological systems. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Dynamic compressive properties and failure mechanism of glass fiber reinforced silica hydrogel

    International Nuclear Information System (INIS)

    Yang Jie; Li Shukui; Yan Lili; Huo Dongmei; Wang Fuchi

    2010-01-01

    The dynamic compressive properties of glass fiber reinforced silica (GFRS) hydrogel were investigated using a split Hopkinson pressure bar, and the failure mechanism of GFRS hydrogel was studied by scanning electron microscopy (SEM). Results showed that the dynamic compressive stresses were much higher than the quasi-static compressive stresses at the same strain. The dynamic compressive strength was directly proportional to the strain rate for the same sample dimensions, and directly proportional to the sample basal area at the same strain rate. The dynamic compressive failure strain was small. At high strain rates, glass fibers broke and separated from the matrix, and pores shrank rapidly. Failure resulted from the increase of lateral tensile stress in the hydrogel under dynamic compression.

  4. Mammography parameters: compression, dose, and discomfort

    International Nuclear Information System (INIS)

    Blanco, S.; Di Risio, C.; Andisco, D.; Rojas, R.R.; Rojas, R.M.

    2017-01-01

    Objective: To confirm the importance of compression in mammography and relate it to the discomfort expressed by patients. Materials and methods: Two samples of 402 and 268 mammograms were obtained from two diagnostic centres that use the same mammographic equipment but different compression techniques. The patient age range was from 21 to 50 years. (authors) [es

  5. A new chest compression depth feedback algorithm for high-quality CPR based on smartphone.

    Science.gov (United States)

    Song, Yeongtak; Oh, Jaehoon; Chee, Youngjoon

    2015-01-01

    Although many smartphone applications (apps) provide education and guidance for basic life support, they do not commonly provide feedback on chest compression depth (CCD) and rate, and the accuracy of such feedback has not been validated to date. This study was a feasibility assessment of the use of the smartphone as a CCD feedback device. We propose the concept of a new real-time CCD estimation algorithm using a smartphone and evaluate its accuracy. Using double integration of the acceleration signal obtained from the smartphone's accelerometer, we estimated the CCD in real time, removing the accelerometer's bias error based on the signal's periodicity. To evaluate the instrument's accuracy, we used a potentiometer as the reference depth measurement. The evaluation experiments included three levels of CCD (insufficient, adequate, and excessive) and four types of grasping orientations with various compression directions. We used the difference between the reference measurement and the estimated depth as the error, calculated for each compression. When chest compressions of adequate depth were performed on a patient lying on a flat floor, the mean (standard deviation) of the errors was 1.43 (1.00) mm; on an oblique floor, it was 3.13 (1.88) mm. This error in CCD estimation is tolerable for the algorithm to be used in a smartphone-based CCD feedback app targeting compressions of more than 51 mm, the depth recommended by the 2010 American Heart Association guidelines.
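
    The estimation idea (double integration of acceleration, with the bias removed by exploiting periodicity) can be sketched as below. The synthetic motion, sampling rate, and the particular de-biasing trick of enforcing zero mean per cycle are illustrative assumptions; the paper's bias-removal step may differ in detail.

    ```python
    # Depth-from-acceleration sketch: double integration with periodicity de-biasing.
    import numpy as np

    fs = 100.0                                   # accelerometer sample rate (Hz)
    t = np.arange(0, 1.0, 1 / fs)                # two full 2 Hz compression cycles
    accel = -(2 * np.pi * 2) ** 2 * 0.025 * np.sin(2 * np.pi * 2 * t)  # 50 mm p-p motion
    accel += 0.05                                # constant sensor bias

    accel = accel - accel.mean()                 # periodicity => zero mean per cycle
    velocity = np.cumsum(accel) / fs
    velocity -= velocity.mean()                  # same trick at the velocity stage
    displacement = np.cumsum(velocity) / fs
    depth_mm = (displacement.max() - displacement.min()) * 1000
    print(f"estimated compression depth: {depth_mm:.1f} mm (true: 50 mm)")
    ```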

  6. Detection of rebars in concrete using advanced ultrasonic pulse compression techniques.

    Science.gov (United States)

    Laureti, S; Ricci, M; Mohamed, M N I B; Senni, L; Davis, L A J; Hutchins, D A

    2018-04-01

    A pulse compression technique has been developed for the non-destructive testing of concrete samples. Scattering of signals from aggregate has historically been a problem in such measurements. Here, it is shown that a combination of piezocomposite transducers, pulse compression and post processing can lead to good images of a reinforcement bar at a cover depth of 55 mm. This has been achieved using a combination of wide bandwidth operation over the 150-450 kHz range, and processing based on measuring the cumulative energy scattered back to the receiver. Results are presented in the form of images of a 20 mm rebar embedded within a sample containing 10 mm aggregate. Copyright © 2017 Elsevier B.V. All rights reserved.
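
    The underlying pulse-compression operation is matched filtering: cross-correlating the received echo with the transmitted coded waveform concentrates each reflection's energy into a sharp peak, lifting it above the aggregate scatter. The sketch below uses the paper's 150-450 kHz band, but the sampling rate, chirp duration, echo delays, amplitudes, and noise level are synthetic.

    ```python
    # Matched-filter pulse compression sketch over a 150-450 kHz chirp.
    import numpy as np
    from scipy.signal import chirp, correlate

    fs = 5e6                                          # 5 MHz sampling (assumed)
    t = np.arange(0, 200e-6, 1 / fs)                  # 200 us coded excitation
    tx = chirp(t, f0=150e3, t1=t[-1], f1=450e3)       # wideband linear chirp

    echo = np.zeros(4 * len(t))
    for delay_s, amp in [(80e-6, 1.0), (230e-6, 0.4)]:    # synthetic reflectors
        i = int(delay_s * fs)
        echo[i:i + len(t)] += amp * tx
    echo += 0.5 * np.random.default_rng(3).normal(size=echo.size)

    compressed = correlate(echo, tx, mode="valid")    # cross-correlation = matched filter
    peak_delay = np.argmax(np.abs(compressed)) / fs
    print(f"strongest reflector at {peak_delay * 1e6:.1f} us")
    ```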

  7. Experimental research of the influence of the strength of ore samples on the parameters of an electromagnetic signal during acoustic excitation in the process of uniaxial compression

    Science.gov (United States)

    Yavorovich, L. V.; Bespal`ko, A. A.; Fedotov, P. I.

    2018-01-01

    Parameters of the electromagnetic responses (EMRe) generated during uniaxial compression of rock samples under excitation by deterministic acoustic pulses are presented and discussed. Such physical modeling in the laboratory makes it possible to reveal the main regularities of electromagnetic signal (EMS) generation in rock massifs. The influence of the samples' mechanical properties on the parameters of the EMRe excited by an acoustic signal during uniaxial compression is considered. It has been established that sulfides and quartz in the rocks of the Tashtagol iron-ore deposit (Western Siberia, Russia) contribute to the conversion of mechanical energy into electromagnetic-field energy, which is expressed as an increase in the EMS amplitude. A decrease in the EMS amplitude with changes in the stress-strain state of the sample during uniaxial compression is observed when the amount of conductive magnetite in the rock increases. The results obtained are important for the physical substantiation of testing methods and for monitoring changes in the stress-strain state of rock massifs via the parameters of electromagnetic signals and the characteristics of electromagnetic emission.

  8. Magni: A Python Package for Compressive Sampling and Reconstruction of Atomic Force Microscopy Images

    Directory of Open Access Journals (Sweden)

    Christian Schou Oxvig

    2014-10-01

    Full Text Available Magni is an open source Python package that embraces compressed sensing and Atomic Force Microscopy (AFM) imaging techniques. It provides AFM-specific functionality for undersampling and reconstructing images from AFM equipment, thereby accelerating the acquisition of AFM images. Magni also provides researchers in compressed sensing with a selection of algorithms for reconstructing undersampled general images, and offers a consistent and rigorous way to efficiently evaluate researchers' own reconstruction algorithms in terms of phase transitions. The package also serves as a convenient platform for researchers in compressed sensing aiming at a high degree of reproducibility of their research.

  9. Effective Low-Power Wearable Wireless Surface EMG Sensor Design Based on Analog-Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Mohammadreza Balouchestani

    2014-12-01

    Full Text Available Surface Electromyography (sEMG) is a non-invasive measurement process that does not involve tools or instruments that break the skin or physically enter the body to investigate and evaluate the muscular activity produced by skeletal muscles. The main drawbacks of existing sEMG systems are: (1) they are not able to provide real-time monitoring; (2) they suffer from long processing times and low speed; and (3) they are not effective for wireless healthcare systems because they consume considerable power. In this work, we present an analog-based Compressed Sensing (CS) architecture, which consists of three novel algorithms for the design and implementation of a wearable wireless sEMG bio-sensor. At the transmitter side, two new algorithms apply the analog-CS theory before the Analog-to-Digital Converter (ADC). At the receiver side, a robust reconstruction algorithm based on a combination of ℓ1-ℓ1 optimization and the Block Sparse Bayesian Learning (BSBL) framework reconstructs the original bio-signals from the compressed ones. The proposed architecture allows the sampling rate to be reduced to 25% of the Nyquist Rate (NR). In addition, it reduces the power consumption to 40%, the Percentage Residual Difference (PRD) to 24%, the Root Mean Squared Error (RMSE) to 2%, and the computation time from 22 s to 9.01 s, which provides a good basis for establishing wearable wireless healthcare systems. The proposed architecture achieves robust reconstruction performance at low Signal-to-Noise Ratio (SNR).

  10. USING H.264/AVC-INTRA FOR DCT BASED SEGMENTATION DRIVEN COMPOUND IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    S. Ebenezer Juliet

    2011-08-01

    Full Text Available This paper presents a one-pass block classification algorithm for efficient coding of compound images, which consist of multimedia elements such as text, graphics, and natural images. The objective is to minimize the loss of visual quality of text during compression by separating text information, which needs higher spatial resolution than pictures and background. The algorithm segments computer-screen images into text/graphics and picture/background classes based on the DCT energy in each 4x4 block, and then compresses both text/graphics pixels and picture/background blocks with H.264/AVC using a variable quantization parameter. Experimental results show that the single H.264/AVC-INTRA coder with variable quantization outperforms single coders such as JPEG and JPEG-2000 for compound images. The proposed method also improves the PSNR value significantly over standard JPEG and JPEG-2000 while keeping competitive compression ratios.
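
    The classification step itself can be sketched directly: take the 4x4 DCT of each block and compare its AC energy against a threshold, since sharp text edges concentrate far more AC energy than smooth picture regions. The threshold and the random stand-in image below are hypothetical; the paper's decision rule may be more elaborate.

    ```python
    # Sketch: label 4x4 blocks as text/graphics vs picture/background by DCT AC energy.
    import numpy as np
    from scipy.fft import dctn

    def classify_blocks(img, threshold=50.0):
        """Return a boolean map: True = text/graphics, False = picture/background."""
        h, w = img.shape
        labels = np.zeros((h // 4, w // 4), dtype=bool)
        for i in range(0, h - h % 4, 4):
            for j in range(0, w - w % 4, 4):
                c = dctn(img[i:i + 4, j:j + 4].astype(float), norm="ortho")
                ac_energy = (c ** 2).sum() - c[0, 0] ** 2   # drop the DC term
                labels[i // 4, j // 4] = ac_energy > threshold
        return labels

    screen = np.random.default_rng(4).integers(0, 256, (64, 64))  # stand-in image
    print(classify_blocks(screen).mean(), "fraction flagged as text/graphics")
    ```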

  11. A high capacity text steganography scheme based on LZW compression and color coding

    Directory of Open Access Journals (Sweden)

    Aruna Malik

    2017-02-01

    Full Text Available In this paper, the capacity and security issues of text steganography are addressed by employing the LZW compression technique and a color-coding-based approach. The proposed technique uses the forward-mail platform to hide the secret data. The algorithm first compresses the secret data and then hides the compressed data in the email addresses and in the cover message of the email. The secret data bits are embedded in the message (or cover text) by coloring it according to a color-coding table. Experimental results show that the proposed method not only provides a high embedding capacity but also reduces computational complexity. Moreover, the security of the method is significantly improved by employing stego keys. The superiority of the proposed method has been experimentally verified by comparison with recently developed existing techniques.

  12. Low complexity lossless compression of underwater sound recordings.

    Science.gov (United States)

    Johnson, Mark; Partan, Jim; Hurst, Tom

    2013-03-01

    Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., flac) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.

  13. Method for Multiple Targets Tracking in Cognitive Radar Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Yang Jun

    2016-02-01

    Full Text Available A multiple-target cognitive radar tracking method based on Compressed Sensing (CS) is proposed. In this method, CS theory is introduced into the cognitive radar tracking process in a multiple-target scenario, and the echo signal is expressed sparsely. The designs of the sparse matrix and the measurement matrix are accomplished through this sparse representation, and the reconstruction of the measurement signal under the down-sampling condition is subsequently realized. On the receiving end, considering that the traditional particle filter suffers from degeneracy and requires a large number of particles, a particle swarm optimization particle filter is used to track the targets. On the transmitting end, the Posterior Cramér-Rao Bound (PCRB) on the tracking accuracy is derived, and the radar waveform parameters are further cognitively designed using the PCRB. Simulation results show that the proposed method can not only reduce the data quantity but also provide better tracking performance compared with the traditional method.

  14. Anisotropic Concrete Compressive Strength

    DEFF Research Database (Denmark)

    Gustenhoff Hansen, Søren; Jørgensen, Henrik Brøner; Hoang, Linh Cao

    2017-01-01

    When the load carrying capacity of existing concrete structures is (re-)assessed it is often based on compressive strength of cores drilled out from the structure. Existing studies show that the core compressive strength is anisotropic; i.e. it depends on whether the cores are drilled parallel...

  15. Quark enables semi-reference-based compression of RNA-seq data.

    Science.gov (United States)

    Sarkar, Hirak; Patro, Rob

    2017-11-01

    The past decade has seen an exponential increase in biological sequencing capacity, and there has been a simultaneous effort to help organize and archive some of the vast quantities of sequencing data that are being generated. Although these developments are tremendous from the perspective of maximizing the scientific utility of available data, they come with heavy costs: the storage and transmission of such vast amounts of sequencing data are expensive. We present Quark, a semi-reference-based compression tool designed for RNA-seq data. Quark makes use of a reference sequence when encoding reads, but produces a representation that can be decoded independently, without the need for a reference. This allows Quark to achieve markedly better compression rates than existing reference-free schemes, while still relieving the burden of assuming a specific, shared reference sequence between the encoder and decoder. We demonstrate that Quark achieves state-of-the-art compression rates, and that, typically, only a small fraction of the reference sequence must be encoded along with the reads to allow reference-free decompression. Quark is implemented in C++11, and is available under a GPLv3 license at www.github.com/COMBINE-lab/quark. rob.patro@cs.stonybrook.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  16. Experimental Research on Internal Behaviors of Caved Rocks under the Uniaxial Confined Compression

    Directory of Open Access Journals (Sweden)

    Yu-jiang Zhang

    2017-01-01

    Full Text Available As the main composition of a longwall gob, caved rocks' behaviors under compression, and their impacts, crucially influence strata control, subsidence, associated resource extraction, and many other aspects. However, owing to the looseness of caved rocks and the limitations of observation technology, current research treats the sample as a whole. In this paper, an experimental system was built to investigate the internal behaviors of a caved-rock sample under uniaxial confined compression, including movement and breakage behavior, using digital image processing technologies. The results show that the compression process of caved rocks can be divided into two stages by relative density. Boundary effects and changes in the voids and contact pressure among caved rocks lead to different movement laws at different positions in the sample's interior. A stratification phenomenon of breakage was discovered, with breakage concentrated in the middle of the sample. The nonlinear movement and shear dislocation induced by shifts among the caved rocks are the reason for this breakage stratification. The phenomenon would affect permeability and seepage research on similar media.

  17. Lossless compression of multispectral images using spectral information

    Science.gov (United States)

    Ma, Long; Shi, Zelin; Tang, Xusheng

    2009-10-01

    Multispectral images are available for different purposes owing to developments in spectral imaging systems. The sizes of multispectral images are enormous, so transmitting and storing these volumes of data require large time and memory resources; this is why compression algorithms must be developed. A salient property of multispectral images is that strong spectral correlation exists throughout almost all bands, a fact that can be used to predict each band from the previous bands. We propose to use spectral linear prediction and entropy coding with context modeling for encoding multispectral images. Linear prediction predicts the value of the next sample and computes the difference between the predicted and the original value. This difference is usually small, so it can be encoded with fewer bits than the original value. The technique predicts each image band using a number of bands along the image spectrum: each pixel is predicted using information provided by the pixels in the same spatial position in the previous bands. As in JPEG-LS, the proposed coder represents the mapped residuals using an adaptive Golomb-Rice code with context modeling. This residual coding is context-adaptive, where the context used for the current sample is identified by a context quantization function of three gradients; the context-dependent Golomb-Rice code and bias parameters are then estimated sample by sample. The proposed scheme was compared with three algorithms applied to the lossless compression of multispectral images, namely JPEG-LS, Rice coding, and JPEG2000. Simulation tests performed on AVIRIS images demonstrate that the proposed compression scheme is suitable for multispectral images.
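
    The residual-coding stage can be illustrated with a plain, non-adaptive Golomb-Rice coder: signed prediction residuals are folded onto the non-negative integers, then each value is split into a unary-coded quotient and a k-bit remainder. The sketch below fixes k and omits the context modeling and sample-by-sample parameter estimation described above.

    ```python
    # Plain Golomb-Rice coding sketch for mapped prediction residuals (fixed k).
    def rice_encode(residuals, k=2):
        """Encode signed residuals with a Golomb-Rice code of parameter k."""
        bits = []
        for r in residuals:
            m = 2 * r if r >= 0 else -2 * r - 1      # fold signed values to 0,1,2,...
            q, rem = m >> k, m & ((1 << k) - 1)
            bits.append("1" * q + "0")               # unary quotient, 0-terminated
            bits.append(format(rem, f"0{k}b"))       # k-bit binary remainder
        return "".join(bits)

    residuals = [0, -1, 3, 2, -2, 0, 1]              # typical small prediction errors
    code = rice_encode(residuals)
    print(len(code), "bits for", len(residuals), "residuals:", code)
    ```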

  18. Fast electron microscopy via compressive sensing

    Science.gov (United States)

    Larson, Kurt W; Anderson, Hyrum S; Wheeler, Jason W

    2014-12-09

    Various technologies described herein pertain to compressive sensing electron microscopy. A compressive sensing electron microscope includes a multi-beam generator and a detector. The multi-beam generator emits a sequence of electron patterns over time. Each of the electron patterns can include a plurality of electron beams, where the plurality of electron beams is configured to impart a spatially varying electron density on a sample. Further, the spatially varying electron density varies between each of the electron patterns in the sequence. Moreover, the detector collects signals respectively corresponding to interactions between the sample and each of the electron patterns in the sequence.

  19. CoGI: Towards Compressing Genomes as an Image.

    Science.gov (United States)

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transfer. It is desirable to compress data to reduce storage and transfer costs, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms/tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (short for Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences into a two-dimensional binary image (or bitmap) and then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors, GReEn and RLZ-opt, in both compression ratio and compression efficiency. It also achieves a comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM, a state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip, a general-purpose and widely used compressor, in both compression speed and compression ratio. Thus, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
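
    The core transform, mapping a nucleotide sequence onto a binary image, can be sketched as below. The 2-bits-per-base encoding and the row width are illustrative choices; CoGI's actual bitmap layout and its rectangular partition coding stage are not reproduced here.

    ```python
    # Sketch: pack a nucleotide string into a 2-bits-per-base binary image.
    import numpy as np

    BASE_BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

    def genome_to_bitmap(seq, width=8):
        """Return a (rows, width) uint8 bitmap; the last row is zero-padded."""
        bits = [b for base in seq for b in BASE_BITS[base]]
        bits += [0] * (-len(bits) % width)
        return np.array(bits, dtype=np.uint8).reshape(-1, width)

    print(genome_to_bitmap("ACGTACGTGATTACA"))
    ```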

  20. Pressure-Induced Changes in Inter-Diffusivity and Compressive Stress in Chemically Strengthened Glass

    DEFF Research Database (Denmark)

    Svenson, Mouritz Nolsøe; Thirion, Lynn M.; Youngman, Randall E.

    chamber to compress bulk glass samples isostatically up to 1 GPa at elevated temperature before or after the ion exchange treatment of an industrial sodium-magnesium aluminosilicate glass. Compression of the samples prior to ion exchange leads to a decreased Na+-K+ inter-diffusivity, increased compressive...

  1. Multiple-image encryption via lifting wavelet transform and XOR operation based on compressive ghost imaging scheme

    Science.gov (United States)

    Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-03-01

    A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, based on a row-scanning compressive ghost imaging scheme. In the encryption process, a scrambling operation is applied to the sparse images obtained by the LWT, the XOR operation is then performed on the scrambled images, and the resulting XOR images are compressed by row-scanning compressive ghost imaging, through which the ciphertext images can be detected by bucket detector arrays. During decryption, a participant who possesses the correct key-group can successfully reconstruct the corresponding plaintext image by measurement-key regeneration, compression-algorithm reconstruction, XOR operation, sparse-image recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.

  2. Secure biometric image sensor and authentication scheme based on compressed sensing.

    Science.gov (United States)

    Suzuki, Hiroyuki; Suzuki, Masamichi; Urabe, Takuya; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2013-11-20

    It is important to ensure the security of biometric authentication information, because its leakage causes serious risks, such as replay attacks using the stolen biometric data, and also because it is almost impossible to replace raw biometric information. In this paper, we propose a secure biometric authentication scheme that protects such information by employing an optical data ciphering technique based on compressed sensing. The proposed scheme is based on two-factor authentication, the biometric information being supplemented by secret information that is used as a random seed for a cipher key. In this scheme, a biometric image is optically encrypted at the time of image capture, and a pair of restored biometric images for enrollment and verification are verified in the authentication server. If any of the biometric information is exposed to risk, it can be reenrolled by changing the secret information. Through numerical experiments, we confirm that finger vein images can be restored from the compressed sensing measurement data. We also present results that verify the accuracy of the scheme.

  3. DETERMINING OPTIMAL CUBE FOR 3D-DCT BASED VIDEO COMPRESSION FOR DIFFERENT MOTION LEVELS

    Directory of Open Access Journals (Sweden)

    J. Augustin Jacob

    2012-11-01

    Full Text Available This paper proposes a new three-dimensional discrete cosine transform (3D-DCT) based video compression algorithm that selects the optimal cube size based on the motion content of the video sequence. The motion content is determined from normalized pixel difference (NPD) values: by categorizing cubes as "low" or "high" motion, a suitable cube size of either [16×16×8] or [8×8×8] is chosen, instead of using a fixed-cube algorithm. To evaluate the performance of the proposed algorithm, test sequences with different motion levels are chosen. Rate vs. distortion analysis determines the achievable level of compression and the quality of the reconstructed video sequence, which are compared against the fixed-cube-size algorithm. Peak signal-to-noise ratio (PSNR) is used to measure video quality. Experimental results show that varying the cube size with reference to the motion content of the video frames gives better performance in terms of compression ratio and video quality.
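
    The motion-adaptive selection can be sketched as follows: an NPD measure over consecutive frames decides between the two cube geometries before the 3D-DCT is applied. The NPD threshold, toy frames, and coefficient thresholding below are illustrative assumptions, not the paper's tuned parameters.

    ```python
    # Sketch: NPD-driven cube-size selection followed by a 3D-DCT on the chosen cube.
    import numpy as np
    from scipy.fft import dctn

    def npd(frames):
        """Mean absolute inter-frame difference, normalized to [0, 1]."""
        return np.abs(np.diff(frames.astype(float), axis=0)).mean() / 255.0

    rng = np.random.default_rng(5)
    frames = rng.integers(0, 256, (8, 16, 16))     # 8 frames of 16x16 video

    side = 8 if npd(frames) > 0.05 else 16         # high motion -> [8x8x8] cubes
    cube = frames[:, :side, :side].astype(float)   # one cube of the chosen size
    coeffs = dctn(cube, norm="ortho")              # 3D-DCT over the chosen cube
    coeffs[np.abs(coeffs) < 5.0] = 0.0             # toy quantization/thresholding
    print("cube", cube.shape, "nonzero coeffs:", np.count_nonzero(coeffs))
    ```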

  4. Uniaxial compression test series on Bullfrog Tuff

    International Nuclear Information System (INIS)

    Price, R.H.; Jones, A.K.; Nimick, K.G.

    1982-04-01

    Nineteen uniaxial compressive experiments were performed on samples of the Bullfrog Member of the Crater Flat Tuff, obtained from drillhole USW-G1 at Yucca Mountain on the Nevada Test Site. The water-saturated samples were deformed at a nominal strain rate of 10^-5 sec^-1, at atmospheric pressure and room temperature. The resultant unconfined compressive strengths, axial strains to failure, Young's moduli, and Poisson's ratios ranged from 4.63 to 153 MPa, 0.0028 to 0.0058, 2.03 to 28.9 GPa, and 0.08 to 0.16, respectively

  5. Compressive Strength of EN AC-44200 Based Composite Materials Strengthened with α-Al2O3 Particles

    Directory of Open Access Journals (Sweden)

    Kurzawa A.

    2017-06-01

    Full Text Available The paper presents the results of compressive strength investigations of EN AC-44200 aluminum alloy based composite materials reinforced with aluminum oxide particles, at ambient temperature and at 100, 200, and 250°C. The composites were manufactured by squeeze casting of porous preforms made of α-Al2O3 particles with the liquid aluminum alloy EN AC-44200. The preforms were characterized by porosities of 90, 80, 70, and 60 vol.%, so the alumina content in the composite materials was 10, 20, 30, and 40 vol.%. The compressive strength results for the manufactured materials are presented, and, based on microscopic observations, the effect of the volume content of the strengthening alumina particles on the cracking mechanisms during compression at the indicated temperatures is shown and discussed. The composite materials strengthened with 40 vol.% of α-Al2O3 particles showed the highest compressive strength, 470 MPa at ambient temperature.

  6. Least median of squares filtering of locally optimal point matches for compressible flow image registration

    International Nuclear Information System (INIS)

    Castillo, Edward; Guerrero, Thomas; Castillo, Richard; White, Benjamin; Rojo, Javier

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel that lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. (paper)
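
    Least median of squares fitting, the robust estimator behind the filtering step, replaces the mean of squared residuals with their median, so the fit tolerates up to half the data being gross outliers. The sketch below applies it to a toy 1D linear model via random minimal sampling; the paper applies the same principle to reject spatially inaccurate grid-search point matches.

    ```python
    # Least-median-of-squares line fit sketch via random minimal 2-point samples.
    import numpy as np

    def lms_fit(x, y, n_trials=500, seed=6):
        """Fit y ~ a*x + b by minimizing the median of squared residuals."""
        rng = np.random.default_rng(seed)
        best, best_med = None, np.inf
        for _ in range(n_trials):
            i, j = rng.choice(len(x), 2, replace=False)    # minimal sample
            if x[i] == x[j]:
                continue
            a = (y[j] - y[i]) / (x[j] - x[i])
            b = y[i] - a * x[i]
            med = np.median((y - (a * x + b)) ** 2)        # median, not mean: robust
            if med < best_med:
                best, best_med = (a, b), med
        return best

    x = np.linspace(0, 1, 50)
    y = 2 * x + 1 + 0.01 * np.random.default_rng(7).normal(size=50)
    y[:10] = 5.0                                           # 20% gross outliers
    print("LMS estimate (a, b):", lms_fit(x, y))
    ```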

  7. Bulk characterization of pharmaceutical powders by low-pressure compression II

    DEFF Research Database (Denmark)

    Hagsten Sørensen, A.; Sonnergaard, Jørn; Hovgaard, L.

    2006-01-01

    The aim of the present study was to investigate the effect of punch and die diameter, sample size, compression speed, and particle size on two low-pressure compression-derived parameters: the compressed density and the Walker w parameter. The excellent repeatability of the low-pressure compressio...

  8. Magni: A Python Package for Compressive Sampling and Reconstruction of Atomic Force Microscopy Images

    DEFF Research Database (Denmark)

    Oxvig, Christian Schou; Pedersen, Patrick Steffen; Arildsen, Thomas

    2014-01-01

    Magni is an open source Python package that embraces compressed sensing and Atomic Force Microscopy (AFM) imaging techniques. It provides AFM-specific functionality for undersampling and reconstructing images from AFM equipment and thereby accelerating the acquisition of AFM images. Magni also pr...... as a convenient platform for researchers in compressed sensing aiming at obtaining a high degree of reproducibility of their research....

  9. Photon level chemical classification using digital compressive detection

    International Nuclear Information System (INIS)

    Wilcox, David S.; Buzzard, Gregery T.; Lucier, Bradley J.; Wang Ping; Ben-Amotz, Dor

    2012-01-01

    Highlights: ► A new digital compressive detection strategy is developed. ► Chemical classification demonstrated using as few as ∼10 photons. ► Binary filters are optimal when taking few measurements. - Abstract: A key bottleneck to high-speed chemical analysis, including hyperspectral imaging and monitoring of dynamic chemical processes, is the time required to collect and analyze hyperspectral data. Here we describe, both theoretically and experimentally, a means of greatly speeding up the collection of such data using a new digital compressive detection strategy. Our results demonstrate that detecting as few as ∼10 Raman scattered photons (in as little time as ∼30 μs) can be sufficient to positively distinguish chemical species. This is achieved by measuring the Raman scattered light intensity transmitted through programmable binary optical filters designed to minimize the error in the chemical classification (or concentration) variables of interest. The theoretical results are implemented and validated using a digital compressive detection instrument that incorporates a 785 nm diode excitation laser, digital micromirror spatial light modulator, and photon counting photodiode detector. Samples consisting of pairs of liquids with different degrees of spectral overlap (including benzene/acetone and n-heptane/n-octane) are used to illustrate how the accuracy of the present digital compressive detection method depends on the correlation coefficients of the corresponding spectra. Comparisons of measured and predicted chemical classification score plots, as well as linear and non-linear discriminant analyses, demonstrate that this digital compressive detection strategy is Poisson photon noise limited and outperforms total least squares-based compressive detection with analog filters.
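
    The classification step can be illustrated with a small simulation. The sketch below (Python with numpy/scipy; the two spectra, the binary filter and the photon budget are synthetic stand-ins, not the paper's instrument or data) draws a Poisson-limited photon count through a binary filter and classifies by binomial likelihood:

    ```python
    import numpy as np
    from scipy.stats import binom

    rng = np.random.default_rng(0)

    # Two synthetic, partially overlapping "spectra", normalized to unit area.
    x = np.linspace(0, 1, 256)
    s_a = np.exp(-(x - 0.35) ** 2 / 0.01); s_a /= s_a.sum()
    s_b = np.exp(-(x - 0.55) ** 2 / 0.01); s_b /= s_b.sum()

    # Hypothetical binary filter: pass the channels where species A dominates.
    f = (s_a > s_b).astype(float)
    p_a = float((f * s_a).sum())   # transmitted photon fraction under hypothesis A
    p_b = float((f * s_b).sum())   # transmitted photon fraction under hypothesis B

    def classify(n_photons, true_p):
        """One measurement: count transmitted photons, pick the likelier species."""
        k = rng.binomial(n_photons, true_p)
        ll_a = binom.logpmf(k, n_photons, p_a)
        ll_b = binom.logpmf(k, n_photons, p_b)
        return "A" if ll_a > ll_b else "B"

    # With moderately separated spectra, ~10 detected photons classify reliably.
    trials = [classify(10, p_a) for _ in range(1000)]
    print("fraction correctly classified as A:", trials.count("A") / len(trials))
    ```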

  10. Gleeble Testing of Tungsten Samples

    Science.gov (United States)

    2013-02-01

    temperature on an Instron load frame with a 222.41 kN (50 kip) load cell. The samples were compressed at the same strain rate as on the Gleeble... [table fragment — columns: ID, % RE, Initial Density (cm3), Density after Compression (cm3), % Change in Density, Test Temperature; first row: NT1, 0, 18.08, 18.27, 1.06, 1000; second row: NT3, 0, ...] ...4.1 Nano-Tungsten: The results for the compression of the nano-tungsten samples are shown in tables 2 and 3 and figure 5. During testing, sample NT1

  11. The application of sparse linear prediction dictionary to compressive sensing in speech signals

    Directory of Open Access Journals (Sweden)

    YOU Hanxu

    2016-04-01

    Full Text Available Applying compressive sensing (CS), which theoretically guarantees that signal sampling and signal compression can be achieved simultaneously, to audio and speech signal processing has been one of the most popular research topics in recent years. In this paper, the K-SVD algorithm was employed to learn a sparse linear prediction dictionary serving as the sparse basis of the underlying speech signals. Compressed signals were obtained by applying a random Gaussian matrix to sample the original speech frames. Orthogonal matching pursuit (OMP) and compressive sampling matching pursuit (CoSaMP) were adopted to recover the original signals from the compressed ones. A number of experiments were carried out to investigate the impact of speech frame length, compression ratio, sparse basis and reconstruction algorithm on CS performance. Results show that a sparse linear prediction dictionary can improve the performance of speech signal reconstruction compared with a discrete cosine transform (DCT) matrix.
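
    A minimal reconstruction sketch in Python (numpy/scipy/scikit-learn assumed; an orthonormal inverse-DCT dictionary stands in for the learned sparse linear prediction dictionary, and the "speech frame" is synthetic):

    ```python
    import numpy as np
    from scipy.fft import idct
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(1)
    n, m, k = 256, 96, 10          # frame length, measurements, sparsity

    # Sparse basis: orthonormal inverse-DCT columns as a stand-in dictionary.
    Psi = idct(np.eye(n), axis=0, norm="ortho")

    # Synthetic "speech frame" that is exactly k-sparse in the basis.
    alpha = np.zeros(n)
    alpha[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    frame = Psi @ alpha

    # Random Gaussian sampling, as in the paper.
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    y = Phi @ frame

    # OMP recovery of the sparse coefficients, then frame synthesis.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
    omp.fit(Phi @ Psi, y)
    frame_hat = Psi @ omp.coef_
    print("relative error:",
          np.linalg.norm(frame - frame_hat) / np.linalg.norm(frame))
    ```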

  12. Effects of Direct Fuel Injection Strategies on Cycle-by-Cycle Variability in a Gasoline Homogeneous Charge Compression Ignition Engine: Sample Entropy Analysis

    Directory of Open Access Journals (Sweden)

    Jacek Hunicz

    2015-01-01

    Full Text Available In this study we summarize and analyze experimental observations of cyclic variability in homogeneous charge compression ignition (HCCI) combustion in a single-cylinder gasoline engine. The engine was configured with negative valve overlap (NVO) to trap residual gases from prior cycles and thus enable auto-ignition in successive cycles. Correlations were developed between different fuel injection strategies and cycle-averaged combustion and work output profiles. Hypothesized physical mechanisms based on these correlations were then compared with trends in cycle-by-cycle predictability as revealed by sample entropy. The results of these comparisons help to clarify how fuel injection strategy can interact with prior-cycle effects to affect combustion stability, and so contribute to the design of control methods for HCCI engines.
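
    Sample entropy itself is easy to state: SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates that match within tolerance r and A counts the corresponding length-(m+1) matches. A compact sketch (numpy assumed; the per-cycle series below is a synthetic stand-in for measured quantities such as IMEP):

    ```python
    import numpy as np

    def sample_entropy(u, m=2, r=0.2):
        """SampEn(m, r) of a 1-D series: -ln(A/B), with Chebyshev distance and
        self-matches excluded. r is a fraction of the series' standard deviation."""
        u = np.asarray(u, dtype=float)
        tol = r * u.std()

        def count(mm):
            templates = np.lib.stride_tricks.sliding_window_view(u, mm)
            c = 0
            for i in range(len(templates) - 1):
                d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                c += np.count_nonzero(d <= tol)
            return c

        B, A = count(m), count(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    # Higher values indicate less predictable cycle-to-cycle behaviour.
    cycles = np.random.default_rng(2).normal(5.0, 0.3, 500)  # hypothetical series
    print(sample_entropy(cycles))
    ```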

  13. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

    Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of the warped stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

  14. A new DWT/MC/DPCM video compression framework based on EBCOT

    Science.gov (United States)

    Mei, L. M.; Wu, H. R.; Tan, D. M.

    2005-07-01

    A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although the Discrete Cosine Transform (DCT)/MC/DPCM is the mainstream framework for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed in the literature for more than a decade and its investigation is still ongoing. The contribution of this work is twofold. Firstly, Embedded Block Coding with Optimal Truncation (EBCOT) is used here as the compression engine for both intra- and inter-frame coding, which provides a good compression ratio and an embedded rate-distortion (R-D) optimization mechanism. This is an extension of the EBCOT application from still images to video. Secondly, this framework offers a good interface for the Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), where the Mean Squared Error (MSE) can easily be replaced with the PDM in the R-D optimization. Some preliminary results are reported here and compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that under the specified conditions the proposed coder outperforms the benchmarks in terms of rate versus distortion.

  15. Building indifferentiable compression functions from the PGV compression functions

    DEFF Research Database (Denmark)

    Gauravaram, P.; Bagheri, Nasour; Knudsen, Lars Ramkilde

    2016-01-01

    Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block cipher based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black......, Rogaway and Shrimpton formally proved this result in the ideal cipher model. However, in the indifferentiability security framework introduced by Maurer, Renner and Holenstein, all these 12 schemes are easily differentiable from a fixed input-length random oracle (FIL-RO) even when their underlying block...

  16. A method of vehicle license plate recognition based on PCANet and compressive sensing

    Science.gov (United States)

    Ye, Xianyi; Min, Feng

    2018-03-01

    The manual feature extraction of traditional methods for vehicle license plates lacks robustness to diverse conditions. Moreover, the high dimension of the features extracted with the Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from the images of characters. Then, a sparse measurement matrix, which is a very sparse matrix consistent with the Restricted Isometry Property (RIP) condition of compressed sensing, is used to reduce the dimension of the extracted features. Finally, a Support Vector Machine (SVM) is used to train on and recognize the reduced-dimension features. Experimental results demonstrate that the proposed method has better performance than a Convolutional Neural Network (CNN) in both recognition accuracy and time. Compared with omitting compressive sensing, the proposed method has a lower feature dimension, increasing efficiency.
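
    The dimension-reduction step can be sketched with a very sparse random projection, which satisfies RIP-style guarantees with high probability (scikit-learn assumed; the "PCANet features" and labels below are random stand-ins, not the paper's pipeline):

    ```python
    import numpy as np
    from sklearn.random_projection import SparseRandomProjection
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(6)
    X = rng.standard_normal((600, 4096))   # stand-in for PCANet character features
    y = rng.integers(0, 10, 600)           # stand-in for character-class labels

    # Very sparse random matrix (density ~ 1/sqrt(d)) as the measurement matrix.
    proj = SparseRandomProjection(n_components=256, random_state=0)
    X_low = proj.fit_transform(X)

    clf = LinearSVC().fit(X_low, y)        # SVM trains on 256-D instead of 4096-D
    print(X.shape, "->", X_low.shape)
    ```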

  17. Accelerated Compressed Sensing Based CT Image Reconstruction.

    Science.gov (United States)

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.
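
    The weighted CS formulation can be illustrated independently of the pseudopolar machinery. Below is a generic iterative shrinkage-thresholding (ISTA) sketch for min_x ||W^(1/2)(Ax - b)||^2 + lam*||x||_1 (numpy assumed; this is a textbook solver, not the paper's implementation):

    ```python
    import numpy as np

    def ista_weighted(A, W, b, lam=0.01, step=None, n_iter=200):
        """Minimize ||W^(1/2)(A x - b)||_2^2 + lam*||x||_1 by iterative
        shrinkage-thresholding. W is a 1-D array of per-measurement weights."""
        if step is None:
            # 1/L, with L an upper bound on the gradient's Lipschitz constant.
            L = 2 * np.linalg.norm(A, 2) ** 2 * W.max()
            step = 1.0 / L
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = 2 * A.T @ (W * (A @ x - b))
            z = x - step * grad
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
        return x

    # Toy usage: recover a sparse vector from weighted random measurements.
    rng = np.random.default_rng(8)
    A = rng.standard_normal((80, 200)) / np.sqrt(80)
    x_true = np.zeros(200); x_true[rng.choice(200, 8, replace=False)] = 1.0
    W = rng.uniform(0.5, 1.5, 80)          # per-measurement confidence weights
    x_hat = ista_weighted(A, W, A @ x_true, lam=0.02, n_iter=500)
    print("largest recovered entries:", np.sort(np.argsort(-np.abs(x_hat))[:8]))
    ```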

  19. A review on compressed pattern matching

    Directory of Open Access Journals (Sweden)

    Surya Prakash Mishra

    2016-09-01

    Full Text Available Compressed pattern matching (CPM) refers to the task of locating all the occurrences of a pattern (or set of patterns) inside the body of compressed text. In this type of matching, the pattern may or may not itself be compressed. CPM is very useful in handling large volumes of data, especially over a network. It has many applications in computational biology, where it is useful in finding similar trends in DNA sequences, as well as in intrusion detection over networks, big data analytics, etc. Various solutions have been provided by researchers in which the pattern is matched directly over uncompressed text. Such solutions require a lot of space and consume a lot of time when handling big data. Many researchers have proposed efficient solutions for compression, but very few exist for pattern matching over compressed text. Considering the future trend, where data size is increasing exponentially day by day, CPM has become a desirable task. This paper presents a critical review of recent techniques for compressed pattern matching. The covered techniques include word-based Huffman codes, word-based tagged codes, and wavelet-tree-based indexing. We present a comparative analysis of all the techniques mentioned above and highlight their advantages and disadvantages.
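
    Of the covered techniques, word-based Huffman coding is the simplest to sketch: the code is built over words rather than characters, so word boundaries in the compressed stream align with pattern boundaries. A toy builder in pure Python (the text is illustrative only):

    ```python
    import heapq
    from collections import Counter
    from itertools import count

    def word_huffman_codes(text):
        """Build a Huffman code over words, as in word-based Huffman compression."""
        freq = Counter(text.split())
        tiebreak = count()  # prevents comparing dict nodes when frequencies tie
        heap = [(f, next(tiebreak), {w: ""}) for w, f in freq.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            merged = {w: "0" + code for w, code in c1.items()}
            merged.update({w: "1" + code for w, code in c2.items()})
            heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
        return heap[0][2]

    codes = word_huffman_codes("to be or not to be that is the question to be")
    print(codes)
    print("encoded 'to be':", codes["to"] + codes["be"])
    ```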

  20. Cosmological Particle Data Compression in Practice

    Science.gov (United States)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from focusing only on compression rates to including run-times and scalability. In recent years several compression techniques for cosmological data have become available, both lossless and lossy. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. The study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, the study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
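
    The evaluation methodology (compression ratio weighed against throughput) can be reproduced with stock lossless codecs from the Python standard library; the array below is a synthetic stand-in for one particle attribute, and real snapshots with more spatial coherence typically compress better:

    ```python
    import time, zlib, lzma
    import numpy as np

    # Synthetic stand-in: 2M float32 values, sorted to mimic spatial coherence.
    vals = np.sort(np.random.default_rng(3).random(2_000_000).astype(np.float32))
    data = vals.tobytes()

    for name, compress in [("zlib-6", lambda b: zlib.compress(b, 6)),
                           ("lzma-1", lambda b: lzma.compress(b, preset=1))]:
        t0 = time.perf_counter()
        out = compress(data)
        dt = time.perf_counter() - t0
        print(f"{name}: ratio {len(data) / len(out):.2f}, "
              f"throughput {len(data) / dt / 1e6:.1f} MB/s")
    ```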

  1. Compressed sensing techniques for receiver based post-compensation of transmitter's nonlinear distortions in OFDM systems

    KAUST Repository

    Owodunni, Damilola S.

    2014-04-01

    In this paper, compressed sensing techniques are proposed to linearize commercial power amplifiers driven by orthogonal frequency division multiplexing signals. The nonlinear distortion is considered as a sparse phenomenon in the time-domain, and three compressed sensing based algorithms are presented to estimate and compensate for these distortions at the receiver using a few and, at times, even no frequency-domain free carriers (i.e. pilot carriers). The first technique is a conventional compressed sensing approach, while the second incorporates a priori information about the distortions to enhance the estimation. Finally, the third technique involves an iterative data-aided algorithm that does not require any pilot carriers and hence allows the system to work at maximum bandwidth efficiency. The performances of all the proposed techniques are evaluated on a commercial power amplifier and compared. The error vector magnitude and symbol error rate results show the ability of compressed sensing to compensate for the amplifier's nonlinear distortions. © 2013 Elsevier B.V.

  2. Research of compression strength of fissured rock mass

    Directory of Open Access Journals (Sweden)

    А. Г. Протосеня

    2017-03-01

    Full Text Available The article examines a method of forecasting the strength properties of fissured rock mass, and their scale effect, using computational modelling with the finite element method in ABAQUS software. It shows the advantages of this approach for determining the mechanical properties of fissured rock mass, the main stages of creating a computational geomechanical model of the rock mass, and the conduct of a numerical experiment. The article presents the relations between the deformation of the numerical model under loading, the inclination angle of the main fracture system, the sample size of the fissured rock mass, and the uniaxial and biaxial compression strength values, under the conditions of the apatite-nepheline rock deposit at Plateau Rasvumchorr of OAO «Apatit» in the Kirovsky region of Murmanskaya oblast. Computational modelling of tests of rock mass blocks with discontinuities was conducted on the basis of a real experiment using the non-linear Barton–Bandis shear strength criterion, and the results of the computational experiments were compared with data from field studies and laboratory tests. The calculation results match the laboratory results for fissured rock mass samples well.

  3. Reducing test-data volume and test-power simultaneously in LFSR reseeding-based compression environment

    Energy Technology Data Exchange (ETDEWEB)

    Wang Weizheng; Kuang Jishun; You Zhiqiang; Liu Peng, E-mail: jshkuang@163.com [College of Information Science and Engineering, Hunan University, Changsha 410082 (China)

    2011-07-15

    This paper presents a new test scheme based on scan block encoding in a linear feedback shift register (LFSR) reseeding-based compression environment. The paper also introduces a novel scan-block clustering algorithm. The main contribution is a flexible test-application framework that achieves significant reductions in switching activity during scan shift and in the number of specified bits that need to be generated via LFSR reseeding. Thus, it can significantly reduce both test power and test data volume. Experimental results using the Mintest test set on the larger ISCAS'89 benchmarks show that the proposed method reduces switching activity significantly, by 72%-94%, and provides a best possible test compression of 74%-94% with little hardware overhead. (semiconductor integrated circuits)
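
    For reference, the pattern source at the heart of such schemes is the LFSR itself: reseeding stores only the seeds from which deterministic test cubes are expanded. A toy Fibonacci LFSR in Python (the width, taps and seed are illustrative only):

    ```python
    def lfsr_patterns(seed, taps, width, n):
        """Generate n test patterns from a Fibonacci LFSR.
        seed:  initial register contents (nonzero int)
        taps:  bit positions XORed to form the feedback bit
        width: register length in bits"""
        state = seed
        for _ in range(n):
            yield state
            fb = 0
            for t in taps:
                fb ^= (state >> t) & 1
            state = ((state << 1) | fb) & ((1 << width) - 1)

    # Maximal-length 8-bit LFSR (taps of x^8 + x^6 + x^5 + x^4 + 1).
    for p in lfsr_patterns(0b1, taps=(7, 5, 4, 3), width=8, n=5):
        print(f"{p:08b}")
    ```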

  4. Resource efficient data compression algorithms for demanding, WSN based biomedical applications.

    Science.gov (United States)

    Antonopoulos, Christos P; Voros, Nikolaos S

    2016-02-01

    During the last few years, medical research areas of critical importance, such as epilepsy monitoring and study, have increasingly utilized wireless sensor network technologies in order to achieve better understanding and significant breakthroughs. However, the limited memory and communication bandwidth offered by WSN platforms constitute a significant shortcoming for such demanding application scenarios. Although data compression can mitigate such deficiencies, there is a lack of objective and comprehensive evaluation of the relevant approaches, and even more so of specialized approaches targeting specific demanding applications. The research work presented in this paper focuses on implementing and offering an in-depth experimental study of prominent existing as well as novel proposed compression algorithms. All algorithms have been implemented in a common Matlab framework. A major contribution of this paper, which differentiates it from similar research efforts, is the employment of real-world Electroencephalography (EEG) and Electrocardiography (ECG) datasets comprising the two most demanding epilepsy modalities. Emphasis is put on WSN applications, thus the respective metrics focus on compression rate and execution latency for the selected datasets. The evaluation results reveal significant performance and behavioral characteristics of the algorithms related to their complexity and the negative effect on compression latency, as opposed to the increased compression rate. Notably, the proposed schemes offer a considerable advantage in achieving the optimum tradeoff between compression rate and latency. Specifically, one proposed algorithm combines a highly competitive level of compression while ensuring minimum latency, thus exhibiting real-time capabilities. Additionally, one of the proposed schemes is compared against state-of-the-art general-purpose compression algorithms, also exhibiting considerable advantages as far as the

  5. Influence of crystal habit on the compression and densification mechanism of ibuprofen

    Science.gov (United States)

    Di Martino, Piera; Beccerica, Moira; Joiris, Etienne; Palmieri, Giovanni F.; Gayot, Anne; Martelli, Sante

    2002-08-01

    Ibuprofen was recrystallized from several solvents by two different methods: addition of a non-solvent to a drug solution, and cooling of a drug solution. Four samples, characterized by different crystal habits, were selected: sample A, sample E and sample T, recrystallized respectively from acetone, ethanol and THF by addition of water as a non-solvent, and sample M, recrystallized from methanol by temperature decrease. By SEM analysis, the samples were characterized with respect to their crystal habit, mean particle diameter and elongation ratio. Sample A appears stick-shaped, sample E acicular with lamellar characteristics, and samples T and M polyhedral. DSC and X-ray diffraction studies make it possible to exclude a polymorphic modification of ibuprofen during crystallization. For all samples, the micromeritic properties, densification behaviour and compression ability were analysed. Sample M shows a higher densification tendency, evidenced by its higher apparent and tapped particle density. The ability to densify is also indicated by the D0' value of the Heckel plot, which reflects the rearrangement of the original particles at the initial stage of compression. This is related to the crystal habit of sample M, which is characterized by strongly smoothed corners. The increase in powder bed porosity permits greater particle-particle interaction during the subsequent stage of compression, which allows higher tabletability and compressibility.

  6. Enhancing the compressive strength of landfill soil using cement and bagasse ash

    Science.gov (United States)

    Azim, M. A. M.; Azhar, A. T. S.; Tarmizi, A. K. A.; Shahidan, S.; Nabila, A. T. A.

    2017-11-01

    The stabilisation of contaminated soil with cement and agricultural waste is a widely applied method which contributes to the sustainability of the environment. Soil may be stabilised to increase strength and durability or to prevent erosion and other geotechnical failures. This study was carried out to evaluate the compressive strength of ex-landfill soil when cement and bagasse ash (BA) are added to it. Different proportions of cement (5%, 10%, 15% and 20%) were added to sample weights without BA. In a different batch of samples, part of the cement was replaced by 2.5%, 5%, 7.5% and 10% of BA. All samples were allowed to harden and were cured at room temperature for 7, 14 and 28 days, respectively. The strength of the contaminated soil was assessed using an unconfined compressive strength (UCS) test. The laboratory tests also included the index properties of the soil, cement and bagasse ash in raw form. The results indicated that the samples with cement alone achieved the highest compressive strength, measuring 4.39 MPa. However, the study revealed that the use of bagasse ash produced lower-quality products with a reduction in strength. For example, when 5% of the cement was replaced with 5% ash, the compressive strength decreased by about 54%, from 0.72 MPa to 0.33 MPa. Similarly, the compressive strength of each sample after a curing period of 28 days was higher compared to samples cured for 7 and 14 days. This proves that a longer curing period is needed to increase the compressive strength of the samples.

  7. The effect of compression on clinical diagnosis of glaucoma based on non-analyzed confocal scanning laser ophthalmoscopy images

    NARCIS (Netherlands)

    Abramoff, M.D.

    2006-01-01

    Knowledge of the effect of compression of ophthalmic images on diagnostic reading is essential for effective tele-ophthalmology applications. It was therefore with great anticipation that I read the article “The Effect of Compression on Clinical Diagnosis of Glaucoma Based on Non-analyzed Confocal

  8. Rapid compression induced solidification of two amorphous phases of poly(ethylene terephthalate)

    Energy Technology Data Exchange (ETDEWEB)

    Hong, S M [Laboratory of High Pressure Physics, Southwest Jiaotong University, Chengdu, 610031 (China); Liu, X R [Laboratory of High Pressure Physics, Southwest Jiaotong University, Chengdu, 610031 (China); Su, L [Laboratory of High Pressure Physics, Southwest Jiaotong University, Chengdu, 610031 (China); Huang, D H [Laboratory of High Pressure Physics, Southwest Jiaotong University, Chengdu, 610031 (China); Li, L B [Foods Research Centre Unilever R and D, Vlaardingen Olivier van Noortlaan, 120, 3133 AT Vlaardingen (Netherlands)

    2006-08-21

    Melts of poly(ethylene terephthalate) were solidified by rapid compression to 2 GPa within 20 ms and by a series of comparative processes including natural cooling, slow compression and rapid cooling, respectively. By combining XRD and differential scanning calorimetry data for the recovered samples, it becomes clear that rapid compression induces two kinds of amorphous phases. One is relatively stable and can also be formed in the slow compression and cooling processes. The other is metastable and transforms to a crystalline phase at 371 K. This metastable amorphous phase cannot be obtained by slow compression or natural cooling, and its crystallization temperature is remarkably different from that of the metastable amorphous phase formed in the rapidly cooled sample.

  9. Compressive sensing of full wave field data for structural health monitoring applications

    DEFF Research Database (Denmark)

    di Ianni, Tommaso; De Marchi, Luca; Perelli, Alessandro

    2015-01-01

    ; however, the acquisition process is generally time-consuming, posing a limit in the applicability of such approaches. To reduce the acquisition time, we use a random sampling scheme based on compressive sensing (CS) to minimize the number of points at which the field is measured. The CS reconstruction...

  10. Bulk characterization of pharmaceutical powders by low-pressure compression

    DEFF Research Database (Denmark)

    Sørensen, A.H.; Sonnergaard, Jørn; Hovgaard, L.

    2005-01-01

    Low-pressure compression of pharmaceutical powders using small amounts of sample (50 mg) was evaluated as an alternative to traditional bulk powder characterization by tapping volumetry. Material parameters were extrapolated directly from the compression data and by fitting with the Walker...

  11. Discrete Wigner Function Reconstruction and Compressed Sensing

    OpenAIRE

    Zhang, Jia-Ning; Fang, Lei; Ge, Mo-Lin

    2011-01-01

    A new reconstruction method for the Wigner function is reported for quantum tomography based on compressed sensing. By analogy with computed tomography, Wigner functions for some quantum states can be reconstructed from fewer measurements utilizing this compressed-sensing-based method.

  12. Toward compression of small cell population: harnessing stress in passive regions of dielectric elastomer actuators

    Science.gov (United States)

    Poulin, Alexandre; Rosset, Samuel; Shea, Herbert

    2014-03-01

    We present a dielectric elastomer actuator (DEA) for in vitro analysis of mm2 biological samples under periodic compressive stress. Understanding how mechanical stimuli affect cell functions could lead to significant advances in disease diagnosis and drug development. We previously reported an array of 72 micro-DEAs on a chip to apply a periodic stretch to cells. To diversify our cell mechanotransduction toolkit we have developed an actuator for periodic compression of small cell populations. The device is based on a novel design which exploits the effects of non-equibiaxial pre-stretch and takes advantage of the stress induced in the passive regions of DEAs. The device consists of two active regions separated by a 2 mm x 2 mm passive area. When connected to an AC high-voltage source, the two active regions periodically compress the passive region. Due to the non-equibiaxial pre-stretch, the device induces uniaxial compressive strain greater than 10%. Cells adsorbed on top of this passive gap would experience the same uniaxial compressive strain. The electrode configuration confines the electric field and prevents it from reaching the biological sample. A thin layer of silicone is cast on top of the device to ensure a biocompatible environment. This design provides several advantages over alternative technologies, such as high optical transparency of the area of interest (the passive region under compression) and its potential for miniaturization and parallelization.

  13. Effects of Leaching Behavior of Calcium Ions on Compression and Durability of Cement-Based Materials with Mineral Admixtures

    Science.gov (United States)

    Cheng, An; Chao, Sao-Jeng; Lin, Wei-Ting

    2013-01-01

    Leaching of calcium ions increases the porosity of cement-based materials, consequently resulting in a negative effect on durability since it provides an entry for aggressive harmful ions, causing reinforcing steel corrosion. This study investigates the effects of leaching behavior of calcium ions on the compression and durability of cement-based materials. Since the parameters influencing the leaching behavior of cement-based materials are unclear and diverse, this paper focuses on the influence of added mineral admixtures (fly ash, slag and silica fume) on the leaching behavior of calcium ions regarding compression and durability of cement-based materials. Ammonium nitrate solution was used to accelerate the leaching process in this study. Scanning electron microscopy, X-ray diffraction analysis, and thermogravimetric analysis were employed to analyze and compare the cement-based material compositions prior to and after calcium ion leaching. The experimental results show that the mineral admixtures reduce calcium hydroxide quantity and refine pore structure through pozzolanic reaction, thus enhancing the compressive strength and durability of cement-based materials. PMID:28809247

  14. An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks.

    Science.gov (United States)

    Wang, Donghao; Wan, Jiangwen; Chen, Junying; Zhang, Qiang

    2016-09-22

    To adapt to sense signals of enormous diversities and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning method-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned from a two-stage iterative procedure, alternately changing between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained with sparse structure. It is theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound of the necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary based data gathering methods.
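
    The two-stage alternation (sparse coding, then dictionary update) can be sketched with scikit-learn's online dictionary learner; the "sensor snapshots" below are synthetic stand-ins, and the paper's self-coherence penalty is not reproduced:

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    rng = np.random.default_rng(4)
    # Hypothetical readings: 500 snapshots of a 64-node network, each a noisy
    # mixture of a few smooth spatial modes.
    t = np.linspace(0, 1, 64)
    modes = np.stack([np.sin(2 * np.pi * k * t) for k in range(1, 6)])
    X = rng.standard_normal((500, 5)) @ modes + 0.05 * rng.standard_normal((500, 64))

    # Online (mini-batch) learning alternates sparse coding and dictionary update.
    dl = MiniBatchDictionaryLearning(n_components=16, transform_algorithm="omp",
                                     transform_n_nonzero_coefs=5, random_state=0)
    codes = dl.fit(X).transform(X)
    X_hat = codes @ dl.components_
    print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
    ```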

  15. Direct current force sensing device based on compressive spring, permanent magnet, and coil-wound magnetostrictive/piezoelectric laminate.

    Science.gov (United States)

    Leung, Chung Ming; Or, Siu Wing; Ho, S L

    2013-12-01

    A force sensing device capable of sensing dc (or static) compressive forces is developed based on a NAS106N stainless steel compressive spring, a sintered NdFeB permanent magnet, and a coil-wound Tb(0.3)Dy(0.7)Fe(1.92)/Pb(Zr, Ti)O3 magnetostrictive∕piezoelectric laminate. The dc compressive force sensing in the device is evaluated theoretically and experimentally and is found to originate from a unique force-induced, position-dependent, current-driven dc magnetoelectric effect. The sensitivity of the device can be increased by increasing the spring constant of the compressive spring, the size of the permanent magnet, and/or the driving current for the coil-wound laminate. Devices of low-force (20 N) and high-force (200 N) types, showing high output voltages of 262 and 128 mV peak, respectively, are demonstrated at a low driving current of 100 mA peak by using different combinations of compressive spring and permanent magnet.

  16. Cloud Optimized Image Format and Compression

    Science.gov (United States)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing requires a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be efficiently accessed using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  17. Evaluation of a wavelet-based compression algorithm applied to the silicon drift detectors data of the ALICE experiment at CERN

    International Nuclear Information System (INIS)

    Falchieri, Davide; Gandolfi, Enzo; Masotti, Matteo

    2004-01-01

    This paper evaluates the performance of a wavelet-based compression algorithm applied to the data produced by the silicon drift detectors of the ALICE experiment at CERN. This compression algorithm is a general-purpose lossy technique; in other words, its application could prove useful even on a wide range of other data reduction problems. The design targets relevant for our wavelet-based compression algorithm are the following: a high compression coefficient, a reconstruction error as small as possible, and a very limited execution time. Interestingly, the results obtained are quite close to the ones achieved by the algorithm implemented in the first prototype of the chip CARLOS, the chip that will be used in the silicon drift detector readout chain.

  18. Electromechanical modeling and experimental analysis of a compression-based piezoelectric vibration energy harvester

    Directory of Open Access Journals (Sweden)

    X.Z. Jiang

    2014-07-01

    Full Text Available Over the past few decades, wireless sensor networks have been widely used in the field of structural health monitoring of civil, mechanical, and aerospace systems. Currently, most wireless sensor networks are battery-powered, and maintenance is costly and unsustainable because of the requirement for frequent battery replacement. As an attempt to address this issue, this article theoretically and experimentally studies a compression-based piezoelectric energy harvester using a multilayer stack configuration, which is suitable for civil infrastructure applications where large compressive loads occur, such as heavy vehicular loading acting on pavements. In this article, we first present analytical and numerical modeling of the piezoelectric multilayer stack under axial compressive loading, based on the linear theory of piezoelectricity. A two-degree-of-freedom electromechanical model, considering both the mechanical and electrical aspects of the proposed harvester, was developed to characterize the harvested electrical power under an external electrical load. Exact closed-form expressions of the electromechanical models have been derived to analyze the mechanical and electrical properties of the proposed harvester. The theoretical analyses are validated through several experiments on a test prototype under harmonic excitation. The test results exhibit very good agreement with the analytical analyses and numerical simulations for a range of resistive loads and input excitation levels.

  19. Task-oriented lossy compression of magnetic resonance images

    Science.gov (United States)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  20. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    Science.gov (United States)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

    The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on prediction of the noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the parameters of the transformations the image will be subject to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode; however, it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on a large set of real-life color images acquired by digital cameras and are shown to provide a more than two-fold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.

  1. Symmetrical compression distance for arrhythmia discrimination in cloud-based big-data services.

    Science.gov (United States)

    Lillo-Castellano, J M; Mora-Jiménez, I; Santiago-Mozos, R; Chavarría-Asso, F; Cano-González, A; García-Alberola, A; Rojo-Álvarez, J L

    2015-07-01

    The current development of cloud computing is completely changing the paradigm of knowledge extraction from huge databases. An example of this technology in the cardiac arrhythmia field is the SCOOP platform, a national-level scientific cloud-based big data service for implantable cardioverter defibrillators. In this scenario, we propose a new methodology for automatic classification of intracardiac electrograms (EGMs) in a cloud computing system, designed for minimal signal preprocessing. A new compression-based similarity measure (CSM) is created for low computational burden, the so-called weighted fast compression distance, which provides better performance when compared with other CSMs in the literature. Using simple machine learning techniques, a set of 6848 EGMs extracted from the SCOOP platform were classified into seven cardiac arrhythmia classes and one noise class, reaching nearly 90% accuracy when previous patient arrhythmia information was available and 63% otherwise, hence exceeding in all cases the classification provided by the majority class. Results show that this methodology can be used as a high-quality cloud computing service, providing support to physicians for improving knowledge of patient diagnosis.
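
    The family of compression-based similarity measures is easy to illustrate with the classic normalized compression distance (NCD); the paper's weighted fast compression distance is a refinement of this idea and is not reproduced here (Python stdlib zlib; the byte strings are toy stand-ins for EGM signals):

    ```python
    import zlib

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance: a standard compression-based
        similarity measure. Values near 0 indicate similar inputs."""
        cx, cy, cxy = (len(zlib.compress(s, 9)) for s in (x, y, x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

    a = b"\x01\x02\x03\x04" * 200                     # toy "EGM" byte streams
    b_sig = b"\x01\x02\x03\x04" * 190 + b"\x09" * 40  # similar to a
    c = bytes(range(256)) * 3                          # dissimilar
    print(ncd(a, b_sig), ncd(a, c))                    # similar pair scores lower
    ```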

  2. The stability of clay using mount Sinabung ash with unconfined compression test (uct) value

    Science.gov (United States)

    Puji Hastuty, Ika; Roesyanto; Hutauruk, Ronny; Simanjuntak, Oberlyn

    2018-03-01

    The soil has an important role as a highway embankment material (subgrade). Soil conditions differ greatly from one location to another, because soil is scientifically a very complex and varied material, and soil found in the field is often very loose or very soft and therefore unsuitable for construction; such soil should be stabilized. The additive materials commonly used for soil stabilization include cement, lime, fly ash, rice husk ash, and others. This experiment uses the addition of volcanic ash. The purpose of this study was to determine the index properties and the maximum compressive strength value from the Unconfined Compression Test due to the addition of volcanic ash as a stabilizing agent, along with the optimum level of the addition. The results showed that the original soil sample has a water content of 14.52%, a specific gravity of 2.64, a liquid limit of 48.64% and a plasticity index of 29.82%. The compressive strength value is 1.40 kg/cm2. According to the USCS classification, the soil samples are categorized as the (CL) type, while based on the AASHTO classification, the soil samples belong to type A-7-6. After the soil was stabilized with various proportions of volcanic ash, it can be concluded that the maximum value occurs at a mixture variation of 11% volcanic ash, with an Unconfined Compressive Strength value of 2.32 kg/cm2.

  3. Fusion of Thresholding Rules During Wavelet-Based Noisy Image Compression

    Directory of Open Access Journals (Sweden)

    Bekhtin Yury

    2016-01-01

    Full Text Available A new method for combining semisoft thresholding rules during wavelet-based data compression of images with multiplicative noise is suggested. The method chooses the best thresholding rule and the threshold value using the proposed criteria, which provide the best nonlinear approximations and take into consideration the errors of quantization. The results of computer modeling have shown that the suggested method provides relatively good image quality after restoration in the sense of criteria such as PSNR, SSIM, etc.
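
    The thresholding rules being fused can be written down directly. A sketch of hard, soft, and semisoft ("firm") thresholding with numpy (the threshold values are illustrative; the paper's selection criteria are not reproduced):

    ```python
    import numpy as np

    def hard(w, t):
        return w * (np.abs(w) > t)

    def soft(w, t):
        return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

    def semisoft(w, t1, t2):
        """Firm ("semisoft") thresholding: zero below t1, identity above t2,
        linear interpolation in between. Reduces to hard as t2 -> t1 and to
        soft as t2 -> infinity."""
        mid = np.sign(w) * t2 * (np.abs(w) - t1) / (t2 - t1)
        out = np.where(np.abs(w) <= t1, 0.0, mid)
        return np.where(np.abs(w) > t2, w, out)

    w = np.linspace(-3, 3, 7)   # toy wavelet coefficients
    print(semisoft(w, 1.0, 2.0))
    ```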

  4. Poor chest compression quality with mechanical compressions in simulated cardiopulmonary resuscitation: a randomized, cross-over manikin study.

    Science.gov (United States)

    Blomberg, Hans; Gedeborg, Rolf; Berglund, Lars; Karlsten, Rolf; Johansson, Jakob

    2011-10-01

    Mechanical chest compression devices are being implemented as an aid in cardiopulmonary resuscitation (CPR), despite a lack of evidence of improved outcome. This manikin study evaluates the CPR performance of ambulance crews who had had a mechanical chest compression device implemented in their routine clinical practice 8 months previously. The objectives were to evaluate time to first defibrillation, no-flow time, and to estimate the quality of compressions. The performance of 21 ambulance crews (ambulance nurse and emergency medical technician) with the authorization to perform advanced life support was studied in an experimental, randomized cross-over study in a manikin setup. Each crew performed two identical CPR scenarios, with and without the aid of the mechanical compression device LUCAS. A computerized manikin was used for data sampling. There were no substantial differences in time to first defibrillation or no-flow time until first defibrillation. However, the fraction of adequate compressions in relation to total compressions was remarkably low in LUCAS-CPR (58%) compared to manual CPR (88%) (95% confidence interval for the difference: 13-50%). Only 12 out of the 21 ambulance crews (57%) applied the mandatory stabilization strap on the LUCAS device. The use of a mechanical compression aid was not associated with substantial differences in time to first defibrillation or no-flow time in the early phase of CPR. However, constant but poor chest compressions due to failure in recognizing and correcting a malposition of the device may counteract a potential benefit of mechanical chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  5. Compressed sensing with cyclic-S Hadamard matrix for terahertz imaging applications

    Science.gov (United States)

    Ermeydan, Esra Şengün; Çankaya, Ilyas

    2018-01-01

    Compressed Sensing (CS) with a cyclic-S Hadamard matrix is proposed for single-pixel imaging applications in this study. In the single-pixel imaging scheme, N = r · c samples should be taken for an r × c pixel image, where · denotes multiplication. CS is a popular technique claiming that sparse signals can be reconstructed from fewer samples than the Nyquist rate requires. Therefore, to solve the slow data acquisition problem in Terahertz (THz) single-pixel imaging, CS is a good candidate. However, changing the mask for each measurement is a challenging problem, since there are no commercial Spatial Light Modulators (SLMs) for the THz band yet; therefore circular masks are suggested, so that for each measurement shifting by one or two columns is enough to change the mask. The CS masks are designed using cyclic-S matrices based on the Hadamard transform for 9 × 7 and 15 × 17 pixel images within the framework of this study. The 50% compressed images are reconstructed using the total-variation-based TVAL3 algorithm. Matlab simulations demonstrate that cyclic-S matrices can be used for single-pixel imaging based on CS. The circular masks have the advantage of reducing the mechanical SLM to a single sliding strip, whereas CS helps to reduce acquisition time and energy, since it allows the image to be reconstructed from fewer samples.
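
    The mechanical convenience of cyclic masks is easy to see in code: every measurement row is a cyclic shift of a single base row, so the physical mask only slides between shots. A numpy sketch (a random binary row stands in for a true cyclic-S matrix derived from a Hadamard matrix):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    r, c = 9, 7
    n = r * c                     # 63 pixels per image

    # One binary base row; every other mask is a cyclic shift of it.
    base = (rng.random(n) < 0.5).astype(float)
    m = n // 2                    # ~50% compression
    Phi = np.stack([np.roll(base, k) for k in range(m)])

    img = np.zeros((r, c)); img[3:6, 2:5] = 1.0      # toy scene
    y = Phi @ img.ravel()                            # one detector value per mask
    print(Phi.shape, y.shape)                        # (31, 63) measurements
    ```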

  6. Applications of wavelet-based compression to multidimensional earth science data

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1993-01-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.

  8. High Bit-Depth Medical Image Compression With HEVC.

    Science.gov (United States)

    Parikh, Saurin S; Ruiz, Damian; Kalva, Hari; Fernandez-Escribano, Gerardo; Adzic, Velibor

    2018-03-01

    Efficient storage and retrieval of medical images has a direct impact on reducing costs and improving access in cloud-based health care services. JPEG 2000 is currently the commonly used compression format for medical images shared using the DICOM standard. However, new formats such as high efficiency video coding (HEVC) can provide better compression efficiency compared to JPEG 2000. Furthermore, JPEG 2000 is not suitable for efficiently storing image series and 3-D imagery. Using HEVC, a single format can support all forms of medical images. This paper presents the use of HEVC for diagnostically acceptable medical image compression, focusing on compression efficiency compared to JPEG 2000. Diagnostically acceptable lossy compression and the complexity of high bit-depth medical image compression are studied. Based on an established medically acceptable compression range for JPEG 2000, this paper establishes an acceptable HEVC compression range for medical imaging applications. Experimental results show that using HEVC can increase the compression performance, compared to JPEG 2000, by over 54%. Along with this, a new method for reducing the computational complexity of HEVC encoding for medical images is proposed. Results show that HEVC intra encoding complexity can be reduced by over 55% with a negligible increase in file size.

  10. Highly Efficient Compression Algorithms for Multichannel EEG.

    Science.gov (United States)

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
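
    The benefit of prediction before entropy coding, common to several of the compared schemes, can be demonstrated in a few lines (numpy and stdlib zlib assumed; the channel below is a smooth synthetic stand-in for EEG):

    ```python
    import zlib
    import numpy as np

    rng = np.random.default_rng(7)
    # Smooth synthetic "EEG" channel, quantized to 16-bit integers.
    t = np.arange(20_000)
    eeg = 200 * np.sin(2 * np.pi * t / 250) + 20 * rng.standard_normal(t.size)
    sig = eeg.astype(np.int16)

    # First-order linear predictor: encode residuals e[n] = x[n] - x[n-1]
    # (lossless; the signal is recoverable by a cumulative sum).
    resid = np.diff(sig, prepend=sig[:1])

    for name, arr in [("raw", sig), ("residual", resid)]:
        raw = arr.tobytes()
        print(name, "ratio:", len(raw) / len(zlib.compress(raw, 6)))
    ```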

  11. Crack initiation and fracture features of Fe–Co–B–Si–Nb bulk metallic glass during compression

    Directory of Open Access Journals (Sweden)

    S. Lesz

    2016-01-01

    Full Text Available The aim of the paper was to investigate crack initiation and the fracture features developed during compression of an Fe-based bulk metallic glass (BMG). Fe-based BMGs have received great attention as a new class of structural material owing to their excellent properties (e.g. high strength and high elasticity) and low cost. However, the poor ductility and brittle fracture exhibited by BMGs limit their structural application. At room temperature, BMGs fail catastrophically without appreciable plastic deformation under tension, and only very limited plastic deformation is observed under compression or bending. Hence a good understanding of the crack initiation and fracture morphology of Fe-based BMGs after compression is of much importance for designing high-performance BMGs. The raw materials used in this experiment for the production of the BMGs were pure Fe, Co and Nb metals and the nonmetallic elements Si and B. The Fe–Co–B–Si–Nb alloy was cast as rods with three different diameters; the structure of the investigated BMG rods is amorphous. Mechanical properties (Young's modulus E, compressive stress σc, elastic strain ε, unitary elastic strain energy Uu) were measured in a compression test, which indicates that the rods of the Fe-based alloy exhibit high mechanical strength. The crack initiation and fracture morphology after compression of the Fe-based BMG were examined with a scanning electron microscope (SEM). The fracture morphology of the rods differed across the cross section. Two characteristic features of the compressive fracture morphologies of BMGs were observed: one is the smooth region, the other is the vein pattern. The veins on the compressive fracture surface have an obvious direction resulting from the initial displacement of the sample along shear bands; this direction follows the direction of the displacement of the material. The formation of veins on the

  12. The Milky Way: paediatric milk-based dispersible tablets prepared by direct compression - a proof-of-concept study.

    Science.gov (United States)

    Orubu, Samuel E F; Hobson, Nicholas J; Basit, Abdul W; Tuleu, Catherine

    2017-04-01

    Dispersible tablets are proposed by the World Health Organization as the preferred paediatric formulation. It was hypothesised that tablets made from a powdered milk-base that disperse in water to form suspensions resembling milk might be a useful platform to improve acceptability in children. Milk-based dispersible tablets containing various types of powdered milk and infant formulae were formulated. The influence of milk type and content on placebo tablet properties was investigated using a design-of-experiments approach. Responses measured included friability, crushing strength and disintegration time. Additionally, the influence of compression force on the tablet properties of a model formulation was studied by compaction simulation. Disintegration times increased as milk content increased. Compaction simulation studies showed that compression force influenced disintegration time. These results suggest that the milk content, rather than type, and compression force were the most important determinants of disintegration. Up to 30% milk could be incorporated to produce 200 mg 10-mm flat-faced placebo tablets by direct compression disintegrating within 3 min in 5-10 ml of water, which is a realistic administration volume in children. The platform could accommodate 30% of a model active pharmaceutical ingredient (caffeine citrate). © 2016 Royal Pharmaceutical Society.

  13. Insulation interlaminar shear strength testing with compression and irradiation

    International Nuclear Information System (INIS)

    McManamy, T.J.; Brasier, J.E.; Snook, P.

    1989-01-01

    The Compact Ignition Tokamak (CIT) project identified the need for research and development for the insulation to be used in the toroidal field coils. The requirements included tolerance to a combination of high compression and shear and a high radiation dose. Samples of laminate-type sheet material were obtained from commercial vendors. The materials included various combinations of epoxy, polyimide, E-glass, S-glass, and T-glass. The T-glass was in the form of a three-dimensional weave. The first tests were with 50 × 25 × 1 mm samples. These materials were loaded in compression and then to failure in shear. At 345-MPa compression, the interlaminar shear strength was generally in the range of 110 to 140 MPa for the different materials. A smaller sample configuration was developed for irradiation testing. The data before irradiation were similar to those for the larger samples but approximately 10% lower. Limited fatigue testing was also performed by cycling the shear load. No reduction in shear strength was found after 50,000 cycles at 90% of the failure stress. Because of space limitations, only three materials were chosen for irradiation: two polyimide systems and one epoxy system. All used boron-free glass. The small shear/compression samples and some flexure specimens were irradiated to 4 × 10^9 and 2 × 10^10 rad in the Advanced Technology Reactor at Idaho National Engineering Laboratory. A lead shield was used to ensure that the majority of the dose was from neutrons. The shear strength with compression before and after irradiation at the lower dose was determined. Flexure strength and the results from irradiation at the higher dose level will be available in the near future. 7 refs., 7 figs., 2 tabs

  14. STRAIN LOCALIZATION PECULIARITIES AND DISTRIBUTION OF ACOUSTIC EMISSION SOURCES IN ROCK SAMPLES TESTED BY UNIAXIAL COMPRESSION AND EXPOSED TO ELECTRIC PULSES

    Directory of Open Access Journals (Sweden)

    V. A. Mubassarova

    2014-01-01

    Full Text Available Results of uniaxial compression tests of rock samples in electromagnetic fields are presented. The experiments were performed in the Laboratory of Basic Physics of Strength, Institute of Continuous Media Mechanics, Ural Branch of RAS (ICMM). Deformation of samples was studied, and acoustic emission (AE) signals were recorded. During the tests, loads varied by stages. Specimens of granite from the Kainda deposit in Kyrgyzstan (similar to samples tested at the Research Station of RAS, hereafter RS RAS) were subjected to electric pulses at specified levels of compression load. The electric pulses were supplied galvanically; two graphite electrodes were fixed at opposite sides of each specimen. The multichannel Amsy-5 Vallen System was used to record AE signals in the six-channel mode, which provided for determination of spatial locations of AE sources. Strain of the specimens was studied with application of original methods of strain computation based on analyses of optical images of deformed specimen surfaces in the LaVISION Strain Master System. Acoustic emission data were interpreted on the basis of analyses of the AE activity in time, i.e. the number of AE events per second, and analyses of the signals' energy and the locations of AE sources, i.e. defects. The experiment was conducted at ICMM with a set of equipment with advanced diagnostic capabilities (as compared to earlier experiments described in [Zakupin et al., 2006a, 2006b; Bogomolov et al., 2004]). It can provide new information on properties of acoustic emission and deformation responses of loaded rock specimens to external electric pulses. The research task also included verification of the reproducibility of the effect (AE activity responding to electric pulses) revealed earlier in studies conducted at RS RAS. In terms of the principle of randomization, such verification is methodologically significant as new effects, i.e. physical laws, can be considered

  15. Distributed Similarity based Clustering and Compressed Forwarding for wireless sensor networks.

    Science.gov (United States)

    Arunraja, Muruganantham; Malathi, Veluchamy; Sakthivel, Erulappan

    2015-11-01

    Wireless sensor networks are engaged in various data gathering applications. The major bottleneck in wireless data gathering systems is the finite energy of the sensor nodes. By conserving the on-board energy, the life span of a wireless sensor network can be well extended. Since data communication is the dominant energy-consuming activity of a wireless sensor network, data reduction serves best in conserving nodal energy. Spatial and temporal correlation among the sensor data is exploited to reduce the data communications. Forming clusters of nodes with similar data is an effective way to exploit spatial correlation among neighboring sensors, while sending only a subset of the data and estimating the rest from it is the contemporary way of exploiting temporal correlation. In Distributed Similarity based Clustering and Compressed Forwarding for wireless sensor networks, we construct data-similar iso-clusters with minimal communication overhead. The intra-cluster communication is reduced using an adaptive normalized least-mean-squares based dual prediction framework, and the cluster head reduces the inter-cluster data payload using a lossless compressive forwarding technique. The proposed work achieves significant data reduction in both the intra-cluster and the inter-cluster communications while maintaining the accuracy of the collected data. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    Science.gov (United States)

    Hortos, William S.

    2008-04-01

    The proposed distributed wavelet-based algorithms compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content, as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at

  17. P. W. Bridgman's contributions to the foundations of shock compression of condensed matter

    Energy Technology Data Exchange (ETDEWEB)

    Nellis, W J, E-mail: nellis@physics.harvard.ed [Department of Physics, Harvard University, Cambridge MA 02138 (United States)

    2010-03-01

    Based on his 50-year career in static high-pressure research, P. W. Bridgman (PWB) is the father of modern high-pressure physics. What is not generally recognized is that Bridgman was also intimately connected with establishing shock compression as a scientific tool, and he predicted major events in shock research that occurred up to 40 years after his death. In 1956 the first phase transition under shock compression was reported in Fe at 13 GPa (130 kbar). PWB said a phase transition could not occur in a ~microsecond, thus setting off a controversy. The scientific legitimacy of shock compression resulted 5 years later when static high-pressure researchers confirmed with x-ray diffraction the existence of epsilon-Fe. Once PWB accepted the fact that shock waves generated with chemical explosives were a valid scientific tool, he immediately realized that substantially higher pressures would be achieved with nuclear explosives. He included his ideas for achieving higher pressures in articles published a few years after his death. L. V. Altshuler eventually read Bridgman's articles and pursued the idea of using nuclear explosives to generate super high pressures, an idea that has since morphed into today's giant lasers. PWB also anticipated combining static and shock methods, which today is done with pre-compression of a soft sample in a diamond anvil cell followed by laser-driven shock compression. One variation of that method is the reverberating-shock technique, in which the first shock pre-compresses a soft sample and subsequent reverberations isentropically compress the first-shocked state.

  18. A compressed sensing based method with support refinement for impulse noise cancelation in DSL

    KAUST Repository

    Quadeer, Ahmed Abdul

    2013-06-01

    This paper presents a compressed sensing based method to suppress impulse noise in digital subscriber line (DSL). The proposed algorithm exploits the sparse nature of the impulse noise and utilizes the carriers, already available in all practical DSL systems, for its estimation and cancelation. Specifically, compressed sensing is used for a coarse estimate of the impulse position, an a priori information based maximum a posteriori probability (MAP) metric for its refinement, followed by least squares (LS) or minimum mean square error (MMSE) estimation of the impulse amplitudes. Simulation results show that the proposed scheme achieves a higher rate compared to other known sparse estimation algorithms in the literature. The paper also demonstrates the superior performance of the proposed scheme compared to the ITU-T G992.3 standard that utilizes RS-coding for impulse noise refinement in DSL signals. © 2013 IEEE.

  19. On Compressed Sensing and the Estimation of Continuous Parameters From Noisy Observations

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2012-01-01

    Compressed sensing (CS) has in recent years become a very popular way of sampling sparse signals. This sparsity is measured with respect to some known dictionary consisting of a finite number of atoms. Most models for real world signals, however, are parametrised by continuous parameters corresponding to a dictionary with an infinite number of atoms. Examples of such parameters are the temporal and spatial frequency. In this paper, we analyse how CS affects the estimation performance of any unbiased estimator when we assume such infinite dictionaries. We base our analysis on the Cramer

  20. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Full Text Available Abstract Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.

  1. About a method for compressing x-ray computed microtomography data

    Science.gov (United States)

    Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš

    2018-04-01

    The management of scientific data is of high importance especially for experimental techniques that produce big data volumes. Such a technique is x-ray computed tomography (CT) and its community has introduced advanced data formats which allow for better management of experimental data. Rather than the organization of the data and the associated meta-data, the main topic on this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy) and studies images acquired from various types of samples. This study covers parallel beam geometry, but it could be easily extended to a cone-beam one. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques and does so by applying it to experimental data. Beyond the methodology framework, this study presents and examines the use of JPEG-XR in combination with HDF5 and TIFF formats providing insights and strategies on data compression and image quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears as a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.

  2. Quantization Distortion in Block Transform-Compressed Data

    Science.gov (United States)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.

  3. Reliability-Based Approach for the Determination of the Required Compressive Strength of Concrete in Mix Design

    OpenAIRE

    Okasha , Nader M

    2017-01-01

    International audience; Concrete is recognized as the second most consumed product in our modern life after water. The variability in concrete properties is inevitable. The concrete mix is designed for a compressive strength that is different from, typically higher than, the value specified by the structural designer. Ways to calculate the compressive strength to be used in the mix design are provided in building and structural codes. These ways are all based on criteria related purely and on...

  4. Comparative data compression techniques and multi-compression results

    International Nuclear Information System (INIS)

    Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H

    2013-01-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data size, the better the transmission speed and the more time saved; in communication, we always want to transmit data efficiently and noise-free. This paper provides some techniques for lossless compression of text-type data, together with comparative results for multiple versus single compression, which help to identify the better compression output and to develop compression algorithms.
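
    The multiple-versus-single compression comparison can be reproduced with nothing more than the Python standard library; the snippet below compresses the same buffer once, then a second time with the same and with a different algorithm. The input data and compression levels are arbitrary, and results vary with the input, but the typical pattern in such comparisons is visible.

```python
import bz2
import zlib

data = b"business records repeat repeat repeat " * 2000

once = zlib.compress(data, 9)
twice = zlib.compress(once, 9)    # recompressing already-compressed output
mixed = bz2.compress(once, 9)     # a second pass with a different algorithm

print("raw:", len(data), "once:", len(once),
      "twice:", len(twice), "mixed:", len(mixed))
# Typically len(twice) >= len(once): the first pass removes the redundancy,
# so a second pass has little left to exploit and may even add overhead.
```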

  5. Symmetric and asymmetric hybrid cryptosystem based on compressive sensing and computer generated holography

    Science.gov (United States)

    Ma, Lihong; Jin, Weimin

    2018-01-01

    A novel symmetric and asymmetric hybrid optical cryptosystem is proposed based on compressive sensing combined with computer generated holography. In this method there are six encryption keys, among which two decryption phase masks are different from the two random phase masks used in the encryption process; therefore, the encryption system has the features of both symmetric and asymmetric cryptography. On the other hand, computer generated holography can flexibly digitalize the encrypted information, compressive sensing can significantly reduce the data volume, and the final encrypted image is a real-valued function obtained by phase truncation; together these properties favor the storage and transmission of the encrypted data. The experimental results demonstrate that the proposed encryption scheme boosts security and has high robustness against noise and occlusion attacks.

  6. Low-latency video transmission over high-speed WPANs based on low-power video compression

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Ann

    2010-01-01

    This paper presents latency-constrained video transmission over high-speed wireless personal area networks (WPANs). Low-power video compression is proposed as an alternative to uncompressed video transmission. A video source rate control based on MINMAX quality criteria is introduced. Practical...

  7. Fast lossless compression via cascading Bloom filters.

    Science.gov (United States)

    Rozov, Roye; Shamir, Ron; Halperin, Eran

    2014-01-01

    Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space
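
    To make the alignment-free idea concrete, here is a minimal Bloom filter sketch in Python: reads are hashed into the filter at encode time, and decoding queries the filter with read-length windows of the reference genome. This is a toy, not BARCODE itself; the filter size, hash count, and the cascade handling of false positives are simplified assumptions.

```python
import hashlib

class BloomFilter:
    def __init__(self, n_bits=1 << 20, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item):
        # Derive n_hashes bit positions from salted SHA-256 digests.
        for i in range(self.n_hashes):
            h = hashlib.sha256(item + i.to_bytes(2, "big")).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

# Encode: hash every read into the filter (no alignment performed).
bf = BloomFilter()
reads = [b"ACGTACGTAC", b"TTGGCCAATT"]
for r in reads:
    bf.add(r)

# Decode: slide a read-length window over the reference and query the filter;
# hits recover the stored reads (a real cascade uses further filters to weed
# out false positives).
reference = b"GGACGTACGTACGGTTGGCCAATTGG"
k = 10
recovered = {reference[i:i + k] for i in range(len(reference) - k + 1)
             if reference[i:i + k] in bf}
print(recovered)
```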

  8. Multi-scale simulations of field ion microscopy images—Image compression with and without the tip shank

    International Nuclear Information System (INIS)

    Niewieczerzał, Daniel; Oleksy, Czesław; Szczepkowicz, Andrzej

    2012-01-01

    Multi-scale simulations of field ion microscopy images of faceted and hemispherical samples are performed using a 3D model. It is shown that faceted crystals have compressed images even in cases with no shank. The presence of the shank increases the compression of images of faceted crystals quantitatively in the same way as for hemispherical samples. It is hereby proven that the shank does not influence significantly the local, relative variations of the magnification caused by the atomic-scale structure of the sample. -- Highlights: ► Multi-scale simulations of field ion microscopy images. ► Faceted and hemispherical samples with and without shank. ► Shank causes overall compression, but does not influence local magnification effects. ► Image compression linearly increases with the shank angle. ► Shank changes compression of image of faceted tip in the same way as for smooth sample.

  9. Fault Diagnosis for Hydraulic Servo System Using Compressed Random Subspace Based ReliefF

    Directory of Open Access Journals (Sweden)

    Yu Ding

    2018-01-01

    Full Text Available Hydraulic servo systems play an important role in electromechanical systems and are crucial to mechanical equipment such as engineering machinery, metallurgical machinery and ships. Fault diagnosis based on monitoring and sensory signals plays an important role in avoiding catastrophic accidents and enormous economic losses. This study presents a fault diagnosis scheme for the hydraulic servo system using a compressed random subspace based ReliefF (CRSR) method. From the point of view of feature selection, the scheme utilizes the CRSR method to determine the most stable feature combination that simultaneously contains the most adequate information. Based on the feature selection structure of ReliefF, CRSR employs feature integration rules in the compressed domain and substitutes information entropy and fuzzy membership for the traditional distance measurement index. The proposed CRSR method is able to enhance the robustness of the feature information against interference while selecting the feature combination with balanced information expressing ability. To demonstrate the effectiveness of the proposed CRSR method, a hydraulic servo system joint simulation model is constructed with HyPneu and Simulink, and three fault modes are injected to generate the validation data.

  10. Information theoretic bounds for compressed sensing in SAR imaging

    International Nuclear Information System (INIS)

    Jingxiong, Zhang; Ke, Yang; Jianzhong, Guo

    2014-01-01

    Compressed sensing (CS) is a new framework for sampling and reconstructing sparse signals from measurements significantly fewer than those prescribed by Nyquist rate in the Shannon sampling theorem. This new strategy, applied in various application areas including synthetic aperture radar (SAR), relies on two principles: sparsity, which is related to the signals of interest, and incoherence, which refers to the sensing modality. An important question in CS-based SAR system design concerns sampling rate necessary and sufficient for exact or approximate recovery of sparse signals. In the literature, bounds of measurements (or sampling rate) in CS have been proposed from the perspective of information theory. However, these information-theoretic bounds need to be reviewed and, if necessary, validated for CS-based SAR imaging, as there are various assumptions made in the derivations of lower and upper bounds on sub-Nyquist sampling rates, which may not hold true in CS-based SAR imaging. In this paper, information-theoretic bounds of sampling rate will be analyzed. For this, the SAR measurement system is modeled as an information channel, with channel capacity and rate-distortion characteristics evaluated to enable the determination of sampling rates required for recovery of sparse scenes. Experiments based on simulated data will be undertaken to test the theoretic bounds against empirical results about sampling rates required to achieve certain detection error probabilities
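
    For reference, the sufficient condition most such analyses start from is the standard CS measurement bound from the literature (with a generic constant $C$), which the information-theoretic treatments above tighten or relax under their respective assumptions:

```latex
% k-sparse scene of dimension n, m random measurements, generic constant C:
\[
  m \;\ge\; C \, k \, \log\!\left(\frac{n}{k}\right)
\]
% Rate-distortion arguments then refine m per target detection-error level.
```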

  11. MAP-MRF-Based Super-Resolution Reconstruction Approach for Coded Aperture Compressive Temporal Imaging

    Directory of Open Access Journals (Sweden)

    Tinghua Zhang

    2018-02-01

    Full Text Available Coded Aperture Compressive Temporal Imaging (CACTI) can afford low-cost temporal super-resolution (SR), but limits are imposed by noise and compression ratio on reconstruction quality. To utilize inter-frame redundant information from multiple observations and sparsity in multi-transform domains, a robust reconstruction approach based on a maximum a posteriori probability and Markov random field (MAP-MRF) model for CACTI is proposed. The proposed approach adopts a weighted 3D neighbor system (WNS) and the coordinate descent method to perform joint estimation of model parameters, to achieve robust super-resolution reconstruction. The proposed multi-reconstruction algorithm considers both total variation (TV) and the ℓ2,1 norm in the wavelet domain to address the minimization problem for compressive sensing, and solves it using an accelerated generalized alternating projection algorithm. The weighting coefficient for different regularizations and frames is resolved by the motion characteristics of pixels. The proposed approach can provide high visual quality in the foreground and background of a scene simultaneously and enhance the fidelity of the reconstruction results. Simulation results have verified the efficacy of our new optimization framework and the proposed reconstruction approach.

  12. Fixed-Rate Compressed Floating-Point Arrays.

    Science.gov (United States)

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
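
    Stripped of the decorrelating transform and embedded coding, the fixed-rate idea reduces to spending a fixed bit budget per block regardless of content. The NumPy sketch below shows only that idea, with an illustrative block size and bit depth; it is not the paper's codec.

```python
import numpy as np

def compress_block(block, bits=12):
    """Toy fixed-rate scheme: store one shared scale plus `bits`-bit integers
    per value. Real fixed-rate codecs add a decorrelating block transform and
    embedded coding; this only demonstrates the constant-size output."""
    scale = np.max(np.abs(block)) or 1.0
    q = np.round(block / scale * (2 ** (bits - 1) - 1)).astype(np.int32)
    return scale, q                     # same size for every block

def decompress_block(scale, q, bits=12):
    return q.astype(np.float64) / (2 ** (bits - 1) - 1) * scale

x = np.random.default_rng(2).standard_normal(4)   # one 4-value block (d = 1)
s, q = compress_block(x)
# The reconstruction error is bounded by half a quantization step: near-lossless.
print(np.max(np.abs(x - decompress_block(s, q))))
```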

  13. Influence of compressive stress in TGO layer on impedance spectroscopy from TBC coatings

    Energy Technology Data Exchange (ETDEWEB)

    Kang, To; Zhang, Jianhai; Yuan, Maodan; Song, Sungjin; Kim, Hakjoon; Kim, Yongseok; Seok, Changsung [Sungkyunkwan Univ., Suwon (Korea, Republic of)

    2013-02-15

    Impedance spectroscopy is a non-destructive evaluation (NDE) method first proposed and developed for evaluating TGO layers under compressive stress inside thermally degraded plasma-sprayed thermal barrier coatings (PS TBCs). A Bode plot (phase angle vs. frequency) was used to investigate the electrical response of the TGO layer. In our experimental study, the phase angle of the Bode plot is sensitive for detecting TGO layers while compressive stress is applied to the thermal barrier coatings. It is difficult to detect TGO layers in samples isothermally aged for 100 h and 200 h without compressive stress, whereas a substantial change of phase was observed in these samples with compressive stress. The frequency shift of the phase angle and the change of the phase angle are also observed in samples isothermally aged for more than 400 h.

  14. A compressibility based model for predicting the tensile strength of directly compressed pharmaceutical powder mixtures.

    Science.gov (United States)

    Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J

    2017-10-05

    A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design. Copyright © 2017 Elsevier B.V. All rights reserved.
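
    Although the abstract does not give the model's equations, the stated ingredients suggest a volume-fraction-weighted combination of the components' pressure-dependent properties. The schematic form below is purely an illustrative reading, not the authors' model:

```latex
\[
  \sigma_{\mathrm{mix}}(P) \;=\; \sum_i \phi_i(P)\,\sigma_i(P),
  \qquad \sum_i \phi_i(P) \;=\; 1 .
\]
% phi_i(P): volumetric occupancy of powder i at compaction pressure P
% sigma_i(P): tensile-strength contribution of powder i at that pressure
```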

  15. Efficient Bayesian Compressed Sensing-based Channel Estimation Techniques for Massive MIMO-OFDM Systems

    OpenAIRE

    Al-Salihi, Hayder Qahtan Kshash; Nakhai, Mohammad Reza

    2017-01-01

    Efficient and highly accurate channel state information (CSI) at the base station (BS) is essential to achieve the potential benefits of massive multiple input multiple output (MIMO) systems. However, the accuracy attainable in practice is limited due to the problem of pilot contamination. It has recently been shown that compressed sensing (CS) techniques can address the pilot contamination problem. However, CS-based channel estimation requires prior knowledge of channel sp...

  16. Perceptual Image Compression in Telemedicine

    Science.gov (United States)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications
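
    The DCT-quantization mechanism these techniques tune can be sketched in a few lines: transform an 8×8 block, divide by a quantization matrix whose step sizes grow with spatial frequency, and round. The matrix formula below is a deliberately crude stand-in for the psychophysically derived one described above, and `viewing_factor` is a hypothetical knob, not the paper's parameterization.

```python
import numpy as np
from scipy.fft import dctn, idctn

def quant_matrix(viewing_factor=1.0):
    """Illustrative DCT quantization matrix: the step size grows with spatial
    frequency, scaled by a single viewing-condition factor. The actual formula
    in the paper also accounts for display resolution and brightness."""
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    return 8.0 + viewing_factor * 4.0 * (u + v)

block = np.random.default_rng(3).integers(0, 255, (8, 8)).astype(float)
Q = quant_matrix(viewing_factor=2.0)
coeffs = dctn(block, norm="ortho")
quantised = np.round(coeffs / Q)            # this is what gets entropy coded
restored = idctn(quantised * Q, norm="ortho")
print("mean abs error:", np.abs(block - restored).mean())
```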

  17. Sparse representations and compressive sensing for imaging and vision

    CERN Document Server

    Patel, Vishal M

    2013-01-01

    Compressed sensing or compressive sensing is a new concept in signal processing where one measures a small number of non-adaptive linear combinations of the signal. These measurements are usually much smaller in number than the samples that define the signal. From this small number of measurements, the signal is then reconstructed by a non-linear procedure. Compressed sensing has recently emerged as a powerful tool for efficiently processing data in non-traditional ways. In this book, we highlight some of the key mathematical insights underlying sparse representation and compressed sensing and illustrate the role of these theories in classical vision, imaging and biometrics problems.

  18. Compression of FASTQ and SAM format sequencing data.

    Directory of Open Access Journals (Sweden)

    James K Bonfield

    Full Text Available Storage and transmission of the data produced by modern DNA sequencing instruments has become a major concern, which prompted the Pistoia Alliance to pose the SequenceSqueeze contest for compression of FASTQ files. We present several compression entries from the competition, Fastqz and Samcomp/Fqzcomp, including the winning entry. These are compared against existing algorithms for both reference based compression (CRAM, Goby) and non-reference based compression (DSRC, BAM), as well as other recently published competition entries (Quip, SCALCE). The tools are shown to be the new Pareto frontier for FASTQ compression, offering state of the art ratios at affordable CPU costs. All programs are freely available on SourceForge. Fastqz: https://sourceforge.net/projects/fastqz/, fqzcomp: https://sourceforge.net/projects/fqzcomp/, and samcomp: https://sourceforge.net/projects/samcomp/.

  19. A test data compression scheme based on irrational numbers stored coding.

    Science.gov (United States)

    Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan

    2014-01-01

    Test data volume has already become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS), is presented. To compress test data efficiently, the test data is converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for converting floating-point numbers to irrational numbers precisely is given. Experimental results for some ISCAS 89 benchmarks show that the compression effect of the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.

  20. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Science.gov (United States)

    2010-07-01

    30 CFR 75.1730 — Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  1. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  2. Design Concepts of Polycarbonate-Based Intervertebral Lumbar Cages: Finite Element Analysis and Compression Testing

    Directory of Open Access Journals (Sweden)

    J. Obedt Figueroa-Cavazos

    2016-01-01

    Full Text Available This work explores the viability of 3D printed intervertebral lumbar cages based on biocompatible polycarbonate (PC-ISO® material. Several design concepts are proposed for the generation of patient-specific intervertebral lumbar cages. The 3D printed material achieved compressive yield strength of 55 MPa under a specific combination of manufacturing parameters. The literature recommends a reference load of 4,000 N for design of intervertebral lumbar cages. Under compression testing conditions, the proposed design concepts withstand between 7,500 and 10,000 N of load before showing yielding. Although some stress concentration regions were found during analysis, the overall viability of the proposed design concepts was validated.

  3. Inelastic response of silicon to shock compression.

    Science.gov (United States)

    Higginbotham, A; Stubley, P G; Comley, A J; Eggert, J H; Foster, J M; Kalantar, D H; McGonegle, D; Patel, S; Peacock, L J; Rothman, S D; Smith, R F; Suggit, M J; Wark, J S

    2016-04-13

    The elastic and inelastic response of [001] oriented silicon to laser compression has been a topic of considerable discussion for well over a decade, yet there has been little progress in understanding the basic behaviour of this apparently simple material. We present experimental x-ray diffraction data showing complex elastic strain profiles in laser compressed samples on nanosecond timescales. We also present molecular dynamics and elasticity code modelling which suggests that a pressure induced phase transition is the cause of the previously reported 'anomalous' elastic waves. Moreover, this interpretation allows for measurement of the kinetic timescales for transition. This model is also discussed in the wider context of reported deformation of silicon to rapid compression in the literature.

  4. Cyclic compressive creep-elastoplastic behaviors of in situ TiB_2/Al-reinforced composite

    International Nuclear Information System (INIS)

    Zhang, Qing; Zhang, Weizheng; Liu, Youyi; Guo, BingBin

    2016-01-01

    This paper presents a study on the cyclic compressive creep-elastoplastic behaviors of a TiB_2-reinforced aluminum matrix composite (ZL109) at 350 °C and 200 °C. According to the experimental results, under cyclic elastoplasticity and cyclic coupled compressive creep-elastoplasticity, the coupled creep will cause changes in isotropic stress and kinematic stress. Isotropic stress decreases with coupled creep, leading to cyclic softening. Positive kinematic stress, however, increases with coupled creep, leading to cyclic hardening. Transmission electron microscopy (TEM) observations of samples under cyclic compressive creep-elastoplasticity with different temperatures and strain amplitudes indicate that more coupled creep contributes to more subgrain boundaries but fewer intracrystalline dislocations. Based on the macro tests and micro observations, the micro mechanism of compressive creep's influence on cyclic elastoplasticity is elucidated. Dislocations recovering with coupled creep leads to isotropic softening, whereas subgrain structures created by coupled creep lead to kinematic hardening during cyclic deformation.

  5. Combined Sparsifying Transforms for Compressive Image Fusion

    Directory of Open Access Journals (Sweden)

    ZHAO, L.

    2013-11-01

    Full Text Available In this paper, we present a new compressive image fusion method based on combined sparsifying transforms. First, the framework of compressive image fusion is introduced briefly. Then, combined sparsifying transforms are presented to enhance the sparsity of images. Finally, a reconstruction algorithm based on the nonlinear conjugate gradient is presented to get the fused image. The simulations demonstrate that by using the combined sparsifying transforms better results can be achieved in terms of both the subjective visual effect and the objective evaluation indexes than using only a single sparsifying transform for compressive image fusion.

  6. Wireless EEG System Achieving High Throughput and Reduced Energy Consumption Through Lossless and Near-Lossless Compression.

    Science.gov (United States)

    Alvarez, Guillermo Dufort Y; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo

    2018-02-01

    This work presents a wireless multichannel electroencephalogram (EEG) recording system featuring lossless and near-lossless compression of the digitized EEG signal. Two novel, low-complexity, efficient compression algorithms were developed and tested in a low-power platform. The algorithms were tested on six public EEG databases comparing favorably with the best compression rates reported up to date in the literature. In its lossless mode, the platform is capable of encoding and transmitting 59-channel EEG signals, sampled at 500 Hz and 16 bits per sample, at a current consumption of 337 μA per channel; this comes with a guarantee that the decompressed signal is identical to the sampled one. The near-lossless mode allows for significant energy savings and/or higher throughputs in exchange for a small guaranteed maximum per-sample distortion in the recovered signal. Finally, we address the tradeoff between computation cost and transmission savings by evaluating three alternatives: sending raw data, or encoding with one of two compression algorithms that differ in complexity and compression performance. We observe that the higher the throughput (number of channels and sampling rate) the larger the benefits obtained from compression.

  7. Dynamic characterization and modeling of magneto-rheological elastomers under compressive loadings

    International Nuclear Information System (INIS)

    Koo, J H; Khan, F; Jang, D D; Jung, H J

    2009-01-01

    The primary goal of this paper is to characterize and model the compression properties of magneto-rheological elastomers (MREs). MRE samples were fabricated by curing a two-component elastomer resin with a 30% volume content of 10 μm iron particles. In order to vary the magnetic field during compressive testing, a test fixture was designed and fabricated in which two permanent magnets could be variably positioned on either side of the specimen. By changing the distance between the magnets, the fixture allowed for varying the magnetic field that passes uniformly through the sample. Using this test setup and a dynamic test frame, a series of compression tests of MRE samples was performed by varying the magnetic field and the frequency of loading. The results show that the MR effect (the percent increase in the material's 'stiffness') increases as the magnetic field and the loading frequency increase, within the range of magnetic field and input frequency considered in this study. Furthermore, a phenomenological model was developed to capture the dynamic behaviours of the MREs under compression loadings.

  8. Effective radiation attenuation calibration for breast density: compression thickness influences and correction

    Directory of Open Access Journals (Sweden)

    Thomas Jerry A

    2010-11-01

    Full Text Available Abstract Background Calibrating mammograms to produce a standardized breast density measurement for breast cancer risk analysis requires an accurate spatial measure of the compressed breast thickness. Thickness inaccuracies due to the nominal system readout value and compression paddle orientation induce unacceptable errors in the calibration. Method A thickness correction was developed and evaluated using a fully specified two-component surrogate breast model. A previously developed calibration approach based on effective radiation attenuation coefficient measurements was used in the analysis. Water and oil were used to construct phantoms to replicate the deformable properties of the breast. Phantoms consisting of measured proportions of water and oil were used to estimate calibration errors without correction, evaluate the thickness correction, and investigate the reproducibility of the various calibration representations under compression thickness variations. Results The average thickness uncertainty due to compression paddle warp was characterized to within 0.5 mm. The relative calibration error was reduced to 7% from 48-68% with the correction. The normalized effective radiation attenuation coefficient (planar representation was reproducible under intra-sample compression thickness variations compared with calibrated volume measures. Conclusion Incorporating this thickness correction into the rigid breast tissue equivalent calibration method should improve the calibration accuracy of mammograms for risk assessments using the reproducible planar calibration measure.

  9. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    Science.gov (United States)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of the coder used for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
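
    A minimal sketch of the threshold-driven selection follows: progressively richer DCT coders are tried until a block meets the distortion criterion. The coder family (keeping 2, 4 or 8 anti-diagonals of coefficients) and the threshold are illustrative stand-ins, not MBC's vector-quantized codebooks.

```python
import numpy as np
from scipy.fft import dctn, idctn

def code_block(block, keep):
    """Keep only the `keep` lowest-frequency DCT anti-diagonals — a crude
    stand-in for one member of the mixture of DCT source coders."""
    c = dctn(block, norm="ortho")
    n, m = block.shape
    mask = np.add.outer(np.arange(n), np.arange(m)) < keep
    return idctn(c * mask, norm="ortho")

def mixture_code(block, threshold=5.0):
    """Threshold-driven selection: try the cheapest coder first and move to
    richer ones until the distortion criterion is met."""
    for keep in (2, 4, 8):                  # increasingly expensive coders
        rec = code_block(block, keep)
        if np.abs(block - rec).mean() < threshold:
            return keep, rec
    return block.shape[0], block            # fall back to (near) full rate

blk = np.random.default_rng(4).integers(0, 255, (8, 8)).astype(float)
keep, rec = mixture_code(blk)
print("coder used (diagonals kept):", keep)
```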

  10. Compression-Based Tools for Navigation with an Image Database

    Directory of Open Access Journals (Sweden)

    Giovanni Motta

    2012-01-01

    Full Text Available We present tools that can be used within a larger system referred to as a passive assistant. The system receives information from a mobile device, as well as information from an image database such as Google Street View, and employs image processing to provide useful information about a local urban environment to a user who is visually impaired. The first stage acquires and computes accurate location information, the second stage performs texture and color analysis of a scene, and the third stage provides specific object recognition and navigation information. These second and third stages rely on compression-based tools (dimensionality reduction, vector quantization, and coding) that are enhanced by knowledge of the (approximate) location of objects.

  11. A Test Data Compression Scheme Based on Irrational Numbers Stored Coding

    Directory of Open Access Journals (Sweden)

    Hai-feng Wu

    2014-01-01

    Full Text Available Test data volume has already become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS), is presented. To compress test data efficiently, the test data is converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for converting floating-point numbers to irrational numbers precisely is given. Experimental results for some ISCAS 89 benchmarks show that the compression effect of the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.

  12. Conductivity Enhancement of Binder-Based Graphene Inks by Photonic Annealing and Subsequent Compression Rolling

    NARCIS (Netherlands)

    Arapov, K.; Bex, G.; Hendriks, R.; Rubingh, E.; Abbel, R.; de With, G.; Friedrich, H.

    2016-01-01

    This paper describes a combination of photonic annealing and compression rolling to improve the conductive properties of printed binder-based graphene inks. High-density light pulses result in temperatures up to 500 °C that along with a decrease of resistivity lead to layer expansion. The structural

  13. Bulk and microscale compressive behavior of a Zr-based metallic glass

    International Nuclear Information System (INIS)

    Lai, Y.H.; Lee, C.J.; Cheng, Y.T.; Chou, H.S.; Chen, H.M.; Du, X.H.; Chang, C.I.; Huang, J.C.; Jian, S.R.; Jang, J.S.C.; Nieh, T.G.

    2008-01-01

    Micropillars with diameters of 3.8, 1 and 0.7 μm were fabricated from a two-phase Zr-based metallic glass using a focused ion beam (FIB), and then tested in compression at strain rates from 1 × 10^−4 to 1 × 10^−2 s^−1. The apparent yield strength of the micropillars ranges from 1992 to 2972 MPa, a 25–86% increase over that of the bulk specimens. This strength increase can be rationalized by the Weibull statistics for brittle materials

  14. Lossy compression of TPC data and trajectory tracking efficiency for the ALICE experiment

    CERN Document Server

    Nicolaucig, A; Mattavelli, M

    2003-01-01

    In this paper a quasi-lossless algorithm for the on-line compression of the data generated by the Time Projection Chamber (TPC) detector of the ALICE experiment at CERN is described. The algorithm is based on a lossy source-code modeling technique, i.e. it is based on a source model which is lossy if samples of the TPC signal are considered one by one; conversely, the source model is lossless or quasi-lossless if some physical quantities that are of main interest for the experiment are considered. These quantities are the area and the location of the center of mass of each TPC signal pulse, representing the pulse charge and the time localization of the pulse. To evaluate the consequences of the error introduced by the lossy compression process, the results of the trajectory tracking algorithms that process the data off-line after the experiment are analyzed, in particular their sensitivity to the noise introduced by the compression. Two different versions of these off-line algorithms are described,...

  15. Use of a Real-Time Training Software (Laerdal QCPR®) Compared to Instructor-Based Feedback for High-Quality Chest Compressions Acquisition in Secondary School Students: A Randomized Trial.

    Science.gov (United States)

    Cortegiani, Andrea; Russotto, Vincenzo; Montalto, Francesca; Iozzo, Pasquale; Meschis, Roberta; Pugliesi, Marinella; Mariano, Dario; Benenati, Vincenzo; Raineri, Santi Maurizio; Gregoretti, Cesare; Giarratano, Antonino

    2017-01-01

    High-quality chest compressions are pivotal to improve survival from cardiac arrest. Basic life support training of school students is an international priority. The aim of this trial was to assess the effectiveness of a real-time training software (Laerdal QCPR®) compared to standard instructor-based feedback for chest compression acquisition in secondary school students. After an interactive frontal lesson about basic life support and high-quality chest compressions, 144 students were randomized to two types of chest compression training: 1) using Laerdal QCPR® (QCPR group, 72 students) for real-time feedback during chest compressions, with the guide of an instructor who considered software data for students' correction; 2) standard instructor-based feedback (SF group, 72 students). Both groups had a minimum 2-minute chest compression training session. Students were required to reach a minimum technical skill level before the evaluation. We evaluated all students 7 days after the training with a 2-minute chest compression session. The primary outcome was the compression score, an overall measure of chest compression quality calculated by the software and expressed as a percentage. 125 students were present at the evaluation session (60 from the QCPR group and 65 from the SF group). Students in the QCPR group had a significantly higher compression score (median 90%, IQR 81.9-96.0) compared to the SF group (median 67%, IQR 27.7-87.5), p = 0.0003. Students in the QCPR group performed a significantly higher percentage of fully released chest compressions (71% [IQR 24.5-99.0] vs 24% [IQR 2.5-88.2]; p = 0.005) and a better chest compression rate (117.5/min [IQR 106-123.5] vs 125/min [115-135.2]; p = 0.001). In secondary school students, training for chest compressions based on real-time feedback software (Laerdal QCPR®) guided by an instructor is superior to instructor-based feedback training in terms of chest compression technical skill acquisition. Australian

  16. Methods of compression of digital holograms, based on 1-level wavelet transform

    International Nuclear Information System (INIS)

    Kurbatova, E A; Cheremkhin, P A; Evtikhiev, N N

    2016-01-01

    To reduce the size of memory required for storing information about 3D scenes and to decrease the rate of hologram transmission, digital hologram compression can be used. Compression of digital holograms by wavelet transforms is among the most powerful methods. In the paper the most popular wavelet transforms are considered and applied to digital hologram compression. The obtained values of reconstruction quality and hologram diffraction efficiencies are compared. (paper)
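
    As an editorial illustration of the record above, the sketch below applies a 1-level wavelet decomposition to a hologram-like array and keeps only the largest coefficients. It is a minimal sketch assuming the numpy and pywt (PyWavelets) packages; the random test array, the db4 wavelet, and the 5% retention ratio are illustrative stand-ins, not the paper's actual settings.

    ```python
    # Keep only the largest 1-level wavelet coefficients of a hologram-like array.
    import numpy as np
    import pywt

    def compress_hologram(hologram, wavelet="db4", keep=0.05):
        """Zero all but the largest `keep` fraction of the wavelet coefficients."""
        cA, (cH, cV, cD) = pywt.dwt2(hologram, wavelet)
        flat = np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])
        thresh = np.quantile(np.abs(flat), 1.0 - keep)
        cA, cH, cV, cD = (np.where(np.abs(c) >= thresh, c, 0.0) for c in (cA, cH, cV, cD))
        return cA, (cH, cV, cD)

    hologram = np.random.rand(256, 256)     # stand-in for a recorded hologram
    rec = pywt.idwt2(compress_hologram(hologram), "db4")
    print("reconstruction MSE:", np.mean((rec - hologram) ** 2))
    ```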

  17. Fast and low-dose computed laminography using compressive sensing based technique

    Science.gov (United States)

    Abbas, Sajid; Park, Miran; Cho, Seungryong

    2015-03-01

    Computed laminography (CL) is well known for inspecting microstructures in materials, weldments, and soldering defects in high-density packed components or multilayer printed circuit boards. The overload on the x-ray tube and gross failure of radio-sensitive electronic devices during a scan are among the important issues in CL that need to be addressed. Sparse-view CL can be a viable option to overcome such issues. In this work a numerical aluminum welding phantom was simulated to collect sparsely sampled projection data at only 40 views using a conventional CL scanning scheme, i.e., an oblique scan. A compressive-sensing-inspired total-variation (TV) minimization algorithm was utilized to reconstruct the images. It is found that the images reconstructed from sparse-view data are visually comparable with the images reconstructed from the full scan data set, i.e., 360 views at regular intervals. We have quantitatively confirmed that tiny structures such as copper and tungsten slags and copper flakes in the images reconstructed from sparsely sampled data are comparable with the corresponding structures in the fully sampled case. A blurring effect can be seen near the edges of a few pores at the bottom of the images reconstructed from sparsely sampled data, although the overall image quality is reasonable for fast and low-dose NDT.
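
    A minimal sketch of the TV-minimization idea behind this record, assuming numpy: gradient descent on a least-squares data term plus an anisotropic total-variation penalty. The random matrix `A` is a hypothetical stand-in for a laminography projector, and the phantom, sizes, and hyper-parameters are illustrative.

    ```python
    # Sparse-measurement TV-regularized reconstruction on a toy system matrix.
    import numpy as np

    def grad_tv(x2d):
        """Subgradient of the anisotropic TV seminorm of a 2-D image."""
        g = np.zeros_like(x2d)
        sx = np.sign(np.diff(x2d, axis=0))
        sy = np.sign(np.diff(x2d, axis=1))
        g[:-1, :] -= sx; g[1:, :] += sx
        g[:, :-1] -= sy; g[:, 1:] += sy
        return g

    rng = np.random.default_rng(0)
    n = 32
    x_true = np.zeros((n, n)); x_true[8:24, 8:24] = 1.0   # piecewise-constant phantom
    A = rng.standard_normal((n * n // 4, n * n))          # 4x undersampled measurements
    y = A @ x_true.ravel()

    x, lam, step = np.zeros(n * n), 0.1, 1e-4             # illustrative hyper-parameters
    for _ in range(500):
        x -= step * (A.T @ (A @ x - y) + lam * grad_tv(x.reshape(n, n)).ravel())
    print("relative error:", np.linalg.norm(x - x_true.ravel()) / np.linalg.norm(x_true))
    ```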

  20. Evaluation of mammogram compression efficiency

    International Nuclear Information System (INIS)

    Przelaskowski, A.; Surowski, P.; Kukula, A.

    2005-01-01

    Lossy image coding significantly improves performance over lossless methods, but a reliable control of diagnostic accuracy regarding compressed images is necessary. The acceptable range of compression ratios must be safe with respect to as many objective criteria as possible. This study evaluates the compression efficiency of digital mammograms in both numerically lossless (reversible) and lossy (irreversible) manner. Effective compression methods and concepts were examined to increase archiving and telediagnosis performance. Lossless compression, as a primary applicable tool for medical applications, was verified on a set of 131 mammograms. Moreover, nine radiologists participated in the evaluation of lossy compression of mammograms. Subjective rating of diagnostically important features yielded a set of mean ratings for each test image. The lesion detection test resulted in binary decision data that were analyzed statistically. The radiologists rated and interpreted malignant and benign lesions, representative pathology symptoms, and other structures susceptible to compression distortions contained in 22 original and 62 reconstructed mammograms. Test mammograms were collected in two radiology centers over three years and then selected according to diagnostic content suitable for an evaluation of compression effects. Lossless compression efficiency of the tested coders varied, but CALIC, JPEG-LS, and SPIHT performed the best. The evaluation of lossy compression effects on detection ability was based on ROC-like analysis. Assuming a two-sided significance level of p=0.05, the null hypothesis that lower bit rate reconstructions are as useful for diagnosis as the originals was rejected in sensitivity tests with 0.04 bpp mammograms. However, verification of the same hypothesis with 0.1 bpp reconstructions suggested its acceptance. Moreover, the 1 bpp reconstructions were rated very similarly to the original mammograms in the diagnostic quality evaluation test, but the

  1. Effect of different dispersants in compressive strength of carbon fiber cementitious composites

    Science.gov (United States)

    Lestari, Yulinda; Bahri, Saiful; Sugiarti, Eni; Ramadhan, Gilang; Akbar, Ari Yustisia; Martides, Erie; Khaerudini, Deni S.

    2013-09-01

    Carbon Fiber Cementitious Composites (CFCC) are among the most important materials in smart concrete applications. CFCC should exhibit piezoresistivity, where resistivity changes when a stress/strain is applied, and must also meet compressive strength requirements. One of the important additives in carbon fiber cementitious composites is the dispersant, since dispersion of the carbon fiber is one of the key problems in fabricating piezoresistive CFCC. In this research, the dispersants used are methylcellulose, a mixture of defoamer and methylcellulose, and a polycarboxylate-based superplasticizer. Composite samples are prepared following the mortar technique according to the ASTM C 109/109M standard. The additive materials are PAN-type carbon fibers, methylcellulose, defoamer, and superplasticizer (as water reducer and dispersant). The experiments measure compressive strength and resistivity at various curing times, i.e., 3, 7 and 28 days. The results show that the highest compressive strength is obtained for the mortar using the polycarboxylate-based superplasticizer dispersant. This also shows that the distribution of carbon fiber with superplasticizer is more effective, since it does not react with the cementitious material, unlike methylcellulose, which affects the cement hydration reaction. The research also found that CFCC require a proper water-cement ratio; otherwise the compressive strength becomes lower.

  2. Influence of breast compression pressure on the performance of population-based mammography screening

    NARCIS (Netherlands)

    Holland, Katharina; Sechopoulos, Ioannis; Mann, Ritse M.; Den Heeten, Gerard J.; van Gils, Carla H.; Karssemeijer, Nico

    2017-01-01

    Background: In mammography, breast compression is applied to reduce the thickness of the breast. While it is widely accepted that firm breast compression is needed to ensure acceptable image quality, guidelines remain vague about how much compression should be applied during mammogram acquisition. A

  3. A Novel 1D Hybrid Chaotic Map-Based Image Compression and Encryption Using Compressed Sensing and Fibonacci-Lucas Transform

    Directory of Open Access Journals (Sweden)

    Tongfeng Zhang

    2016-01-01

    Full Text Available A one-dimensional (1D) hybrid chaotic system is constructed from three different 1D chaotic maps in a parallel-then-cascade fashion. The proposed chaotic map has a larger key space and exhibits a better uniform distribution property in some parametric ranges compared with existing 1D chaotic maps. Meanwhile, combining compressive sensing (CS) and the Fibonacci-Lucas transform (FLT), a novel image compression and encryption scheme is proposed that takes advantage of the 1D hybrid chaotic map. The whole encryption procedure includes compression by CS, scrambling with FLT, and diffusion after linear scaling. The Bernoulli measurement matrix in CS is generated by the proposed 1D hybrid chaotic map due to its excellent uniform distribution. To enhance the security and complexity, the transform kernel of FLT varies in each permutation round according to the generated chaotic sequences. Further, the key streams used in the diffusion process depend on the chaotic map as well as the plain image, which allows the scheme to resist chosen-plaintext attack (CPA). Experimental results and security analyses demonstrate the validity of our scheme in terms of high security and robustness against noise attack and cropping attack.
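
    A minimal sketch of the keyed measurement-matrix construction described above, assuming numpy; the logistic map stands in for the paper's hybrid chaotic map, and the map parameter, key, and matrix sizes are illustrative.

    ```python
    # Generate a keyed ±1 Bernoulli CS measurement matrix from a chaotic map.
    import numpy as np

    def logistic_sequence(x0, r, n, burn_in=100):
        """Iterate the logistic map x <- r*x*(1-x), discarding a burn-in."""
        x, out = x0, np.empty(n)
        for i in range(burn_in + n):
            x = r * x * (1.0 - x)
            if i >= burn_in:
                out[i - burn_in] = x
        return out

    def bernoulli_matrix(m, n, key=0.3731):
        """The key (the map's initial condition) deterministically seeds the matrix."""
        seq = logistic_sequence(key, 3.99, m * n)
        return np.where(seq > 0.5, 1.0, -1.0).reshape(m, n) / np.sqrt(m)

    Phi = bernoulli_matrix(64, 256)
    print(Phi.shape, Phi[0, :8])
    ```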

  4. Cyclops: single-pixel imaging lidar system based on compressive sensing

    Science.gov (United States)

    Magalhães, F.; Correia, M. V.; Farahi, F.; Pereira do Carmo, J.; Araújo, F. M.

    2017-11-01

    Mars and the Moon are envisaged as major destinations of future space exploration missions in the upcoming decades. Imaging LIDARs are seen as a key enabling technology in the support of autonomous guidance, navigation and control operations, as they can provide very accurate, wide range, high-resolution distance measurements as required for the exploration missions. Imaging LIDARs can be used at critical stages of these exploration missions, such as descent and selection of safe landing sites, rendezvous and docking manoeuvres, or robotic surface navigation and exploration. Although these devices have long been commercially available and used in diverse metrology and ranging applications, their size, mass and power consumption are still far from being suitable and attractive for space exploratory missions. Here, we describe a compact Single-Pixel Imaging LIDAR System that is based on a compressive sensing technique. The application of compressive codes to a DMD array enables compression of the spatial information, while the collection of timing histograms correlated to the pulsed laser source ensures image reconstruction at the ranged distances. Single-pixel cameras have been compared with raster scanning and array-based counterparts in terms of noise performance, and proved to be superior. Since a single photodetector is used, a better SNR and higher reliability are expected in contrast with systems using large-format photodetector arrays. Furthermore, the failure of one or more micromirror elements in the DMD does not prevent full reconstruction of the images. This brings additional robustness to the proposed 3D imaging LIDAR. The prototype that was implemented has three modes of operation. Range Finder: outputs the average distance between the system and the area of the target under illumination; Attitude Meter: provides the slope of the target surface based on distance measurements in three areas of the target; 3D Imager: produces 3D ranged
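
    A minimal sketch of the single-pixel measurement model underlying such a system, assuming numpy: each DMD pattern yields one photodetector value, and the scene is recovered from the stack of readings. The scene, mask count, and the least-squares recovery (a stand-in for a full compressive-sensing solver) are all illustrative.

    ```python
    # Single-pixel measurement model: one detector reading per DMD pattern.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 16 * 16                                  # tiny flattened scene
    scene = np.zeros(n); scene[60:80] = 1.0      # illustrative "target"
    patterns = rng.integers(0, 2, size=(n // 2, n)).astype(float)  # binary DMD masks
    y = patterns @ scene                         # bucket-detector values

    # Minimum-norm least-squares recovery, a stand-in for a real CS solver.
    scene_hat, *_ = np.linalg.lstsq(patterns, y, rcond=None)
    print("measurement residual:", np.linalg.norm(patterns @ scene_hat - y))
    ```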

  5. High-resolution quantization based on soliton self-frequency shift and spectral compression in a bi-directional comb-fiber architecture

    Science.gov (United States)

    Zhang, Xuyan; Zhang, Zhiyao; Wang, Shubing; Liang, Dong; Li, Heping; Liu, Yong

    2018-03-01

    We propose and demonstrate an approach that can achieve high-resolution quantization by employing soliton self-frequency shift and spectral compression. Our approach is based on a bi-directional comb-fiber architecture composed of a Sagnac-loop-based mirror and a comb-like combination of N sections of interleaved single-mode fibers and highly nonlinear fibers. The Sagnac-loop-based mirror placed at the terminal of a bus line reflects the optical pulses back to the bus line to achieve an additional N stages of spectral compression; thus single-stage soliton self-frequency shift (SSFS) and (2N - 1)-stage spectral compression are realized in the bi-directional scheme. The fiber length in the architecture is numerically optimized, and the proposed quantization scheme is evaluated by both simulation and experiment in the case of N = 2. In the experiment, a quantization resolution of 6.2 bits is obtained, which is 1.2 bits higher than that of its uni-directional counterpart.

  6. A Simulation-based Randomized Controlled Study of Factors Influencing Chest Compression Depth

    Directory of Open Access Journals (Sweden)

    Kelsey P. Mayrand

    2015-12-01

    Full Text Available Introduction: Current resuscitation guidelines emphasize a systems approach with a strong emphasis on quality cardiopulmonary resuscitation (CPR). Despite the American Heart Association (AHA) emphasis on quality CPR for over 10 years, resuscitation teams do not consistently meet recommended CPR standards. The objective is to assess the impact on chest compression depth of factors including bed height, step stool utilization, position of the rescuer's arms and shoulders relative to the point of chest compression, and rescuer characteristics including height, weight, and gender. Methods: Fifty-six eligible subjects, including physician assistant students and first-year emergency medicine residents, were enrolled and randomized to intervention (bed lowered and step stool readily available) and control (bed raised and step stool accessible, but concealed) groups. We instructed all subjects to complete all interventions on a high-fidelity mannequin per AHA guidelines. Secondary end points included subject arm angle, height, weight group, and gender. Results: Using an intention-to-treat analysis, the mean compression depths for the intervention and control groups were not significantly different. Subjects positioning their arms at a 90-degree angle relative to the sagittal plane of the mannequin's chest achieved a mean compression depth significantly greater than those compressing at an angle less than 90 degrees. There was a significant correlation between using a step stool and achieving the correct shoulder position. Subject height, weight group, and gender were all independently associated with compression depth. Conclusion: Rescuer arm position relative to the patient's chest and step stool utilization during CPR are modifiable factors facilitating improved chest compression depth.

  7. Weak-strong clustering transition in renewing compressible flows

    OpenAIRE

    Dhanagare, Ajinkya; Musacchio, Stefano; Vincenzi, Dario

    2014-01-01

    We investigate the statistical properties of Lagrangian tracers transported by a time-correlated compressible renewing flow. We show that the preferential sampling of the phase space performed by tracers yields significant differences between the Lagrangian statistics and its Eulerian counterpart. In particular, the effective compressibility experienced by tracers has a non-trivial dependence on the time correlation of the flow. We examine the consequence of this pheno...

  8. Optical image transformation and encryption by phase-retrieval-based double random-phase encoding and compressive ghost imaging

    Science.gov (United States)

    Yuan, Sheng; Yang, Yangrui; Liu, Xuemei; Zhou, Xin; Wei, Zhenzhuo

    2018-01-01

    An optical image transformation and encryption scheme is proposed based on double random-phase encoding (DRPE) and compressive ghost imaging (CGI) techniques. In this scheme, a secret image is first transformed into a binary image with the phase-retrieval-based DRPE technique, and then encoded by a series of random amplitude patterns according to the ghost imaging (GI) principle. Compressive sensing together with erosion and dilation operations is implemented to retrieve the secret image in the decryption process. This encryption scheme takes advantage of the complementary capabilities offered by the phase-retrieval-based DRPE and GI-based encryption techniques. That is, the phase-retrieval-based DRPE is used to overcome the blurring defect of the decrypted image in GI-based encryption, while the CGI not only reduces the data amount of the ciphertext, but also enhances the security of DRPE. Computer simulation results are presented to verify the performance of the proposed encryption scheme.

  9. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding algorithm. Combined with an adaptive probability model and predictive coding, the algorithm increases the compression rate while ensuring the quality of the decoded image. An adaptive model for each encoded image block dynamically estimates the symbol probabilities of that block, so the decoder can accurately recover the encoded image from the code book information. Adopting adaptive arithmetic coding for image compression greatly improves the compression rate, and the results show that it is an effective compression technology.
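
    A minimal sketch of the adaptive probability model that drives an adaptive arithmetic coder, using only the Python standard library. The actual arithmetic coder is omitted, and the simple count-based update rule is illustrative, not the paper's scheme.

    ```python
    # The adaptive probability model behind an adaptive arithmetic coder.
    import math

    class AdaptiveModel:
        def __init__(self, alphabet_size):
            self.counts = [1] * alphabet_size      # Laplace-smoothed symbol counts

        def probability(self, symbol):
            return self.counts[symbol] / sum(self.counts)

        def update(self, symbol):
            self.counts[symbol] += 1               # model adapts as data arrive

    # Encoder and decoder perform identical updates, so no model is transmitted.
    model, bits = AdaptiveModel(256), 0.0
    data = b"abracadabra abracadabra"
    for byte in data:
        bits += -math.log2(model.probability(byte))   # ideal arithmetic-code length
        model.update(byte)
    print(f"~{bits / 8:.1f} bytes to code {len(data)} input bytes")
    ```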

  10. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  11. Phase transitions during compression and decompression of clots from platelet-poor plasma, platelet-rich plasma and whole blood.

    Science.gov (United States)

    Liang, Xiaojun; Chernysh, Irina; Purohit, Prashant K; Weisel, John W

    2017-09-15

    Blood clots are required to stem bleeding and are subject to a variety of stresses, but they can also block blood vessels and cause heart attacks and ischemic strokes. We measured the compressive response of human platelet-poor plasma (PPP) clots, platelet-rich plasma (PRP) clots and whole blood clots and correlated these measurements with confocal and scanning electron microscopy to track changes in clot structure. Stress-strain curves revealed four characteristic regions, for compression-decompression: (1) linear elastic region; (2) upper plateau or softening region; (3) non-linear elastic region or re-stretching of the network; (4) lower plateau in which dissociation of some newly made connections occurs. Our experiments revealed that compression proceeds by the passage of a phase boundary through the clot separating rarefied and densified phases. This observation motivates a model of fibrin mechanics based on the continuum theory of phase transitions, which accounts for the pre-stress caused by platelets, the adhesion of fibrin fibers in the densified phase, the compression of red blood cells (RBCs), and the pumping of liquids through the clot during compression/decompression. Our experiments and theory provide insights into the mechanical behavior of blood clots that could have implications clinically and in the design of fibrin-based biomaterials. The objective of this paper is to measure and mathematically model the compression behavior of various human blood clots. We show by a combination of confocal and scanning electron microscopy that compression proceeds by the passage of a front through the sample that separates a densified region of the clot from a rarefied region, and that the compression/decompression response is reversible with hysteresis. These observations form the basis of a model for the compression response of clots based on the continuum theory of phase transitions. Our studies may reveal how clot rheology under large compression in vivo due

  12. Compressed sensing techniques for receiver based post-compensation of transmitter's nonlinear distortions in OFDM systems

    KAUST Repository

    Owodunni, Damilola S.; Ali, Anum Z.; Quadeer, Ahmed Abdul; Al-Safadi, Ebrahim B.; Hammi, Oualid; Al-Naffouri, Tareq Y.

    2014-01-01

    -domain, and three compressed sensing based algorithms are presented to estimate and compensate for these distortions at the receiver using a few and, at times, even no frequency-domain free carriers (i.e. pilot carriers). The first technique is a conventional

  13. Compressive strength improvement for recycled concrete aggregate

    Directory of Open Access Journals (Sweden)

    Mohammed Dhiyaa

    2018-01-01

    Full Text Available The increasing amount of construction waste, and of concrete remnants in particular, poses a serious problem. Concrete waste exists in large amounts, does not decay, and needs a long time to disintegrate. Therefore, in this work old demolished concrete is crushed and recycled to produce recycled concrete aggregate, which can be reused in new concrete production. The effect of using recycled aggregate on concrete compressive strength has been experimentally investigated; a silica fume admixture is also used to improve the compressive strength of recycled aggregate concrete. The main parameters in this study are the recycled aggregate and the silica fume admixture. The percentage of recycled aggregate ranged from 0-100%, while the silica fume ranged from 0-10%. The experimental results show that the average concrete compressive strength decreases from 30.85 MPa to 17.58 MPa when the recycled aggregate percentage increases from 0% to 100%, while, when silica fume is used, the concrete compressive strength increases again to 29.2 MPa for samples with 100% recycled aggregate.

  14. Compression of surface myoelectric signals using MP3 encoding.

    Science.gov (United States)

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).

  15. A Posteriori Restoration of Block Transform-Compressed Data

    Science.gov (United States)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Two known a posteriori enhancement techniques are considered here and adapted.

  16. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yihang Yin

    2015-08-01

    Full Text Available Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing, using a novel similarity measure metric. Next, sensor data in one cluster are aggregated at the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data while retaining a guaranteed share of the variance. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
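
    A minimal sketch of PCA-based aggregation at a cluster head, assuming numpy; the synthetic sensor readings, cluster size, and the 1% error bound are illustrative stand-ins for the scheme described above.

    ```python
    # Keep the fewest principal components that meet a relative error bound.
    import numpy as np

    def pca_compress(X, max_err=0.01):
        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        k = int(np.searchsorted(energy, 1.0 - max_err) + 1)
        scores = U[:, :k] * s[:k]              # compressed representation
        return scores, Vt[:k], mean

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 200)
    X = np.sin(2 * np.pi * t)[:, None] + 0.01 * rng.standard_normal((200, 8))
    scores, basis, mean = pca_compress(X)      # 8 correlated "sensors"
    X_hat = scores @ basis + mean
    print("components kept:", basis.shape[0], "MSE:", np.mean((X - X_hat) ** 2))
    ```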

  18. Compressive sensing for sparse time-frequency representation of nonstationary signals in the presence of impulsive noise

    Science.gov (United States)

    Orović, Irena; Stanković, Srdjan; Amin, Moeness

    2013-05-01

    A modified robust two-dimensional compressive sensing algorithm for reconstruction of sparse time-frequency representation (TFR) is proposed. The ambiguity function domain is assumed to be the domain of observations. The two-dimensional Fourier bases are used to linearly relate the observations to the sparse TFR, in lieu of the Wigner distribution. We assume that a set of available samples in the ambiguity domain is heavily corrupted by an impulsive type of noise. Consequently, the problem of sparse TFR reconstruction cannot be tackled using standard compressive sensing optimization algorithms. We introduce a two-dimensional L-statistics based modification into the transform domain representation. It provides suitable initial conditions that will produce efficient convergence of the reconstruction algorithm. This approach applies sorting and weighting operations to discard an expected amount of samples corrupted by noise. The remaining samples serve as observations used in sparse reconstruction of the time-frequency signal representation. The efficiency of the proposed approach is demonstrated on numerical examples that comprise both cases of monocomponent and multicomponent signals.
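
    A minimal sketch of the L-statistics-style sample rejection described above, assuming numpy: sort the observations by magnitude and discard a fixed fraction of the largest ones, which are the likely impulse-noise hits. The toy signal and the 20% rejection rate are illustrative.

    ```python
    # Discard the largest-magnitude samples before CS reconstruction.
    import numpy as np

    def reject_impulses(samples, discard=0.2):
        """Boolean mask keeping the (1 - discard) smallest-magnitude samples."""
        k = int(len(samples) * (1.0 - discard))
        keep = np.argsort(np.abs(samples))[:k]
        mask = np.zeros(len(samples), dtype=bool)
        mask[keep] = True
        return mask

    rng = np.random.default_rng(2)
    clean = np.exp(1j * 2 * np.pi * 0.1 * np.arange(256))    # toy complex signal
    noisy = clean + (rng.random(256) < 0.05) * 50.0          # sparse impulses
    mask = reject_impulses(noisy)
    # The surviving samples would then serve as the CS observations.
    print("kept", mask.sum(), "of", mask.size, "samples")
    ```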

  19. Compression response of tri-axially braided textile composites

    Science.gov (United States)

    Song, Shunjun

    2007-12-01

    This thesis is concerned with characterizing the compression stiffness and compression strength of 2D tri-axially braided textile composites (2DTBC). Two types of 2DTBC are considered, differing only in the resin type, while the textile fiber architecture is kept the same with bias tows at 45 degrees to the axial tows. Experimental, analytical and computational methods are described based on the results generated in this study. Since these composites are manufactured using resin transfer molding, the intended and as-manufactured composite samples differ in their microstructure due to consolidation and thermal history effects in the manufacturing cycle. These imperfections are measured, and their effect on the compression stiffness and strength is characterized. Since the matrix is a polymer material, the nonuniform thermal history undergone by the polymer during manufacturing (within the composite and in the presence of fibers) renders its properties non-homogeneous. The effects of these non-homogeneities are captured through the definition of an equivalent in-situ matrix material. A method to characterize the mechanical properties of the in-situ matrix is also described. Fiber tow buckling, fiber tow kinking and matrix microcracking are all observed in the experiments. These failure mechanisms are captured through a computational model that uses the finite element (FE) technique to discretize the structure. The FE equations are solved using the commercial software ABAQUS version 6.5. The fiber tows are modeled as transversely isotropic elastic-plastic solids and the matrix is modeled as an isotropic elastic-plastic solid with and without microcracking damage. Because the 2DTBC is periodic, the question of how many repeat units are necessary to model the compression stiffness and strength is examined. Based on the computational results, the correct representative unit cell for this class of materials is identified. The computational models and

  20. Lossless Image Compression Based on Multiple-Tables Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Rung-Ching Chen

    2009-01-01

    Full Text Available This paper is intended to present a lossless image compression method based on multiple-tables arithmetic coding (MTAC method to encode a gray-level image f. First, the MTAC method employs a median edge detector (MED to reduce the entropy rate of f. The gray levels of two adjacent pixels in an image are usually similar. A base-switching transformation approach is then used to reduce the spatial redundancy of the image. The gray levels of some pixels in an image are more common than those of others. Finally, the arithmetic encoding method is applied to reduce the coding redundancy of the image. To promote high performance of the arithmetic encoding method, the MTAC method first classifies the data and then encodes each cluster of data using a distinct code table. The experimental results show that, in most cases, the MTAC method provides a higher efficiency in use of storage space than the lossless JPEG2000 does.
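
    A minimal sketch of the median edge detector (MED) predictor used in the first stage of the record above, assuming numpy. It is the same predictor as in JPEG-LS; the tiny gradient image is illustrative.

    ```python
    # MED prediction residuals: a = left, b = above, c = above-left neighbor.
    import numpy as np

    def med_predict(img):
        img = img.astype(np.int32)
        pred = np.zeros_like(img)
        h, w = img.shape
        for i in range(h):
            for j in range(w):
                a = img[i, j - 1] if j > 0 else 0
                b = img[i - 1, j] if i > 0 else 0
                c = img[i - 1, j - 1] if i > 0 and j > 0 else 0
                if c >= max(a, b):
                    pred[i, j] = min(a, b)
                elif c <= min(a, b):
                    pred[i, j] = max(a, b)
                else:
                    pred[i, j] = a + b - c
        return img - pred                       # residuals have lower entropy

    img = np.tile(np.arange(8), (8, 1)) * 16    # smooth gradient test image
    res = med_predict(img)
    print("residual range:", res.min(), res.max())
    ```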

  1. Optimized Projection Matrix for Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Jianping Xu

    2010-01-01

    Full Text Available Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. Papers on CS have typically assumed the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. This method is based on equiangular tight frame (ETF) design, because an ETF has minimum coherence. It is impossible to solve the problem exactly because of its complexity; therefore, an alternating-minimization-type method is used to find a feasible solution. The optimally designed projection matrix can further reduce the necessary number of samples for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
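
    A minimal sketch of the quantity being minimized, assuming numpy: the mutual coherence of the effective dictionary D = Phi @ Psi, i.e. the largest absolute inner product between distinct normalized columns. The random projection and identity sparsifying basis are illustrative.

    ```python
    # Mutual coherence of the projection/sparsifying-matrix pair.
    import numpy as np

    def mutual_coherence(Phi, Psi):
        D = Phi @ Psi
        D = D / np.linalg.norm(D, axis=0, keepdims=True)
        G = np.abs(D.T @ D)                     # Gram matrix of the dictionary
        np.fill_diagonal(G, 0.0)
        return G.max()

    rng = np.random.default_rng(3)
    Phi = rng.standard_normal((32, 128))        # random projection baseline
    Psi = np.eye(128)                           # sparsifying basis (identity here)
    print("coherence of the random design:", round(mutual_coherence(Phi, Psi), 3))
    ```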

  2. Quality and loudness judgments for music subjected to compression limiting.

    Science.gov (United States)

    Croghan, Naomi B H; Arehart, Kathryn H; Kates, James M

    2012-08-01

    Dynamic-range compression (DRC) is used in the music industry to maximize loudness. The amount of compression applied to commercial recordings has increased over time due to a motivating perspective that louder music is always preferred. In contrast to this viewpoint, artists and consumers have argued that using large amounts of DRC negatively affects the quality of music. However, little research evidence has supported the claims of either position. The present study investigated how DRC affects the perceived loudness and sound quality of recorded music. Rock and classical music samples were peak-normalized and then processed using different amounts of DRC. Normal-hearing listeners rated the processed and unprocessed samples on overall loudness, dynamic range, pleasantness, and preference, using a scaled paired-comparison procedure in two conditions: un-equalized, in which the loudness of the music samples varied, and loudness-equalized, in which loudness differences were minimized. Results indicated that a small amount of compression was preferred in the un-equalized condition, but the highest levels of compression were generally detrimental to quality, whether loudness was equalized or varied. These findings are contrary to the "louder is better" mentality in the music industry and suggest that more conservative use of DRC may be preferred for commercial music.
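
    For readers unfamiliar with DRC itself, here is a minimal sketch of hard-knee dynamic-range compression applied to a peak-normalized signal, assuming numpy. It is a static, sample-by-sample gain law without attack/release smoothing, and the threshold and ratio are illustrative "limiting"-style settings, not those used in the study.

    ```python
    # Static hard-knee compressor: reduce level above the threshold by `ratio`.
    import numpy as np

    def compress(signal, threshold_db=-12.0, ratio=8.0):
        level_db = 20 * np.log10(np.maximum(np.abs(signal), 1e-9))
        over = np.maximum(level_db - threshold_db, 0.0)
        gain_db = -over * (1.0 - 1.0 / ratio)   # gain reduction above threshold
        return signal * 10 ** (gain_db / 20.0)

    t = np.linspace(0, 1, 44100)
    music = np.sin(2 * np.pi * 440 * t) * np.linspace(0.1, 1.0, t.size)  # swelling tone
    out = compress(music)
    print(f"input peak: {music.max():.2f}, output peak: {out.max():.2f}")
    ```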

  3. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal; Henkel, Jö rg

    2010-01-01

    % for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and we have applied each technique to two major embedded processor architectures, namely ARM and MIPS. © 2010 ACM.
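
    Although this abstract was only partially extracted, the core construction it relies on is standard. Here is a minimal sketch of Huffman code construction using only the Python standard library; the instruction-byte string is illustrative, not an actual ARM/MIPS trace.

    ```python
    # Build a Huffman code from symbol frequencies with a binary heap.
    import heapq
    from collections import Counter

    def huffman_code(data):
        heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(Counter(data).items())]
        heapq.heapify(heap)
        i = len(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)     # two least frequent subtrees
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + code for s, code in c1.items()}
            merged.update({s: "1" + code for s, code in c2.items()})
            heapq.heappush(heap, (f1 + f2, i, merged))
            i += 1
        return heap[0][2]

    data = b"add r1, r2; add r1, r3; sub r1, r2"
    codes = huffman_code(data)
    compressed_bits = sum(len(codes[b]) for b in data)
    print(f"{len(data) * 8} bits -> {compressed_bits} bits")
    ```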

  4. Single-Site Active Iron-Based Bifunctional Oxygen Catalyst for a Compressible and Rechargeable Zinc-Air Battery.

    Science.gov (United States)

    Ma, Longtao; Chen, Shengmei; Pei, Zengxia; Huang, Yan; Liang, Guojin; Mo, Funian; Yang, Qi; Su, Jun; Gao, Yihua; Zapien, Juan Antonio; Zhi, Chunyi

    2018-02-27

    The exploitation of a highly efficient, low-cost, and stable non-noble-metal-based catalyst active for both the oxygen reduction reaction (ORR) and the oxygen evolution reaction (OER), as an air electrode material for rechargeable zinc-air batteries, is crucial. Meanwhile, compressible flexibility of a battery is a prerequisite for wearable and portable electronics. Herein, we present a strategy via single-site dispersion of an Fe-Nx species on a two-dimensional (2D) highly graphitic porous nitrogen-doped carbon layer to achieve superior catalytic activity toward ORR/OER (with a half-wave potential of 0.86 V for ORR and an overpotential of 390 mV at 10 mA·cm⁻² for OER) in an alkaline medium. Furthermore, an elastic polyacrylamide-hydrogel-based electrolyte, which retains its elasticity even in a highly corrosive alkaline environment, is utilized to develop a solid-state compressible and rechargeable zinc-air battery. The battery has a low charge-discharge voltage gap (0.78 V at 5 mA·cm⁻²) and a large power density (118 mW·cm⁻²). It can be compressed up to 54% strain and bent up to 90° without degradation of charge/discharge performance or output power. Our results reveal that single-site dispersion of catalytic active sites on a porous support for a bifunctional oxygen catalyst as cathode, integrated with a specially designed elastic electrolyte, is a feasible strategy for fabricating efficient compressible and rechargeable zinc-air batteries, which could enlighten the design and development of other functional electronic devices.

  5. Statistics-Based Compression of Global Wind Fields

    KAUST Repository

    Jeong, Jaehong

    2017-02-07

    Wind has the potential to make a significant contribution to future energy resources. Locating the sources of this renewable energy on a global scale is however extremely challenging, given the difficulty to store very large data sets generated by modern computer models. We propose a statistical model that aims at reproducing the data-generating mechanism of an ensemble of runs via a Stochastic Generator (SG) of global annual wind data. We introduce an evolutionary spectrum approach with spatially varying parameters based on large-scale geographical descriptors such as altitude to better account for different regimes across the Earth's orography. We consider a multi-step conditional likelihood approach to estimate the parameters that explicitly accounts for nonstationary features while also balancing memory storage and distributed computation. We apply the proposed model to more than 18 million points of yearly global wind speed. The proposed SG requires orders of magnitude less storage for generating surrogate ensemble members from wind than does creating additional wind fields from the climate model, even if an effective lossy data compression algorithm is applied to the simulation output.

  7. Compression of TPC data in the ALICE experiment

    International Nuclear Information System (INIS)

    Nicolaucig, A.; Mattavelli, M.; Carrato, S.

    2002-01-01

    In this paper two algorithms for the compression of the data generated by the Time Projection Chamber (TPC) detector of the ALICE experiment at CERN are described. The first algorithm is based on a lossless source code modeling technique, i.e. the original TPC signal information can be reconstructed without errors at the decompression stage. The source model exploits the temporal correlation that is present in the TPC data to reduce the entropy of the source. The second algorithm is based on a source model which is lossy if samples of the TPC signal are considered one by one. Conversely, the source model is lossless or quasi-lossless if some physical quantities that are of main interest for the experiment are considered. These quantities are the area and the location of the center of mass of each TPC signal pulse. Entropy coding is then applied to the set of events defined by the two source models to reduce the bit rate to the corresponding source entropy. Using TPC data simulated according to the expected ALICE TPC performance, the lossless and the lossy compression algorithms achieve a data reduction to 49.2% and to the range of 34.2% down to 23.7% of the original data rate, respectively. The number of operations per input symbol required to implement the compression stage for both algorithms is relatively low, so that a real-time implementation embedded in the TPC data acquisition chain using low-cost integrated electronics is a realistic option to effectively reduce the data storage cost of the ALICE experiment

  8. Compressive properties of sandwiches with functionally graded ...

    Indian Academy of Sciences (India)

    (Only front-matter fragments of this record were extracted: Indian Academy of Sciences; Mechanical Engineering, National Institute of Technology Karnataka, Surathkal, India. The abstract itself is not available.)

  9. Adaptive compressive ghost imaging based on wavelet trees and sparse representation.

    Science.gov (United States)

    Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie

    2014-03-24

    Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we both theoretically and experimentally demonstrate a method that combines the advantages of both adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.
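
    A minimal sketch of the correlation step at the heart of computational ghost imaging, assuming numpy; the patterns, scene, and sample counts are illustrative, and the adaptive wavelet-tree ordering of the paper is not reproduced here.

    ```python
    # Conventional GI estimate: correlate pattern fluctuations with bucket values.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 16 * 16
    scene = np.zeros(n); scene[100:140] = 1.0        # toy object, flattened
    patterns = rng.random((4 * n, n))                # speckle/DMD patterns
    bucket = patterns @ scene                        # single-pixel bucket values

    g = (bucket - bucket.mean()) @ (patterns - patterns.mean(axis=0)) / len(bucket)
    print(f"correlation with truth: {np.corrcoef(g, scene)[0, 1]:.3f}")
    ```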

  10. Compressive properties of silica aerogel at 295, 76, and 20K

    International Nuclear Information System (INIS)

    Arvidson, J.M.; Scull, L.L.

    1986-01-01

    Specimens of silica aerogel were tested in compression at 295, 76, and 20 K in a helium gas environment. The properties reported include Young's modulus, the proportional limit, and yield strength. Compressive stress-versus-strain curves at these temperatures are also given. A test apparatus was developed specifically to determine the compressive properties of low strength materials. To measure specimen strain a concentric, overlapping-cylinder, capacitance extensometer was developed. This frictionless device has the capability to conduct variable temperature tests at any temperature from 1.8 to 295 K. Results from the compression tests indicate that at low temperatures the material is not only stronger, but tougher. During 295-K compression tests, the samples fractured and, in some cases, crumbled. After 76- or 20-K compression tests, the specimens remained intact

  11. Sparse BLIP: BLind Iterative Parallel imaging reconstruction using compressed sensing.

    Science.gov (United States)

    She, Huajun; Chen, Rong-Rong; Liang, Dong; DiBella, Edward V R; Ying, Leslie

    2014-02-01

    To develop a sensitivity-based parallel imaging reconstruction method that iteratively reconstructs both the coil sensitivities and the MR image simultaneously based on their prior information. The parallel magnetic resonance imaging reconstruction problem can be formulated as a multichannel sampling problem where solutions are sought analytically. However, the channel functions given by the coil sensitivities in parallel imaging are not known exactly, and the estimation error usually leads to artifacts. In this study, we propose a new reconstruction algorithm, termed Sparse BLind Iterative Parallel (Sparse BLIP) imaging, for blind iterative parallel imaging reconstruction using compressed sensing. The proposed algorithm reconstructs both the sensitivity functions and the image simultaneously from undersampled data. It enforces the sparseness constraint on the image as done in compressed sensing, but differs from compressed sensing in that the sensing matrix is unknown and an additional constraint is enforced on the sensitivities. Both phantom and in vivo imaging experiments were carried out with retrospective undersampling to evaluate the performance of the proposed method. Experiments show improvement in Sparse BLIP reconstruction when compared with Sparse SENSE, JSENSE, IRGN-TV, and L1-SPIRiT reconstructions with the same number of measurements. The proposed Sparse BLIP algorithm reduces the reconstruction errors when compared to the state-of-the-art parallel imaging methods. Copyright © 2013 Wiley Periodicals, Inc.

  12. Digital cinema video compression

    Science.gov (United States)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  13. Dynamic characterization and modeling of magneto-rheological elastomers under compressive loadings

    International Nuclear Information System (INIS)

    Koo, Jeong-Hoi; Khan, Fazeel; Jang, Dong-Doo; Jung, Hyung-Jo

    2010-01-01

    The primary goal of the research reported in this paper has been to characterize and model the compression properties of magneto-rheological elastomers (MREs). MRE samples were fabricated by curing a two-component elastomer resin with 30% content of 10 µm sized iron particles by volume. In order to vary the magnetic field during compressive testing, a test fixture was designed and fabricated in which two permanent magnets could be variably positioned on either side of the specimen. Changing the distance between the magnets of the fixture allowed the strength of the magnetic field passing uniformly through the sample to be varied. Using this test setup and a dynamic test frame, a series of compression tests of MRE samples were performed, by varying the magnetic field and the frequency of loading. The results show that the MR effect (per cent increase in the material 'stiffness') increases as the magnetic field increases and the loading frequency increases within the range of the magnetic field and input frequency considered in this study. Furthermore, a phenomenological model was developed to capture the dynamic behaviors of the MREs under compression loadings. (technical note)

  14. Efficient Compression of Far Field Matrices in Multipole Algorithms based on Spherical Harmonics and Radiating Modes

    Directory of Open Access Journals (Sweden)

    A. Schroeder

    2012-09-01

    Full Text Available This paper proposes a compression of far field matrices in the fast multipole method and its multilevel extension for electromagnetic problems. The compression is based on a spherical harmonic representation of radiation patterns in conjunction with a radiating mode expression of the surface current. The method is applied to study near field effects and the far field of an antenna placed on a ship surface. Furthermore, the electromagnetic scattering of an electrically large plate is investigated. It is demonstrated, that the proposed technique leads to a significant memory saving, making multipole algorithms even more efficient without compromising the accuracy.

  15. ECF2: A pulsed power generator based on magnetic flux compression for K-shell radiation production

    International Nuclear Information System (INIS)

    L'Eplattenier, P.; Lassalle, F.; Mangeant, C.; Hamann, F.; Bavay, M.; Bayol, F.; Huet, D.; Morell, A.; Monjaux, P.; Avrillaud, G.; Lalle, B.

    2002-01-01

    The ECF2 generator, storing 3 MJ of energy, is being developed at the Centre d'Etudes de Gramat, France, for K-shell radiation production. This generator is based on microsecond LTD stages as primary generators, and on a magnetic flux compression scheme for power amplification from the microsecond to the 100 ns regime. This paper presents a general overview of the ECF2 generator. The flux compression stage, a key component, is studied in detail, and its advantages and drawbacks are presented. We then present the first experimental and numerical results, which show the improvements that have already been made on this scheme

  16. Normalized compression distance of multisets with applications

    NARCIS (Netherlands)

    Cohen, A.R.; Vitányi, P.M.B.

    Pairwise normalized compression distance (NCD) is a parameter-free, feature-free, alignment-free, similarity metric based on compression. We propose an NCD of multisets that is also metric. Previously, attempts to obtain such an NCD failed. For classification purposes it is superior to the pairwise
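
    A minimal sketch of the pairwise NCD, using only the Python standard library; zlib stands in for whichever real-world compressor is chosen, and the test strings are illustrative.

    ```python
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)).
    import zlib

    def C(x):
        return len(zlib.compress(x, 9))          # compressed length in bytes

    def ncd(x, y):
        cx, cy, cxy = C(x), C(y), C(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    a = b"the quick brown fox jumps over the lazy dog " * 20
    b = b"the quick brown fox leaps over the lazy cat " * 20
    c = b"completely unrelated bytes: 0192837465 " * 20
    print("ncd(a, b) =", round(ncd(a, b), 3))    # similar strings: small NCD
    print("ncd(a, c) =", round(ncd(a, c), 3))    # dissimilar strings: larger NCD
    ```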

  17. The compressed word problem for groups

    CERN Document Server

    Lohrey, Markus

    2014-01-01

    The Compressed Word Problem for Groups provides a detailed exposition of known results on the compressed word problem, emphasizing efficient algorithms for the compressed word problem in various groups. The author presents the necessary background along with the most recent results on the compressed word problem to create a cohesive self-contained book accessible to computer scientists as well as mathematicians. Readers will quickly reach the frontier of current research which makes the book especially appealing for students looking for a currently active research topic at the intersection of group theory and computer science. The word problem introduced in 1910 by Max Dehn is one of the most important decision problems in group theory. For many groups, highly efficient algorithms for the word problem exist. In recent years, a new technique based on data compression for providing more efficient algorithms for word problems, has been developed, by representing long words over group generators in a compres...

  18. The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing.

    Science.gov (United States)

    Ma, Teng; Li, Hui; Yang, Hao; Lv, Xulin; Li, Peiyang; Liu, Tiejun; Yao, Dezhong; Xu, Peng

    2017-01-01

    Motion-onset visual evoked potentials (mVEP) can provide a softer stimulus with reduced fatigue, and have potential applications for brain-computer interface (BCI) systems. However, the mVEP waveform is heavily masked by strong background EEG activity, and an effective approach is needed to extract the corresponding mVEP features to perform task recognition for BCI control. In the current study, we combine deep learning with compressed sensing to mine discriminative mVEP information to improve mVEP BCI performance. The deep learning and compressed sensing approach can generate multi-modality features which effectively improve BCI performance, with an accuracy increase of approximately 3.5% over all 11 subjects, and it is more effective for those subjects with relatively poor performance when using conventional features. Compared with the conventional amplitude-based mVEP feature extraction approach, the deep learning and compressed sensing approach has a higher classification accuracy and is more effective for subjects with relatively poor performance. According to the results, the deep learning and compressed sensing approach is more effective for extracting mVEP features to construct the corresponding BCI system, and the proposed feature extraction framework is easy to extend to other types of BCIs, such as motor imagery (MI), steady-state visual evoked potential (SSVEP) and P300.

  19. High-quality compressive ghost imaging

    Science.gov (United States)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces undersampling noise and improves resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps, instead of solving a minimization problem directly. The simulation and experimental results show that our method achieves high ghost imaging quality in terms of PSNR and visual observation.
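
    For orientation, projected Landweber reconstruction alternates a gradient step on the data-fidelity term with a projection/denoising step. A minimal sketch under assumed ingredients (a random measurement matrix and soft-thresholding in place of the paper's guided filter):

        import numpy as np

        def projected_landweber(A, y, n_iter=300, denoise=lambda v: v):
            # x <- denoise(x + tau * A^T (y - A x)); the paper interleaves a
            # guided-filter denoising step where `denoise` is applied here.
            tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size ensuring convergence
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = denoise(x + tau * A.T @ (y - A @ x))
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((64, 128))          # under-determined measurements
        x_true = np.zeros(128); x_true[[3, 40, 90]] = [1.0, -2.0, 0.5]
        y = A @ x_true
        soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - 0.01, 0.0)
        x_hat = projected_landweber(A, y, denoise=soft)
        print(np.round(x_hat[[3, 40, 90]], 2))      # approximately recovers the sparse entries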

  20. Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Liantao Wu

    2015-08-01

    Full Text Available Reliable data transmission over lossy communication links is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) can be applied to facilitate efficient sparse signal transmission over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal is reconstructed from the lossy transmission results using a CS-based reconstruction method at the receiving end. The impact of packet length on transmission efficiency under different channel conditions is discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both the accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission via lossy links.
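
    The interleaving step mentioned above can be illustrated on its own: writing samples row-wise and reading them column-wise turns a burst of consecutive losses into scattered losses, which look like random sampling to the CS decoder. A toy sketch (array sizes and loss pattern are arbitrary choices, not from the paper):

        import numpy as np

        def interleave(samples, depth):
            # Write row-wise, read column-wise: a burst of consecutive losses in
            # the transmitted stream maps back to isolated, spread-out losses.
            pad = (-len(samples)) % depth
            block = np.concatenate([samples, np.zeros(pad)]).reshape(depth, -1)
            return block.T.reshape(-1), pad

        def deinterleave(stream, depth, pad):
            out = stream.reshape(-1, depth).T.reshape(-1)
            return out[:out.size - pad] if pad else out

        x = np.arange(20, dtype=float)
        tx, pad = interleave(x, depth=4)
        tx[5:9] = np.nan                     # a burst of four consecutive losses
        rx = deinterleave(tx, 4, pad)
        print(np.flatnonzero(np.isnan(rx)))  # losses now scattered: [ 2  6 11 16]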

  1. Admixing dredged marine clay with cement-bentonite for reduction of compressibility

    Science.gov (United States)

    Rahilman, Nur Nazihah Nur; Chan, Chee-Ming

    2017-11-01

    Cement-based solidification/stabilization is a method widely used for the treatment of dredged marine clay. The key objective of solidification/stabilization is to improve the engineering properties of the originally soft, weak material. Dredged materials are normally low in shear strength and bearing capacity while high in compressibility. In order to improve the material's properties for possible reuse, a study on the one-dimensional compressibility of lightly solidified dredged marine clay admixed with bentonite was conducted. Due to its viscous nature, particularly its swelling property, bentonite is a popular volumising agent for backfills. In the present study, a standard oedometer test was carried out to examine the compressibility of the treated samples. Complementary strength measurements were also conducted with a laboratory vane shear setup on both the untreated and treated dredged marine clay. The results showed that, at the same binder content, the addition of bentonite contributed significantly to a reduction of compressibility and a rise in undrained shear strength. These improved properties make the otherwise discarded dredged marine soils potentially reusable, for instance in reclamation works.

  2. The relationship between vickers microhardness and compressive strength of functional surface geopolymers

    Science.gov (United States)

    Subaer, Ekaputri, Januari Jaya; Fansuri, Hamzah; Abdullah, Mustafa Al Bakri

    2017-09-01

    An experimental study investigating the relationship between Vickers microhardness and compressive strength of geopolymers made from metakaolin has been conducted. Samples were prepared using metakaolin activated with a sodium silicate solution at different Si to Al and Na to Al ratios and cured at 70 °C for one hour. The resulting geopolymers were stored in open air for 28 days before any measurement. Bulk density and apparent porosity of the samples were measured using Archimedes' method. Vickers microhardness measurements were performed on a polished surface of the geopolymers with loads ranging from 0.3 to 1.0 kg. The topography of the indented samples was examined using scanning electron microscopy (SEM). The compressive strength of the resulting geopolymers was measured on cylindrical samples with a height-to-diameter ratio of 2:1. The results showed that the molar ratios of the geopolymer compositions play an important role in the magnitude of the bulk density, porosity and Vickers microhardness as well as the compressive strength. Porosity reduced the strength of the geopolymers exponentially. It was found that the relationship between Vickers microhardness and compressive strength was linear. At the request of all authors and with the approval of the proceedings editor, article 020188 titled, "The relationship between vickers microhardness and compressive strength of functional surface geopolymers," is being retracted from the public record due to the fact that it is a duplication of article 020170 published in the same volume.

  3. A Novel Range Compression Algorithm for Resolution Enhancement in GNSS-SARs

    Directory of Open Access Journals (Sweden)

    Yu Zheng

    2017-06-01

    Full Text Available In this paper, a novel range compression algorithm for enhancing the range resolution of a passive Global Navigation Satellite System-based Synthetic Aperture Radar (GNSS-SAR) is proposed. In the proposed algorithm, within each azimuth bin, range compression is first carried out by correlating the reflected GNSS intermediate frequency (IF) signal with a synchronized direct GNSS base-band signal in the range domain. Thereafter, spectrum equalization is applied to the compressed results to suppress side lobes and obtain the final range-compressed signal. Both theoretical analysis and simulation results demonstrate that significant range resolution improvement in GNSS-SAR images can be achieved by the proposed range compression algorithm, compared to the conventional range compression algorithm.
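
    The correlation step described here is a matched filter and is conveniently done in the frequency domain. A minimal sketch with a synthetic binary code standing in for the GNSS ranging signal (code length, delay and noise level are illustrative assumptions):

        import numpy as np

        def range_compress(reflected, reference):
            # Cross-correlate the reflected channel with the direct reference
            # signal via FFT (matched filtering), one azimuth bin at a time.
            n = len(reflected) + len(reference) - 1
            R = np.fft.fft(reflected, n)
            S = np.fft.fft(reference, n)
            return np.fft.ifft(R * np.conj(S))

        rng = np.random.default_rng(1)
        code = rng.choice([-1.0, 1.0], size=1023)   # stand-in for a GNSS ranging code
        delay = 200
        echo = np.roll(np.concatenate([code, np.zeros(500)]), delay) * 0.5
        echo += 0.1 * rng.standard_normal(echo.size)
        peak = np.abs(range_compress(echo, code)).argmax()
        print(peak)  # correlation peak at the simulated delay (200)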

  4. Multichannel compressive sensing MRI using noiselet encoding.

    Directory of Open Access Journals (Sweden)

    Kamlesh Pawar

    Full Text Available The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding.

  5. Optimization of compressive strength in admixture-reinforced cement-based grouts

    Directory of Open Access Journals (Sweden)

    Sahin Zaimoglu, A.

    2007-12-01

    Full Text Available The Taguchi method was used in this study to optimize the unconfined (7-, 14- and 28-day) compressive strength of cement-based grouts with bentonite, fly ash and silica fume admixtures. The experiments were designed using an L16 orthogonal array in which the three factors considered were bentonite (0%, 0.5%, 1.0% and 3%), fly ash (10%, 20%, 30% and 40%) and silica fume (0%, 5%, 10% and 20%) content. The experimental results, which were analyzed by ANOVA and the Taguchi method, showed that fly ash and silica fume content play a significant role in unconfined compressive strength. The optimum conditions were found to be: 0% bentonite, 10% fly ash, 20% silica fume and 28 days of curing time. The maximum unconfined compressive strength reached under the above optimum conditions was 17.1 MPa.

  6. Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)

    Science.gov (United States)

    Schmalz, Tyler; Ryan, Jack

    2011-01-01

    Automatic Ground Collision Avoidance Systems (Auto-GCAS) utilize Digital Terrain Elevation Data (DTED) stored onboard an aircraft to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two, as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique resulting in better accuracy. This, in turn, would minimize unnecessary activation of Auto-GCAS during flight as well as maximize its contribution to fighter safety.

  7. Comparison of JPEG and wavelet compression on intraoral digital radiographic images

    International Nuclear Information System (INIS)

    Kim, Eun Kyung

    2004-01-01

    The aim of this study was to determine the proper image compression method and ratio for intraoral digital radiographic images without image quality degradation, comparing discrete cosine transform (DCT)-based JPEG with the wavelet-based JPEG 2000 algorithm. Thirty extracted sound teeth and thirty extracted teeth with occlusal caries were used for this study. Twenty plaster blocks were made with three teeth each. They were radiographically exposed using CDR sensors (Schick Inc., Long Island, USA). Digital images were compressed to JPEG format using Adobe Photoshop v. 7.0 and to JPEG 2000 format using the Jasper program, with compression ratios of 5:1, 9:1, 14:1 and 28:1 each. To evaluate lesion detectability, receiver operating characteristic (ROC) analysis was performed by three oral and maxillofacial radiologists. To evaluate image quality, all the compressed images were assessed subjectively using 5 grades, in comparison to the original uncompressed images. Compressed images up to a compression ratio of 14:1 in JPEG and 28:1 in JPEG 2000 showed nearly the same lesion detectability as the original images. In the subjective assessment of image quality, images up to a compression ratio of 9:1 in JPEG and 14:1 in JPEG 2000 showed minute mean paired differences from the original images. The results showed that the clinically acceptable compression ratios were up to 9:1 for JPEG and 14:1 for JPEG 2000, and that the wavelet-based JPEG 2000 is a better compression method than DCT-based JPEG for intraoral digital radiographic images.

  8. Novel prediction- and subblock-based algorithm for fractal image compression

    International Nuclear Information System (INIS)

    Chung, K.-L.; Hsu, C.-H.

    2006-01-01

    Fractal encoding is the most time-consuming part of fractal image compression. In this paper, a novel two-phase prediction- and subblock-based fractal encoding algorithm is presented. Initially, the original gray image is partitioned into a set of variable-size blocks according to the S-tree- and interpolation-based decomposition principle. In the first phase, each variable-size range block tries to find the best-matched domain block based on the proposed prediction-based search strategy, which utilizes the relevant neighboring variable-size domain blocks. The first phase leads to a significant computation-saving effect. If the domain block found within the predicted search space is unacceptable, in the second phase, a subblock strategy is employed to partition the current variable-size range block into smaller blocks to improve the image quality. Experimental results show that our proposed prediction- and subblock-based fractal encoding algorithm outperforms the conventional full search algorithm and the recently published spatial-correlation-based algorithm by Truong et al. in terms of encoding time and image quality. In addition, the performance comparison among our proposed algorithm and two other algorithms, the no-search-based algorithm and the quadtree-based algorithm, is also investigated.

  9. Compressive strength and magnetic properties of calcium silicate-zirconia-iron (III) oxide composite cements

    Science.gov (United States)

    Ridzwan, Hendrie Johann Muhamad; Shamsudin, Roslinda; Ismail, Hamisah; Yusof, Mohd Reusmaazran; Hamid, Muhammad Azmi Abdul; Awang, Rozidawati Binti

    2018-04-01

    In this study, ZrO2 microparticles and γ-Fe2O3 nanoparticles were added to calcium silicate-based cements. The purpose of this experiment was to investigate the compressive strength and magnetic properties of the prepared composite cement. Calcium silicate (CAS) powder was prepared by a hydrothermal method. SiO2 and CaO, obtained from rice husk ash and limestone respectively, were autoclaved at 135 °C for 8 h and sintered at 950 °C to obtain CAS powder. The SiO2:CaO ratio was set at 45:55. CAS/ZrO2 samples were prepared with ZrO2 microparticle concentrations varying from 0 to 40 wt.%. The compressive strength values of the CAS/ZrO2 cements ranged from 1.44 to 2.44 MPa. CAS/ZrO2/γ-Fe2O3 samples with 40 wt.% ZrO2 were prepared with varying γ-Fe2O3 nanoparticle concentrations (1-5 wt.%). The addition of γ-Fe2O3 nanoparticles produced up to a twofold increase in the compressive strength of the cement. X-ray diffraction (XRD) results confirmed the formation of mixed phases in the produced composite cements. Vibrating sample magnetometer (VSM) analysis revealed ferromagnetic behaviour in the CAS/ZrO2/γ-Fe2O3 composite cements.

  10. The effect of the volume fraction and viscosity on the compression and tension behavior of the cobalt-ferrite magneto-rheological fluids

    Directory of Open Access Journals (Sweden)

    H. Shokrollahi

    2016-03-01

    Full Text Available The purpose of this work is to investigate the effects of the volume fraction and bimodal distribution of solid particles on the compression and tension behavior of Co-ferrite-based magneto-rheological fluids (MRFs) containing silicone oil as a carrier. Co-ferrite particles (CoFe2O4) of two different sizes were synthesized by the chemical co-precipitation method and mixed so as to prepare the bimodal MRF. X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), laser particle size analysis (LPSA) and vibrating sample magnetometry (VSM) were used to examine the structural and magnetic properties. The results indicated that increasing the volume fraction directly increases the compression and tension strengths of the fluids. In addition, the compression and tension strengths of the mixed MRF sample (1.274 and 0.647 MPa, respectively), containing both 60 nm and 550 nm particles, were higher than those of the MRF sample with the same volume fraction and a uniform particle size of 550 nm.

  11. Fracture Behaviours in Compression-loaded Triangular Corrugated Core Sandwich Panels

    Directory of Open Access Journals (Sweden)

    Zaid N.Z.M.

    2016-01-01

    Full Text Available The failure modes occurring in sandwich panels based on corrugated cores of aluminium alloy, carbon fibre-reinforced plastic (CFRP) and glass fibre-reinforced plastic (GFRP) are analysed in this work. The fracture behaviour of these sandwich panels under compressive stresses is determined through a series of uniform lateral compression tests performed on samples with different cell wall thicknesses. Compression tests on the corrugated-core sandwich panels were conducted using an Instron series 4505 testing machine. Post-failure examinations of the corrugated cores with different cell wall thicknesses were conducted using an optical microscope. Load-displacement graphs of the aluminium alloy, GFRP and CFRP specimens were plotted to show progressive damage development with five unit cells. Four modes of failure were described in the results: buckling, hinges, delamination and debonding. Each of these failure modes may dominate under a different cell wall thickness or loading condition, and they may act in combination. The results indicate that thicker composite corrugated-core panels can recover more stress and retain more stiffness. This analysis provides valuable insight into the mechanical behaviour of corrugated-core sandwich panels for use in lightweight engineering applications.

  12. Subband Coding Methods for Seismic Data Compression

    Science.gov (United States)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.

  13. A review on the recent development of solar absorption and vapour compression based hybrid air conditioning with low temperature storage

    Directory of Open Access Journals (Sweden)

    Noor D. N.

    2016-01-01

    Full Text Available Conventional air conditioners, i.e. vapour compression systems, are main contributors to energy consumption in modern buildings. Common environmental issues emanate from vapour compression systems, such as greenhouse gas emissions and heat wastage. These problems can be reduced by adding solar energy components to the vapour compression system. However, the intermittent input of daily solar radiation is the main issue of solar energy systems. This paper presents recent studies on hybrid air conditioning systems. In addition, the basic vapour compression system and the components involved in solar air conditioning systems are discussed. The introduction of low-temperature storage can be an attractive and economically improved solution, enabling different modes of operating strategies. Yet, very few studies have examined optimal operating strategies for the hybrid system. Finally, the findings of this review will help suggest optimizations of solar absorption and vapour compression based hybrid air conditioning systems for future work, considering both economic and environmental factors.

  14. Is breast compression associated with breast cancer detection and other early performance measures in a population-based breast cancer screening program?

    Science.gov (United States)

    Moshina, Nataliia; Sebuødegård, Sofie; Hofvind, Solveig

    2017-06-01

    We aimed to investigate early performance measures in a population-based breast cancer screening program, stratified by compression force and pressure at the time of the mammographic screening examination. Early performance measures included recall rate, rates of screen-detected and interval breast cancers, positive predictive value of recall (PPV), sensitivity, specificity, and histopathologic characteristics of screen-detected and interval breast cancers. Information on 261,641 mammographic examinations from 93,444 subsequently screened women was used for the analyses. The study period was 2007-2015. Compression force and pressure were categorized using tertiles as low, medium, or high. The χ² test, t tests, and tests for trend were used to examine differences in early performance measures across categories of compression force and pressure. We applied generalized estimating equations to identify the odds ratios (OR) of screen-detected or interval breast cancer associated with compression force and pressure, adjusting for fibroglandular and/or breast volume and age. The recall rate decreased, while PPV and specificity increased, with increasing compression force (significant tests for trend). The rates of screen-detected cancer, PPV, sensitivity, and specificity decreased with increasing compression pressure (significant tests for trend), and high compression pressure was associated with higher odds of interval breast cancer compared with low compression pressure (OR 1.89; 95% CI 1.43-2.48). High compression force and low compression pressure were associated with more favorable early performance measures in the screening program.

  15. Efficient traveltime compression for 3D prestack Kirchhoff migration

    KAUST Repository

    Alkhalifah, Tariq

    2010-12-13

    Kirchhoff 3D prestack migration, as part of its execution, usually requires repeated access to a large traveltime table database. Access to this database implies either a memory-intensive or an I/O-bound solution to the storage problem. Proper compression of the traveltime table allows efficient 3D prestack migration without relying on the usually slow access to the computer hard drive. Such compression also allows for faster access to desirable parts of the traveltime table. Compression is applied to the traveltime field for each source location on the surface on a regular grid, using 3D Chebyshev polynomial or cosine transforms of the traveltime field represented in spherical coordinates or the celerity domain. We obtain practical compression levels up to and exceeding 20 to 1. In fact, because of the smaller traveltime table, we obtain exceptional traveltime extraction speed during migration that exceeds conventional methods. Additional features of the compression include better interpolation of traveltime tables and more stable estimates of amplitudes from traveltime curvatures. Further compression is achieved using bit encoding, by representing compression parameter values with fewer bits.
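
    The compression principle, fitting a smooth traveltime field with a truncated Chebyshev expansion and storing only the coefficients, can be shown in one dimension (the paper compresses 3D fields in spherical or celerity coordinates; the grid size and truncation order below are arbitrary):

        import numpy as np

        # A smooth 1D "traveltime" profile sampled on 512 grid points.
        x = np.linspace(-1, 1, 512)
        t = np.sqrt(1.0 + 4.0 * x**2)          # smooth, as traveltime fields typically are

        coef = np.polynomial.chebyshev.chebfit(x, t, deg=24)  # keep 25 coefficients
        t_rec = np.polynomial.chebyshev.chebval(x, coef)

        print(512 / coef.size)                 # compression factor of roughly 20:1
        print(np.max(np.abs(t - t_rec)))       # small reconstruction error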

  16. High-speed and high-ratio referential genome compression.

    Science.gov (United States)

    Liu, Yuansheng; Peng, Hui; Wong, Limsoon; Li, Jinyan

    2017-11-01

    The rapidly increasing number of genomes generated by high-throughput sequencing platforms and assembly algorithms is accompanied by problems in data storage, compression and communication. Traditional compression algorithms are unable to meet the demand for high compression ratios due to the intrinsically challenging features of DNA sequences, such as small alphabet size, frequent repeats and palindromes. Reference-based lossless compression, by which only the differences between two similar genomes are stored, is a promising approach with high compression ratio. We present a high-performance referential genome compression algorithm named HiRGC. It is based on a 2-bit encoding scheme and an advanced greedy-matching search on a hash table. We compare the performance of HiRGC with four state-of-the-art compression methods on a benchmark dataset of eight human genomes. HiRGC compresses the roughly 21 gigabytes of each set of the seven target genomes into 96-260 megabytes, achieving compression ratios of 82 to 217. This performance is at least 1.9 times better than that of the best competing algorithm in its best case. Our compression speed is also at least 2.9 times faster. HiRGC is stable and robust in dealing with different reference genomes; in contrast, the competing methods' performance varies widely with the reference genome. More experiments on 100 human genomes from the 1000 Genomes Project and on genomes of several other species again demonstrate that HiRGC's performance is consistently excellent. The C++ and Java source code of our algorithm is freely available for academic and non-commercial use and can be downloaded from https://github.com/yuansliu/HiRGC.
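
    The 2-bit encoding scheme mentioned above maps the four bases to two bits each, quartering the raw size before any matching is done. A minimal sketch (the packing layout is an assumption for illustration, not taken from the HiRGC source):

        CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
        BASE = "ACGT"

        def pack(seq: str) -> bytes:
            # Pack four bases per byte, 2 bits each (any 1-3 base remainder
            # is ignored here for brevity).
            out = bytearray()
            for i in range(0, len(seq) - len(seq) % 4, 4):
                b = 0
                for ch in seq[i:i + 4]:
                    b = (b << 2) | CODE[ch]
                out.append(b)
            return bytes(out)

        def unpack(data: bytes) -> str:
            return "".join(BASE[(b >> s) & 3] for b in data for s in (6, 4, 2, 0))

        assert unpack(pack("ACGTTGCA")) == "ACGTTGCA"   # 8 bases -> 2 bytes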

  17. Time-resolved shock compression of porous rutile: Wave dispersion in porous solids

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, M.U.; Graham, R.A.; Holman, G.T.

    1993-08-01

    Rutile (TiO2) samples at 60% of solid density have been shock-loaded from 0.21 to 6.1 GPa with a sample thickness of 4 mm and studied with the PVDF piezoelectric polymer stress-rate gauge. The technique uses a copper capsule to contain the sample, which has PVDF gauge packages in direct contact with its front and rear surfaces. A precise measure is made of the compressive stress wave velocity through the sample, as well as the input and propagated shock stress. Initial density is known from sample preparation, and the amount of shock compression is calculated from the measurement of shock velocity and input stress. Shock states and re-shock states are measured. Observed data are consistent with previously published high-pressure data. It is observed that rutile has a "crush strength" near 6 GPa. Propagated stress-pulse rise times vary from 234 to 916 ns. Propagated stress-pulse rise times of shock-compressed HMX, 2Al + Fe2O3, 3Ni + Al, and 5Ti + 3Si are presented.

  18. Three-dimensional range data compression using computer graphics rendering pipeline.

    Science.gov (United States)

    Zhang, Song

    2012-06-20

    This paper presents the idea of naturally encoding three-dimensional (3D) range data into regular two-dimensional (2D) images utilizing the computer graphics rendering pipeline. The graphics pipeline provides a means to sample 3D geometry data into regular 2D images and to retrieve the depth information for each sampled pixel. The depth information for each pixel is then encoded into the red, green, and blue color channels of regular 2D images, which can further be compressed with existing 2D image compression techniques. By this means, 3D geometry data obtained by 3D range scanners can be instantaneously compressed into 2D images, providing a new way of storing 3D range data in 2D form. We present experimental results to verify the performance of the proposed technique.
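
    One naive way to encode depth into the three color channels is to quantize each depth value to 24 bits and split it across R, G and B; the paper's actual encoding differs (a plain bit split only survives lossless 2D compression, since neighboring depths should map to neighboring colors), so treat the following purely as an illustration of the idea:

        import numpy as np

        def depth_to_rgb(depth, z_min, z_max):
            # Quantize depth to 24 bits and split across three 8-bit channels.
            q = np.round((depth - z_min) / (z_max - z_min) * (2**24 - 1)).astype(np.uint32)
            r, g, b = (q >> 16) & 0xFF, (q >> 8) & 0xFF, q & 0xFF
            return np.stack([r, g, b], axis=-1).astype(np.uint8)

        def rgb_to_depth(rgb, z_min, z_max):
            q = ((rgb[..., 0].astype(np.uint32) << 16)
                 | (rgb[..., 1].astype(np.uint32) << 8)
                 | rgb[..., 2])
            return q / (2**24 - 1) * (z_max - z_min) + z_min

        z = np.random.default_rng(2).uniform(0.5, 2.0, size=(4, 4))
        img = depth_to_rgb(z, 0.5, 2.0)
        print(np.max(np.abs(rgb_to_depth(img, 0.5, 2.0) - z)))  # quantization error only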

  19. Lossy compression of TPC data and trajectory tracking efficiency for the ALICE experiment

    International Nuclear Information System (INIS)

    Nicolaucig, A.; Ivanov, M.; Mattavelli, M.

    2003-01-01

    In this paper a quasi-lossless algorithm for the on-line compression of the data generated by the Time Projection Chamber (TPC) detector of the ALICE experiment at CERN is described. The algorithm is based on a lossy source-code modeling technique, i.e. it is based on a source model which is lossy if samples of the TPC signal are considered one by one; conversely, the source model is lossless or quasi-lossless if certain physical quantities that are of main interest for the experiment are considered. These quantities are the area and the location of the center of mass of each TPC signal pulse, representing the pulse charge and the time localization of the pulse. To evaluate the consequences of the error introduced by the lossy compression process, the results of the trajectory tracking algorithms that process the data off-line after the experiment are analyzed, in particular their sensitivity to the noise introduced by the compression. Two different versions of these off-line algorithms, performing cluster finding and particle tracking, are described, and the results on how they are affected by the lossy compression are reported. Entropy coding can be applied to the set of events defined by the source model to reduce the bit rate to the corresponding source entropy. Using TPC data simulated according to the expected ALICE TPC performance, the compression algorithm achieves a data reduction to between 34.2% and 23.7% of the original data rate, depending on the desired precision for the pulse center of mass. The number of operations per input symbol required to implement the algorithm is relatively low, so that a real-time implementation of the compression process embedded in the TPC data acquisition chain using low-cost integrated electronics is a realistic option to effectively reduce the data storage cost of the ALICE experiment.

  20. Linear chemically sensitive electron tomography using DualEELS and dictionary-based compressed sensing

    Energy Technology Data Exchange (ETDEWEB)

    AlAfeef, Ala, E-mail: a.al-afeef.1@research.gla.ac.uk [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); School of Computing Science, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Bobynko, Joanna [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Cockshott, W. Paul. [School of Computing Science, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Craven, Alan J. [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom); Zuazo, Ian; Barges, Patrick [ArcelorMittal Maizières Research, Maizières-lès-Metz 57283 (France); MacLaren, Ian, E-mail: ian.maclaren@glasgow.ac.uk [SUPA School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom)

    2016-11-15

    We have investigated the use of DualEELS in elementally sensitive tilt series tomography in the scanning transmission electron microscope. A procedure is implemented using deconvolution to remove the effects of multiple scattering, followed by normalisation by the zero loss peak intensity. This is performed to produce a signal that is linearly dependent on the projected density of the element in each pixel. This method is compared with one that does not include deconvolution (although normalisation by the zero loss peak intensity is still performed). Additionally, we compare the 3D reconstruction using a new compressed sensing algorithm, DLET, with the well-established SIRT algorithm. VC precipitates, which are extracted from a steel on a carbon replica, are used in this study. It is found that the use of this linear signal results in a very even density throughout the precipitates. However, when deconvolution is omitted, a slight density reduction is observed in the cores of the precipitates (a so-called cupping artefact). Additionally, it is clearly demonstrated that the 3D morphology is much better reproduced using the DLET algorithm, with very little elongation in the missing wedge direction. It is therefore concluded that reliable elementally sensitive tilt tomography using EELS requires the appropriate use of DualEELS together with a suitable reconstruction algorithm, such as the compressed sensing based reconstruction algorithm used here, to make the best use of the limited data volume and signal to noise inherent in core-loss EELS. - Highlights: • DualEELS is essential for chemically sensitive electron tomography using EELS. • A new compressed sensing based algorithm (DLET) gives high fidelity reconstruction. • This combination of DualEELS and DLET will give reliable results from few projections.

  1. Mathematical transforms and image compression: A review

    Directory of Open Access Journals (Sweden)

    Satish K. Singh

    2010-07-01

    Full Text Available It is well known that images, often used in a variety of computer and other scientific and engineering applications, are difficult to store and transmit due to their size. One possible solution to this problem is to use an efficient digital image compression technique, where an image is viewed as a matrix and operations are performed on the matrix. All contemporary digital image compression systems use various mathematical transforms for compression. The compression performance is closely related to the performance of these mathematical transforms in terms of energy compaction and spatial frequency isolation, achieved by exploiting inter-pixel redundancies present in the image data. In this paper, a comprehensive literature survey has been carried out and the pros and cons of various transform-based image compression models are discussed.
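
    Energy compaction, the property by which the survey evaluates transforms, is easy to demonstrate: for a smooth signal, a small fraction of DCT coefficients carries nearly all the energy. A minimal sketch with a 1D signal standing in for image rows (the signal model and retained-coefficient count are arbitrary):

        import numpy as np
        from scipy.fft import dct, idct

        rng = np.random.default_rng(3)
        x = np.cumsum(rng.standard_normal(256))    # smooth-ish, like image rows

        c = dct(x, norm="ortho")
        keep = 32                                  # retain 1/8 of the coefficients
        small = np.argsort(np.abs(c))[:-keep]      # indices of the 224 smallest
        c_trunc = c.copy()
        c_trunc[small] = 0.0
        x_rec = idct(c_trunc, norm="ortho")

        print((c_trunc**2).sum() / (c**2).sum())   # fraction of energy kept (close to 1)
        print(np.abs(x - x_rec).max())             # worst-case reconstruction error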

  2. Compressive Load Resistance Characteristics of Rice Grain

    OpenAIRE

    Sumpun Chaitep; Chaiy R. Metha Pathawee; Pipatpong Watanawanyoo

    2008-01-01

    An investigation was made to observe the compressive load properties of rice grain, both rough rice and brown rice. Six rice varieties (indica and japonica) were examined at a moisture content of 10-12%. Compressive loads referenced to a principal axis normal to the thickness of the grain were applied at selected inclination angles of 0°, 15°, 30°, 45°, 60° and 70°. The results showed the compressive load resistance of rice grain based on its characteristic of yield s...

  3. Method and algorithm for efficient calibration of compressive hyperspectral imaging system based on a liquid crystal retarder

    Science.gov (United States)

    Shecter, Liat; Oiknine, Yaniv; August, Isaac; Stern, Adrian

    2017-09-01

    Recently we presented a Compressive Sensing Miniature Ultra-spectral Imaging System (CS-MUSI). This system consists of a single liquid crystal (LC) phase retarder as a spectral modulator and a grayscale sensor array to capture a multiplexed signal of the imaged scene. By designing the LC spectral modulator in compliance with compressive sensing (CS) guidelines and applying appropriate algorithms, we demonstrated reconstruction of spectral (hyper/ultra) datacubes from an order of magnitude fewer samples than taken by conventional sensors. The LC modulator is designed to have an effective width of a few tens of micrometers, and is therefore prone to imperfections and spatial nonuniformity. In this work, we study this nonuniformity and present a mathematical algorithm that allows the inference of the spectral transmission over the entire cell area from only a few calibration measurements.

  4. Multi-objective optimization and exergoeconomic analysis of a combined cooling, heating and power based compressed air energy storage system

    International Nuclear Information System (INIS)

    Yao, Erren; Wang, Huanran; Wang, Ligang; Xi, Guang; Maréchal, François

    2017-01-01

    Highlights: • A novel tri-generation based compressed air energy storage system. • Trade-off between efficiency and cost to highlight the best compromise solution. • Components with largest irreversibility and potential improvements highlighted. - Abstract: Compressed air energy storage technologies can improve the supply capacity and stability of the electricity grid, particularly when fluctuating renewable energies are massively connected, while incorporating combined cooling, heating and power systems into compressed air energy storage can achieve stable operation as well as efficient energy utilization. In this paper, a novel combined cooling, heating and power based compressed air energy storage system is proposed. The system combines a gas engine, supplemental heat exchangers and an ammonia-water absorption refrigeration system. The design trade-off between the thermodynamic and economic objectives, i.e., the overall exergy efficiency and the total specific cost of product, is investigated by an evolutionary multi-objective algorithm for the proposed combined system. It is found that, with an increase in the exergy efficiency, the total product unit cost is little affected in the beginning, but rises substantially afterwards. The best trade-off solution is selected, with an overall exergy efficiency of 53.04% and a total product unit cost of 20.54 cent/kWh. The variation of the decision variables with the exergy efficiency indicates that the compressor, the turbine and the heat exchanger preheating the turbine inlet air are the key pieces of equipment for cost-effectively pursuing a higher exergy efficiency. An exergoeconomic analysis also reveals that, for the best trade-off solution, the investment costs of the compressor and the two heat exchangers recovering compression heat and heating up compressed air for expansion should be reduced (particularly the latter), while the thermodynamic performance of the gas engine needs to be improved.

  5. Measurement and Improvement the Quality of the Compressive Strength of Product Concrete

    Directory of Open Access Journals (Sweden)

    Zohair Hassan Abdullah

    2018-01-01

    Full Text Available The research deals with the technology of manufacturing concrete cubes according to the Iraqi design specification for concrete grade C20 (No. 52 of 1984). The samples were cubes with dimensions of 150 × 150 × 150 mm, and the concrete mixing proportion was 1:2:4, cast on the casting floor. To achieve the required degree of confidence in the concrete resistance, the compressive strength of 40 concrete cube samples of age 28 days, all made from the same concrete mix, was tested in the laboratories of the Civil Department, Technical Institute of Babylon. These samples were classified according to the acceptance tests adopted in the implementation of investment projects in the construction sector. The research aims, first, to measure the compressive strength of the concrete cubes, because a decrease or increase of the compressive strength relative to the design specification contributes to the failure of investment projects in the construction sector, and units outside the specification are classified as damaged. Second, it aims to study the improvement of the quality of the compressive strength of the concrete cubes. The results show that the proportion of damaged cubes is 0.00685, that the compressive strength achieved a confidence level of 99.5%, and that the concrete cubes were produced within the acceptable quality level (3 sigma). The quality of the compressive strength was improved to a good level using advanced sigma levels. DOI: http://dx.doi.org/10.25130/tjes.24.2017.20

  6. Technique of computerized processing of data obtained from gamma-spectrometer based on compressed xenon

    International Nuclear Information System (INIS)

    Vlasik, K.F.; Grachev, V.M.; Dmitrenko, V.V.; Sokolov, D.V.; Ulin, S.E.; Uteshev, Z.M.

    2000-01-01

    The paper describes an algorithm for detecting and identifying radionuclides on the basis of γ-spectra obtained with a compressed-xenon-based γ-spectrometer. The algorithm is based on comparison of the measured γ-spectra with tabulated radionuclide data, and comparison criteria were formulated. A software package was developed that implements the algorithm and supports the complete γ-spectra processing workflow. The algorithm was evaluated using real spectra, and its applicability and efficiency are demonstrated [ru]

  7. Dual compression is not an uncommon type of iliac vein compression syndrome.

    Science.gov (United States)

    Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu

    2017-09-01

    Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We describe an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used for evaluating the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the maximum compression point, pressure gradient across the stenosis, or the percentage of compression level. On CT and venography, sole compression commonly presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented as one of two types: a lengthy stenosis along the upper side of the LCIV, or a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression appeared significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can thus present with dual compression; this type of compression has typical manifestations on late venography and CT.

  8. Schwarz-based algorithms for compressible flows

    Energy Technology Data Exchange (ETDEWEB)

    Tidriri, M.D. [ICASE, Hampton, VA (United States)

    1996-12-31

    To compute steady compressible flows one often uses an implicit discretization approach, which leads to a large sparse linear system that must be solved at each time step. In the derivation of this system one often uses a defect-correction procedure, in which the left-hand side of the system is discretized with a lower-order approximation than that used for the right-hand side. This is due to storage considerations and computational complexity, and also to the fact that the resulting lower-order matrix is better conditioned than the higher-order matrix. The resulting schemes are only moderately implicit. In the case of structured, body-fitted grids, the linear system can easily be solved using approximate factorization (AF), which is among the most widely used methods for such grids. However, for unstructured grids, such techniques are no longer valid, and the system is solved using direct or iterative techniques. Because of the prohibitive computational costs and large memory requirements for the solution of compressible flows, iterative methods are preferred. In these defect-correction methods, which are implemented in most CFD computer codes, the mismatch between the right- and left-hand-side operators, together with explicit treatment of the boundary conditions, leads to a severely limited CFL number, which results in slow convergence to steady-state aerodynamic solutions. Many authors have tried to replace explicit boundary conditions with implicit ones. Although they clearly demonstrate that high CFL numbers are possible, the reduction in CPU time is not clear cut.
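
    The defect-correction iteration described above solves the accurate (higher-order) system by repeatedly inverting the cheaper, better-conditioned lower-order operator. A minimal sketch with small dense stand-ins for the two discretizations (matrix sizes and the perturbation magnitude are arbitrary assumptions):

        import numpy as np

        rng = np.random.default_rng(5)
        n = 50
        # Lower-order operator: tridiagonal and well conditioned (the implicit LHS).
        A_low = (np.diag(np.full(n, 4.0))
                 + np.diag(np.full(n - 1, -1.0), 1)
                 + np.diag(np.full(n - 1, -1.0), -1))
        # Higher-order operator: the accurate discretization used on the RHS.
        A_high = A_low + 0.05 * rng.standard_normal((n, n))

        b = rng.standard_normal(n)
        x = np.zeros(n)
        for _ in range(30):
            defect = b - A_high @ x                 # residual of the accurate operator
            x = x + np.linalg.solve(A_low, defect)  # correct with the cheap operator
        print(np.linalg.norm(b - A_high @ x))       # converges to the high-order solution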

  9. Compression module for the BCM1F microTCA raw data readout

    CERN Document Server

    Dostanic, Milica

    2017-01-01

    BCM1F is a diamond-based detector and one of the luminometers and background monitors operated by the BRIL group as part of the CMS experiment. BCM1F's front-end produces analog signals, which are digitized in a new microTCA back-end. An FPGA in the back-end takes care of signal processing and stores raw data. The raw data readout has been improved by implementing a data compression module in the firmware, which allows larger amounts of data to be stored in short time intervals. The module has been implemented in VHDL, using a zero-suppression algorithm: only data above a defined threshold are stored into memory, while samples around the baseline are discarded. Thanks to metadata describing the suppressed data, the shape of the input signals and the time information are preserved. Tests with simulations and a pulse generator showed good results and proved that the module can achieve a large compression factor.
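
    The zero-suppression idea is compact enough to sketch: store only samples above threshold together with their indices as metadata, so pulse shape and timing survive while the baseline is dropped (the threshold and record layout below are illustrative; the production module is VHDL on the FPGA):

        import numpy as np

        def zero_suppress(samples, threshold):
            # Keep only samples above threshold, with their indices as metadata
            # so pulse shape and timing can be reconstructed later.
            mask = samples > threshold
            return np.flatnonzero(mask), samples[mask]

        def expand(indices, values, length, baseline=0.0):
            out = np.full(length, baseline)
            out[indices] = values
            return out

        raw = np.array([0.1, 0.0, 0.2, 5.1, 7.3, 4.2, 0.1, 0.0, 0.3, 6.0])
        idx, val = zero_suppress(raw, threshold=1.0)
        print(idx, val)                     # only the two pulses survive
        print(expand(idx, val, raw.size))   # timing preserved, baseline discarded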

  10. Effect of Kollidon VA®64 particle size and morphology as directly compressible excipient on tablet compression properties.

    Science.gov (United States)

    Chaudhary, R S; Patel, C; Sevak, V; Chan, M

    2018-01-01

    The study evaluates the use of Kollidon VA®64, and of a combination of Kollidon VA®64 with Kollidon VA®64 Fine, as excipients in a direct compression tablet process. The combination of the two grades of material is evaluated for capping, lamination and excessive friability. The interparticulate void space is higher for this excipient due to the hollow structure of the Kollidon VA®64 particles; during tablet compression, air remains trapped in the blend, leading to poor compression and compromised physical properties of the tablets. The composition of Kollidon VA®64 and Kollidon VA®64 Fine is evaluated by design of experiments (DoE). Scanning electron microscopy (SEM) of the two grades of Kollidon VA®64 shows morphological differences between the coarse and fine grades. The tablet compression process is evaluated with a mix consisting entirely of Kollidon VA®64 and with two mixes containing Kollidon VA®64 and Kollidon VA®64 Fine in ratios of 77:23 and 65:35. Statistical modeling of the results from the DoE trials identified the optimum composition for direct tablet compression as a 77:23 combination of Kollidon VA®64 and Kollidon VA®64 Fine. Compressed with the parameters predicted by the statistical model (main compression force between 5 and 15 kN, pre-compression force between 2 and 3 kN, feeder speed fixed at 25 rpm and compression speed in the range of 45-49 rpm), this combination produced tablets with hardness between 19 and 21 kp and no friability, capping, or lamination issues.

  11. Image acquisition system using on sensor compressed sampling technique

    Science.gov (United States)

    Gupta, Pravir Singh; Choi, Gwan Seong

    2018-01-01

    Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.

  12. A Novel Object Tracking Algorithm Based on Compressed Sensing and Entropy of Information

    Directory of Open Access Journals (Sweden)

    Ding Ma

    2015-01-01

    Full Text Available Object tracking has always been a hot research topic in the field of computer vision; its purpose is to track objects with specific characteristics or representations and to estimate information about the objects, such as their locations, sizes, and rotation angles, in the current frame. Object tracking in complex scenes usually encounters various challenges, such as location change, dimension change, illumination change, perception change, and occlusion. This paper proposes a novel object tracking algorithm based on compressed sensing and information entropy to address these challenges. First, objects are characterized by Haar (Haar-like) and ORB features. Second, the dimensions of the computation space of the Haar and ORB features are effectively reduced through compressed sensing. Then the above-mentioned features are fused based on information entropy. Finally, in the particle filter framework, the object location is obtained by selecting candidate object locations in the current frame from the local context neighboring the optimal locations in the last frame. Our extensive experimental results demonstrate that this method effectively addresses the challenges of perception change, illumination change, and large-area occlusion, achieving better performance than existing approaches such as MIL and CT.
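
    The compressed-sensing dimensionality reduction used for such high-dimensional Haar-like features amounts to multiplying the feature vector by a sparse random projection matrix. A minimal sketch using an Achlioptas-style very sparse matrix (the dimensions and sparsity are arbitrary choices, not from the paper):

        import numpy as np

        rng = np.random.default_rng(4)

        def sparse_projection(n_low, n_high, s=3):
            # Very sparse random matrix: entries in {-1, 0, +1}, nonzero with
            # probability 1/s, scaled so distances are preserved with high
            # probability (Johnson-Lindenstrauss-style embedding).
            P = rng.choice([-1.0, 0.0, 1.0], size=(n_low, n_high),
                           p=[0.5 / s, 1 - 1.0 / s, 0.5 / s])
            return P * np.sqrt(s / n_low)

        features = rng.standard_normal(10_000)   # high-dimensional Haar-like features
        P = sparse_projection(50, 10_000)
        compressed = P @ features                # 50-dimensional compressed feature
        print(compressed.shape)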

  13. Experimental investigation of dynamic compression and spallation of Cerium at pressures up to 6 GPa

    Science.gov (United States)

    Zubareva, A. N.; Kolesnikov, S. A.; Utkin, A. V.

    2014-05-01

    In this study, experiments on the one-dimensional dynamic compression of cerium (Ce) samples to pressures of 0.5 to 6 GPa were conducted using various types of explosively driven generators. A VISAR laser velocimeter was used to obtain Ce free-surface velocity profiles. An isentropic compression wave was registered for the γ-phase of Ce at pressures lower than 0.76 GPa, which corresponds to the γ-α phase transition pressure in Ce. Shock rarefaction waves were also registered in several experiments. Both observations are the result of the anomalous compressibility of the γ-phase of Ce. On the basis of our experimental results, the compression isentrope of the Ce γ-phase was constructed. Its comparison with volumetric compression curves allowed the magnitude of the shear stress under dynamic compression conditions to be estimated for Ce. Spall strength measurements were also conducted for several samples; they showed a strong dependence of the spall strength of Ce on the strain rate.

  14. Experimental investigation of dynamic compression and spallation of cerium at pressures up to 6 GPa

    International Nuclear Information System (INIS)

    Zubareva, A N; Kolesnikov, S A; Utkin, A V

    2014-01-01

    In this study, experiments on the one-dimensional dynamic compression of cerium (Ce) samples to pressures of 0.5 to 6 GPa were conducted using various types of explosively driven generators. A VISAR laser velocimeter was used to obtain Ce free-surface velocity profiles. An isentropic compression wave was registered for the γ-phase of Ce at pressures lower than 0.76 GPa, which corresponds to the γ-α phase transition pressure in Ce. Shock rarefaction waves were also registered in several experiments. Both observations are the result of the anomalous compressibility of the γ-phase of Ce. On the basis of our experimental results, the compression isentrope of the Ce γ-phase was constructed. Its comparison with volumetric compression curves allowed the magnitude of the shear stress under dynamic compression conditions to be estimated for Ce. Spall strength measurements were also conducted for several samples; they showed a strong dependence of the spall strength of Ce on the strain rate.

  15. Iterative dictionary construction for compression of large DNA data sets.

    Science.gov (United States)

    Kuruppu, Shanika; Beresford-Smith, Bryan; Conway, Thomas; Zobel, Justin

    2012-01-01

    Genomic repositories increasingly include individual as well as reference sequences, which tend to share long identical and near-identical strings of nucleotides. However, the sequential processing used by most compression algorithms, and the volumes of data involved, mean that these long-range repetitions are not detected. An order-insensitive, disk-based dictionary construction method can detect this repeated content and use it to compress collections of sequences. We explore a dictionary construction method that improves repeat identification in large DNA data sets. COMRAD, our adaptation of an existing disk-based method, identifies exact repeated content in collections of sequences with similarities within and across the set of input sequences. COMRAD compresses the data over multiple passes, which is an expensive process, but one that allows it to compress large data sets within reasonable time and space. COMRAD allows random access to individual sequences and subsequences without decompressing the whole data set. COMRAD has no competitor in terms of the size of data sets that it can compress (extending to many hundreds of gigabytes) and, even for smaller data sets, the results are competitive compared to alternatives; as an example, 39 S. cerevisiae genomes compressed to 0.25 bits per base.
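
    Disk-based, multi-pass dictionary construction is beyond a snippet, but the core idea, counting content repeated within and across sequences to seed a dictionary, can be shown in miniature (the k-mer length, threshold and corpus are toy choices):

        from collections import Counter

        def repeated_kmers(sequences, k=8, min_count=2):
            # Count k-mers across the whole collection; substrings repeated
            # within or across sequences become dictionary candidates.
            counts = Counter()
            for seq in sequences:
                for i in range(len(seq) - k + 1):
                    counts[seq[i:i + k]] += 1
            return {kmer: c for kmer, c in counts.items() if c >= min_count}

        corpus = ["ACGTACGTGGTTACGTACGT", "TTACGTACGTCC"]
        print(repeated_kmers(corpus))   # repeats shared within and across sequences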

  16. Effects of Instantaneous Multiband Dynamic Compression on Speech Intelligibility

    Directory of Open Access Journals (Sweden)

    Herzke Tobias

    2005-01-01

    Full Text Available The recruitment phenomenon, that is, the reduced dynamic range between threshold and uncomfortable level, is attributed to the loss of instantaneous dynamic compression on the basilar membrane. Despite this, hearing aids commonly use slow-acting dynamic compression for its compensation, because this was found to be the most successful strategy in terms of speech quality and intelligibility rehabilitation. Former attempts to use fast-acting compression gave ambiguous results, raising the question as to whether auditory-based recruitment compensation by instantaneous compression is in principle applicable in hearing aids. This study thus investigates instantaneous multiband dynamic compression based on an auditory filterbank. Instantaneous envelope compression is performed in each frequency band of a gammatone filterbank, which provides a combination of time and frequency resolution comparable to the normal healthy cochlea. The gain characteristics used for dynamic compression are deduced from categorical loudness scaling. In speech intelligibility tests, the instantaneous dynamic compression scheme was compared against a linear amplification scheme, which used the same filterbank for frequency analysis but employed constant gain factors that restored the sound level for medium perceived loudness in each frequency band. In subjective comparisons, five of nine subjects preferred the linear amplification scheme and would not accept instantaneous dynamic compression in hearing aids. Four of nine subjects did not perceive any quality differences. A sentence intelligibility test in noise (Oldenburg sentence test) showed little to no negative effects of the instantaneous dynamic compression compared to linear amplification. A word intelligibility test in quiet (one-syllable rhyme test) showed that the subjects benefit from the larger amplification at low levels provided by instantaneous dynamic compression. Further analysis showed that the increase...

  17. Real-time lossless data compression techniques for long-pulse operation

    International Nuclear Information System (INIS)

    Jesus Vega, J.; Sanchez, E.; Portas, A.; Pereira, A.; Ruiz, M.

    2006-01-01

    Data logging and data distribution will be two main tasks connected with data handling in ITER. Data logging refers to the recovery and ultimate storage of all data, independent of the data source. Control and physics data distribution relates, on the one hand, to on-line data broadcasting for immediate data availability for both analysis and visualization; on the other hand, delayed analyses require off-line data access. Due to the large data volume expected, data compression will be mandatory in order to save storage and bandwidth. On-line data distribution in a long-pulse environment requires a deterministic approach to ensure a proper response time for data availability. However, an essential feature for all the above purposes is to apply compression techniques that ensure the recovery of the initial signals without spectral distortion when compacted data are expanded (lossless techniques). Delta compression methods are independent of the analogue characteristics of waveforms, and a variety of implementations have been applied to the databases of several fusion devices, such as Alcator, JET and TJ-II among others. Delta compression techniques are carried out in a two-step algorithm. The first step consists of a delta calculation, i.e. the computation of the differences between the digital codes of adjacent signal samples. The resultant deltas are then encoded according to constant- or variable-length bit allocation. Several encoding forms can be considered for the second step, and they have to satisfy a prefix code property. However, in order to meet the requirement of on-line data distribution, the encoding forms have to be defined prior to data capture. This article reviews different lossless data compression techniques based on delta compression. In addition, the concept of cyclic delta transformation is introduced. Furthermore, comparative results concerning compression rates on different
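
    The two-step structure is easy to make concrete. In the Python sketch below, step one computes deltas between adjacent digital codes, and step two applies a zigzag-plus-base-128 varint code, one of many variable-length encodings that satisfy the prefix code property and can be fixed before data capture (the encodings actually deployed at Alcator, JET or TJ-II may differ).

    def delta_encode(samples):
        """Step 1: differences between the digital codes of adjacent samples."""
        deltas, prev = [], 0
        for s in samples:
            deltas.append(s - prev)
            prev = s
        return deltas

    def varint_encode(deltas):
        """Step 2: variable-length bit allocation via zigzag + base-128 varint."""
        out = bytearray()
        for d in deltas:
            u = (d << 1) ^ (d >> 31)          # zigzag map (assumes 32-bit codes)
            while True:
                byte = u & 0x7F
                u >>= 7
                if u:
                    out.append(byte | 0x80)   # continuation bit: more bytes follow
                else:
                    out.append(byte)
                    break
        return bytes(out)

    Both steps are exactly invertible, so the original signal is recovered without any spectral distortion.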

  18. Research into material behaviour of the polymeric samples obtained after 3D-printing and subjected to compression test

    Science.gov (United States)

    Petrov, Mikhail A.; Kosatchyov, Nikolay V.; Petrov, Pavel A.

    2016-10-01

    The paper presents the results of a study investigating the influence of the filling grade (material density) on the force characteristic during uniaxial compression tests of cylindrical polymer specimens produced by FDM-based additive manufacturing. The authors show that increasing the filling grade increases the deformation forces. However, the dependency is not linear and is characterized by a soft-elastic model of material behaviour, typical for polymers with a partly crystallized structure.

  19. NIR hyperspectral compressive imager based on a modified Fabry–Perot resonator

    Science.gov (United States)

    Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Stern, Adrian

    2018-04-01

    The acquisition of hyperspectral (HS) image datacubes with available 2D sensor arrays involves a time-consuming scanning process. In the last decade, several compressive sensing (CS) techniques were proposed to reduce the HS acquisition time. In this paper, we present a method for near-infrared (NIR) HS imaging which relies on our rapid CS resonator spectroscopy technique. Within the framework of CS, and by using a modified Fabry–Perot resonator, a sequence of spectrally modulated images is used to recover NIR HS datacubes. Owing to the innovative CS design, we demonstrate the ability to reconstruct NIR HS images with hundreds of spectral bands from an order of magnitude fewer measurements, i.e. with a compression ratio of about 10:1. This high compression ratio, together with the high optical throughput of the system, facilitates fast acquisition of large HS datacubes.

  20. Compressing a spinodal surface at fixed area: bijels in a centrifuge.

    Science.gov (United States)

    Rumble, Katherine A; Thijssen, Job H J; Schofield, Andrew B; Clegg, Paul S

    2016-05-11

    Bicontinuous interfacially jammed emulsion gels (bijels) are solid-stabilised emulsions with two inter-penetrating continuous phases. Employing the method of centrifugal compression we find that macroscopically the bijel yields at relatively low angular acceleration. Both continuous phases escape from the top of the structure, making any compression immediately irreversible. Microscopically, the bijel becomes anisotropic with the domains aligned perpendicular to the compression direction which inhibits further liquid expulsion; this contrasts strongly with the sedimentation behaviour of colloidal gels. The original structure can, however, be preserved close to the top of the sample and thus the change to an anisotropic structure suggests internal yielding. Any air bubbles trapped in the bijel are found to aid compression by forming channels aligned parallel to the compression direction which provide a route for liquid to escape.

  1. Multiple Description Coding with Feedback Based Network Compression

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Østergaard, Jan; Popovski, Petar

    2010-01-01

    and an intermediate node, respectively. A trade-off exists between reducing the delay of the feedback by adapting in the vicinity of the receiver and increasing the gain from compression by adapting close to the source. The analysis shows that adaptation in the network provides a better trade-off than adaptation...

  2. Observation of a New High-Pressure Solid Phase in Dynamically Compressed Aluminum

    Science.gov (United States)

    Polsin, D. N.

    2017-10-01

    Aluminum is ideal for testing theoretical first-principles calculations because of the relative simplicity of its atomic structure. Density functional theory (DFT) calculations predict that Al transforms from an ambient-pressure, face-centered-cubic (fcc) crystal to the hexagonal close-packed (hcp) and body-centered-cubic (bcc) structures as it is compressed. Laser-driven experiments performed at the University of Rochester's Laboratory for Laser Energetics and the National Ignition Facility (NIF) ramp-compressed Al samples to pressures up to 540 GPa without melting. Nanosecond in-situ x-ray diffraction was used to directly measure the crystal structure at pressures where the solid-solid phase transformations of Al are predicted to occur. Laser velocimetry provided the pressure in the Al. Our results show clear evidence of the fcc-hcp and hcp-bcc transformations at 216 +/- 9 GPa and 321 +/- 12 GPa, respectively. This is the first experimental in-situ observation of the bcc phase in compressed Al and a confirmation of the fcc-hcp transition previously observed under static compression at 217 GPa. The observations indicate these solid-solid phase transitions occur on time scales of the order of tens of nanoseconds. In the fcc-hcp transition we find the original texture of the sample is preserved; however, the hcp-bcc transition diminishes that texture, producing a structure that is more polycrystalline. The importance of this dynamic is discussed. The NIF results are the first demonstration of x-ray diffraction measurements at two different pressures in a single laser shot. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  3. Theoretical models for describing longitudinal bunch compression in the neutralized drift compression experiment

    Directory of Open Access Journals (Sweden)

    Adam B. Sefkow

    2006-09-01

    Full Text Available Heavy ion drivers for warm dense matter and heavy ion fusion applications use intense charge bunches which must undergo transverse and longitudinal compression in order to meet the requisite high current densities and short pulse durations desired at the target. The neutralized drift compression experiment (NDCX at the Lawrence Berkeley National Laboratory is used to study the longitudinal neutralized drift compression of a space-charge-dominated ion beam, which occurs due to an imposed longitudinal velocity tilt and subsequent neutralization of the beam’s space charge by background plasma. Reduced theoretical models have been used in order to describe the realistic propagation of an intense charge bunch through the NDCX device. A warm-fluid model is presented as a tractable computational tool for investigating the nonideal effects associated with the experimental acceleration gap geometry and voltage waveform of the induction module, which acts as a means to pulse shape both the velocity and line density profiles. Self-similar drift compression solutions can be realized in order to transversely focus the entire charge bunch to the same focal plane in upcoming simultaneous transverse and longitudinal focusing experiments. A kinetic formalism based on the Vlasov equation has been employed in order to show that the peaks in the experimental current profiles are a result of the fact that only the central portion of the beam contributes effectively to the main compressed pulse. Significant portions of the charge bunch reside in the nonlinearly compressing part of the ion beam because of deviations between the experimental and ideal velocity tilts. Those regions form a pedestal of current around the central peak, thereby decreasing the amount of achievable longitudinal compression and increasing the pulse durations achieved at the focal plane. A hybrid fluid-Vlasov model which retains the advantages of both the fluid and kinetic approaches has been

  4. Strength properties and structure of a submicrocrystalline Al-Mg-Mn alloy under shock compression

    Science.gov (United States)

    Petrova, A. N.; Brodova, I. G.; Razorenov, S. V.

    2017-06-01

    The results of studying the strength of a submicrocrystalline aluminum A5083 alloy (chemical composition 4.4Mg-0.6Mn-0.11Si-0.23Fe-0.03Cr-0.02Cu-0.06Ti wt % and Al base) under shock-wave compression are presented. The submicrocrystalline structure of the alloy was produced by dynamic channel-angular pressing at a strain rate of 10^4 s^-1. The average size of crystallites in the alloy was 180-460 nm. The Hugoniot elastic limit σHEL, dynamic yield stress σy, and spall strength σsp of the submicrocrystalline alloy were determined from the free-surface velocity profiles of samples during shock compression. It has been established that upon shock compression, the σHEL and σy of the submicrocrystalline alloy are higher than those of the coarse-grained alloy and σsp does not depend on the grain size. The maximum value of σHEL reached for the submicrocrystalline alloy is 0.66 GPa, which is greater than that of the coarse-crystalline alloy by 78%. The dynamic yield stress is σy = 0.31 GPa, which is higher than that of the coarse-crystalline alloy by 63%. The spall strength is σsp = 1.49 GPa. The evolution of the submicrocrystalline structure of the alloy during shock compression was studied. It has been established that a mixed nonequilibrium grain-subgrain structure with a fragment size of about 400 nm is retained after shock compression, and the dislocation density and the hardness of the alloy are increased.

  5. Compressions of electrorheological fluids under different initial gap distances.

    Science.gov (United States)

    Tian, Yu; Wen, Shizhu; Meng, Yonggang

    2003-05-01

    Compressions of electrorheological (ER) fluids have been carried out under different initial gap distances and different applied voltages. The nominal yield stresses of the compressed ER fluid under different conditions, according to the mechanics of compressing continuous fluids considering the yield stress of the plastic fluid, have been calculated. Curves of nominal yield stress under different applied voltages at an initial gap distance of 4 mm overlapped well and were shown to be proportional to the square of the external electric field and agree well with the traditional description. With the decrease of the initial gap distance, the difference between the nominal yield stress curves increased. The gap distance effect on the compression of ER fluids could not be explained by the traditional description based on the Bingham model and the continuous media theory. An explanation based on the mechanics of particle chain is proposed to describe the gap distance effect on the compression of ER fluids.

  6. Image data compression in diagnostic imaging. International literature review and workflow recommendation

    International Nuclear Information System (INIS)

    Braunschweig, R.; Kaden, Ingmar; Schwarzer, J.; Sprengel, C.; Klose, K.

    2009-01-01

    Purpose: Today healthcare policy is based on effectiveness. Diagnostic imaging has become a "pace-setter" due to amazing technical developments (e.g. multislice CT), extensive data volumes, and especially the well-defined workflow-orientated scenarios on a local and (inter)national level. To make centralized networks efficient, image data compression has been regarded as the key to a simple and secure solution. In February 2008 specialized working groups of the DRG held a consensus conference and drew up recommendations for data compression techniques and ratios. Materials and methods: The purpose of our paper is an international review of the literature on compression technologies, different imaging procedures (e.g. DR, CT etc.) and targets (abdomen, etc.), and to combine recommendations for compression ratios and techniques with different workflows. The studies were assigned to 4 different levels (0-3) according to the evidence. 51 studies were assigned to the highest level 3. Results: We recommend a compression factor of 1:8 (1:5 for cranial scans). For workflow reasons data compression should be based on the modalities (CT, etc.). PACS-based compression is currently possible but fails to maximize workflow benefits. Only the modality-based scenarios achieve all benefits. (orig.)

  7. Image data compression in diagnostic imaging. International literature review and workflow recommendation

    Energy Technology Data Exchange (ETDEWEB)

    Braunschweig, R.; Kaden, Ingmar [Klinik fuer Bildgebende Diagnostik und Interventionsradiologie, BG-Kliniken Bergmannstrost Halle (Germany); Schwarzer, J.; Sprengel, C. [Dept. of Management Information System and Operations Research, Martin-Luther-Univ. Halle Wittenberg (Germany); Klose, K. [Medizinisches Zentrum fuer Radiologie, Philips-Univ. Marburg (Germany)

    2009-07-15

    Purpose: Today healthcare policy is based on effectiveness. Diagnostic imaging has become a "pace-setter" due to amazing technical developments (e.g. multislice CT), extensive data volumes, and especially the well-defined workflow-orientated scenarios on a local and (inter)national level. To make centralized networks efficient, image data compression has been regarded as the key to a simple and secure solution. In February 2008 specialized working groups of the DRG held a consensus conference and drew up recommendations for data compression techniques and ratios. Materials and methods: The purpose of our paper is an international review of the literature on compression technologies, different imaging procedures (e.g. DR, CT etc.) and targets (abdomen, etc.), and to combine recommendations for compression ratios and techniques with different workflows. The studies were assigned to 4 different levels (0-3) according to the evidence. 51 studies were assigned to the highest level 3. Results: We recommend a compression factor of 1:8 (1:5 for cranial scans). For workflow reasons data compression should be based on the modalities (CT, etc.). PACS-based compression is currently possible but fails to maximize workflow benefits. Only the modality-based scenarios achieve all benefits. (orig.)

  8. Signal compression in radar using FPGA

    OpenAIRE

    Escamilla Hernández, Enrique; Kravchenko, Víctor; Ponomaryov, Volodymyr; Duchen Sánchez, Gonzalo; Hernández Sánchez, David

    2010-01-01

    We present the hardware implementation of radar real-time processing procedures using a simple, fast technique based on FPGA (Field Programmable Gate Array) architecture. This processing includes different window procedures during pulse compression in synthetic aperture radar (SAR). The radar signal compression processing is realized using a matched filter and classical and novel window functions, where we focus on better solutions for minimizing sidelobe levels. The proposed architecture expl...

  9. Influence of Fly Ash on the Compressive Strength of Foamed Concrete at Elevated Temperature

    Directory of Open Access Journals (Sweden)

    Ahmad H.

    2014-01-01

    Full Text Available Foamed concrete is a lightweight concrete that has recently become widely used in the construction industry. This study was carried out to investigate the influence of fly ash, as a cement replacement material, on the residual compressive strength of foamed concrete subjected to elevated temperature. For this study, the foamed concrete density was fixed at 1300 kg/m3 and the sand-cement and water-cement ratios were set at 1:2 and 0.45, respectively. The samples were prepared and tested at the age of 28 days. Based on the results, it was found that with 25% inclusion of fly ash, the percentage of compressive strength loss decreased by 3-50%.

  10. Irreversible data compression concepts with polynomial fitting in time-order of particle trajectory for visualization of huge particle system

    International Nuclear Information System (INIS)

    Ohtani, H; Ito, A M; Hagita, K; Kato, T; Saitoh, T; Takeda, T

    2013-01-01

    We propose in this paper a data compression scheme for large-scale particle simulations, which has favorable prospects for scientific visualization of particle systems. Our data compression concepts deal directly with the data of particle orbits obtained by simulation and have the following features: (i) through control over the compression scheme, the difference between the simulation variables and the values reconstructed for visualization from the compressed data becomes smaller than a given constant; (ii) the particles in the simulation are regarded as independent particles, and the time-series data for each particle are compressed with an independent time-step for that particle; (iii) a particle trajectory is approximated by a polynomial function based on the characteristic motion of the particle, and is reconstructed as a continuous curve through interpolation from the values of the function at intermediate values of the sample data. We name this concept "TOKI (Time-Order Kinetic Irreversible compression)". In this paper, we present an example of an implementation of a data-compression scheme with the above features. Several application results are shown for plasma and galaxy formation simulation data.
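
    A rough Python sketch of features (i)-(iii): each particle's time series is segmented greedily, each segment is approximated by a polynomial whose maximum reconstruction error stays below a user-chosen bound, and intermediate times are recovered by evaluating the stored polynomials. The degree, segmentation strategy and error norm are illustrative choices here, not the TOKI specifics.

    import numpy as np

    def compress_trajectory(t, x, max_error=1e-3, degree=3):
        """Greedy segmentation: shrink each segment until the polynomial fit
        reconstructs it within max_error (minimal segments are accepted)."""
        segments, start, n = [], 0, len(t)
        while start < n:
            end = n
            while True:
                d = min(degree, max(end - start - 1, 0))
                coeffs = np.polyfit(t[start:end], x[start:end], d)
                err = np.max(np.abs(np.polyval(coeffs, t[start:end]) - x[start:end]))
                if err <= max_error or end - start <= degree + 1:
                    break
                end = start + max((end - start) // 2, degree + 1)
            segments.append((t[start], t[end - 1], coeffs))
            start = end
        return segments

    def reconstruct(segments, t_query):
        """Continuous-curve reconstruction at an arbitrary intermediate time."""
        for t0, t1, coeffs in segments:
            if t0 <= t_query <= t1:
                return float(np.polyval(coeffs, t_query))
        raise ValueError("query time outside compressed range")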

  11. Compression and fast retrieval of SNP data.

    Science.gov (United States)

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-11-01

    The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
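
    A toy Python version of idea (i), encoding the SNPs of a linkage disequilibrium block as sparse differences against a reference SNP; idea (ii), the call-rate and minor-allele-frequency based coding of the reference SNPs themselves, is omitted here.

    import numpy as np

    def compress_block(genotypes):
        """genotypes: 2-D array, one row per SNP, entries in {0, 1, 2}.
        Store the first SNP verbatim and every other SNP as the sparse set
        of subjects where it differs from that reference."""
        reference = genotypes[0]
        diffs = []
        for snp in genotypes[1:]:
            idx = np.flatnonzero(snp != reference)
            diffs.append((idx, snp[idx]))      # positions and differing values
        return reference, diffs

    def decompress_block(reference, diffs):
        rows = [reference]
        for idx, vals in diffs:
            snp = reference.copy()
            snp[idx] = vals
            rows.append(snp)
        return np.vstack(rows)

    Within a block in strong linkage disequilibrium the difference lists are short, which is what makes both the storage and the per-SNP retrieval cheap.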

  12. The effects of fluorine on the compressibility of chondrodite structure

    Science.gov (United States)

    KURIBAYASHI, T.; KUDOH, Y.; KAGI, H.

    2001-12-01

    High-pressure single-crystal X-ray diffraction studies on synthetic OH-chondrodite and natural chondrodite were performed using a diamond anvil cell with graphite-monochromatized MoKα radiation (50 kV, 40 mA, λ = 0.71069 Å) up to 4.1 GPa for the OH-chondrodite and 7.2 GPa for the natural sample at room temperature. The chemical formulae of these samples are Mg4.99Si2.01O8(OH)1.97 and Mg4.76Fe0.22Ti0.02Si1.99O8(OH1.24,F0.76), respectively. The OH-chondrodite was synthesized at 6 GPa and 900 °C, and the natural chondrodite was from the Tilley Foster Mine, U.S.A. In the high-pressure experiments, a 4:1 methanol-ethanol fluid was used as the pressure medium and pressure was determined by the ruby fluorescence method (Piermarini, 1974). Unit cell parameters at each pressure were determined using 20-25 centered reflections. Intensity data (I > 1.5σ(I)) were obtained by averaging the equivalent intensities in Laue group 2/m. The isothermal bulk modulus of each sample, determined using the Birch-Murnaghan equation of state, is 110(10) GPa (assuming K' = 4) for the OH-sample and 118(2) GPa (K' = 4.3(8)) for the Tilley sample. These values are in good agreement with 115 GPa (K' = 4.9(2)) for OH-chondrodite (Ross and Crichton, 2001) and 118 GPa for F-bearing chondrodite (Sinogeikin and Bass, 1999). The linear compressibilities are βa = 1.89(5), βb = 3.18(4), βc = 2.89(8) ×10^-3/GPa for the OH-sample and βa = 1.72(5), βb = 2.99(4), βc = 2.77(2) ×10^-3/GPa for the natural sample. F-bearing chondrodite is slightly less compressible than OH-chondrodite. The most compressible axis is the b-axis (the 10 Å period), corresponding to the b-axis of olivine (Pbnm). The anisotropy of compressibility of the natural sample follows the same trend (βb > βc > βa) as those of Kuribayashi et al. (1998) and Ross and Crichton (2001).

  13. The impact of mineral composition on compressibility of saturated soils

    OpenAIRE

    Dolinar, Bojana

    2012-01-01

    This article analyses the impact of soils' mineral composition on their compressibility. The physical and chemical properties of minerals that influence the quantity of intergrain water in soils, and consequently their compressibility, are established by considering previous theoretical findings. Test results obtained on artificially prepared samples are used to determine the analytical relationship between the water content and stress state, depending on the mineralogical properties ...

  14. Patterns of neurovascular compression in patients with classic trigeminal neuralgia: A high-resolution MRI-based study

    International Nuclear Information System (INIS)

    Lorenzoni, José; David, Philippe; Levivier, Marc

    2012-01-01

    Purpose: To describe the anatomical characteristics and patterns of neurovascular compression in patients suffering from classic trigeminal neuralgia (CTN), using high-resolution magnetic resonance imaging (MRI). Materials and methods: The anatomy of the trigeminal nerve, brain stem and the vascular structures related to this nerve was analysed in 100 consecutive patients treated with Gamma Knife radiosurgery for CTN between December 1999 and September 2004. MRI studies (T1, T1 enhanced and T2-SPIR) with simultaneous axial, coronal and sagittal visualization were dynamically assessed using the GammaPlan™ software. Three-dimensional reconstructions were also developed in some representative cases. Results: In 93 patients (93%), one or several vascular structures were in contact with the trigeminal nerve, or close to its origin in the pons. The superior cerebellar artery was involved in 71 cases (76%). Other vessels identified were the antero-inferior cerebellar artery, the basilar artery, the vertebral artery, and some venous structures. Vascular compression was found anywhere along the trigeminal nerve. The mean distance between the nerve compression and the origin of the nerve in the brainstem was 3.76 ± 2.9 mm (range 0-9.8 mm). In 39 patients (42%), the vascular compression was located proximally and in 42 (45%) the compression was located distally. Nerve dislocation or distortion by the vessel was observed in 30 cases (32%). Conclusions: The findings of this study are similar to those reported in surgical and autopsy series. This non-invasive MRI-based approach could be useful for diagnostic and therapeutic decisions in CTN, and it could help to understand its pathogenesis.

  15. Warm Water Compress as an Alternative for Decreasing the Degree of Phlebitis.

    Science.gov (United States)

    Annisa, Fitri; Nurhaeni, Nani; Wanda, Dessie

    Intravenous fluid therapy is an invasive procedure which may increase the risk of patient complications. One of the most common of these is phlebitis, which may cause discomfort and tissue damage. Therefore, a nursing intervention is needed to effectively treat phlebitis. The purpose of this study was to investigate the effectiveness of applying a warm compression intervention to reduce the degree of phlebitis. A quasi-experimental pre-test and post-test design was used, with a non-equivalent control group. The total sample size was 32 patients with degrees of phlebitis ranging from 1 to 4. The total sample was divided into 2 interventional groups: those patients that were given 0.9% NaCl compresses and those given warm water compresses. The results showed that both compresses were effective in reducing the degree of phlebitis, with similar p values (p = .000). However, there was no difference in the average reduction score between the two groups (p = .18). Therefore, a warm water compress is valuable in the treatment of phlebitis, and could decrease the degree of phlebitis both effectively and inexpensively.

  16. Parallel Algorithm for Wireless Data Compression and Encryption

    Directory of Open Access Journals (Sweden)

    Qin Jiancheng

    2017-01-01

    Full Text Available As the wireless network has limited bandwidth and insecure shared media, data compression and encryption are very useful for the broadcast transportation of big data in IoT (Internet of Things). However, the traditional techniques of compression and encryption are neither competent nor efficient. In order to solve this problem, this paper presents a combined parallel algorithm named “CZ algorithm” which can compress and encrypt big data efficiently. The CZ algorithm uses a parallel pipeline, mixes the coding of compression and encryption, and supports data windows up to 1 TB (or larger). Moreover, the CZ algorithm can encrypt the big data as a chaotic cryptosystem without decreasing the compression speed. Meanwhile, a shareware named “ComZip” has been developed based on the CZ algorithm. The experimental results show that ComZip on 64-bit systems can achieve a better compression ratio than WinRAR and 7-zip, and it can be faster than 7-zip for big data compression. In addition, ComZip encrypts the big data without extra consumption of computing resources.
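
    The CZ internals are only outlined above, so the following Python sketch shows merely the general shape of a chunked compress-then-mask pipeline: zlib stands in for the CZ coder, and a deliberately toy logistic-map keystream gestures at the chaotic cryptosystem. Neither is the actual CZ design, and the keystream is not cryptographically secure.

    import zlib

    def logistic_keystream(x, n, r=3.99):
        """Toy chaotic keystream from a logistic map (illustration only)."""
        out = bytearray()
        for _ in range(n):
            x = r * x * (1.0 - x)
            out.append(int(x * 256) & 0xFF)
        return bytes(out), x

    def compress_then_encrypt(chunks, seed=0.61803398875):
        """Compress each chunk, then mask it with the keystream immediately,
        so encryption adds no separate traversal of the data."""
        state = seed
        for chunk in chunks:
            packed = zlib.compress(chunk, 6)
            ks, state = logistic_keystream(state, len(packed))
            yield bytes(a ^ b for a, b in zip(packed, ks))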

  17. Image-Based Compression Method of Three-Dimensional Range Data with Texture

    OpenAIRE

    Chen, Xia; Bell, Tyler; Zhang, Song

    2017-01-01

    Recently, high speed and high accuracy three-dimensional (3D) scanning techniques and commercially available 3D scanning devices have made real-time 3D shape measurement and reconstruction possible. The conventional mesh representation of 3D geometry, however, results in large file sizes, causing difficulties for its storage and transmission. Methods for compressing scanned 3D data therefore become desired. This paper proposes a novel compression method which stores 3D range data within the c...

  18. Single exposure optically compressed imaging and visualization using random aperture coding

    Energy Technology Data Exchange (ETDEWEB)

    Stern, A [Electro Optical Unit, Ben Gurion University of the Negev, Beer-Sheva 84105 (Israel); Rivenson, Yair [Department of Electrical and Computer Engineering, Ben Gurion University of the Negev, Beer-Sheva 84105 (Israel); Javidi, Bahram [Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut 06269-1157 (United States)], E-mail: stern@bgu.ac.il

    2008-11-01

    The common approach in digital imaging follows the sample-then-compress framework. According to this approach, in the first step as many pixels as possible are captured, and in the second step the captured image is compressed by digital means. The recently introduced theory of compressed sensing provides the mathematical foundation necessary to combine these two steps in a single one, that is, to compress the information optically before it is recorded. In this paper we overview and extend an optical implementation of compressed sensing theory that we have recently proposed. With this new imaging approach the compression is accomplished inherently in the optical acquisition step. The primary feature of this imaging approach is a randomly encoded aperture realized by means of a random phase screen. The randomly encoded aperture implements a random projection of the object field in the image plane. Using a single exposure, a randomly encoded image is captured, which can be decoded by a proper decoding algorithm.

  19. Interactive computer graphics applications for compressible aerodynamics

    Science.gov (United States)

    Benson, Thomas J.

    1994-01-01

    Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.

  20. Object specific reconstruction using compressively sensed data

    International Nuclear Information System (INIS)

    Mahalanobis, Abhijit

    2008-01-01

    Compressed sensing holds the promise of radically novel sensors that can perfectly reconstruct images using considerably fewer data samples than required by the Shannon sampling theorem. In surveillance systems, however, it is also desirable to cue regions of the image where objects of interest may exist. Thus in this paper, we are interested in imaging interesting objects in a scene without necessarily seeking perfect reconstruction of the whole image. We show that our goals are achieved by minimizing a modified L2-norm criterion, with good results when the reconstruction of only specific objects is of interest. The method yields a simple closed-form analytical solution that does not require iterative processing. Objects can be meaningfully sensed in considerable detail while heavily compressing the scene elsewhere. Essentially, this embeds the object detection and clutter discrimination function in the sensing and imaging process.
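
    A hedged sketch of what such a closed-form, object-weighted reconstruction could look like in Python. The diagonal weighting and Tikhonov-style regularizer below are assumptions chosen to mimic a modified L2-norm criterion with a direct, non-iterative solution; they are not the paper's exact formulation.

    import numpy as np

    def object_weighted_reconstruction(A, y, w, lam=1e-2):
        """Solve x = argmin ||y - A x||^2 + lam * ||x / w||^2 in closed form.
        Large weights w relax the penalty on pixels believed to belong to the
        object of interest, so those regions are reconstructed in detail while
        the rest of the scene stays heavily compressed."""
        w = np.asarray(w, dtype=float)
        W2 = np.diag(1.0 / w ** 2)
        return np.linalg.solve(A.T @ A + lam * W2, A.T @ y)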

  1. Shock absorbing properties of toroidal shells under compression, 3

    International Nuclear Information System (INIS)

    Sugita, Yuji

    1985-01-01

    The author has previously presented the static load-deflection relations of a toroidal shell subjected to axisymmetric compression between rigid plates and those of its outer half when subjected to lateral compression. In both these cases, the analytical method was based on the incremental Rayleigh-Ritz method. In this paper, the effects of compression angle and strain rate on the load-deflection relations of the toroidal shell are investigated for its use as a shock absorber for the radioactive material shipping cask, which must keep its structural integrity even after accidental falls at any angle. Static compression tests have been carried out at four angles of compression, 10°, 20°, 50° and 90°, and the applications of the preceding analytical method have been discussed. Dynamic compression tests have also been performed using a free-falling drop hammer. The results are compared with those of the static compression tests. (author)

  2. Experimental scheme and restoration algorithm of block compression sensing

    Science.gov (United States)

    Zhang, Linxia; Zhou, Qun; Ke, Jun

    2018-01-01

    Compressed sensing (CS) can exploit the sparseness of a target to obtain its image with far less data than the Nyquist sampling theorem requires. In this paper, we study the hardware implementation of a block compressed sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, orthogonal matching pursuit (OMP) and total variation (TV) minimization, are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.
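
    For reference, a compact Python version of OMP together with a block-wise sensing helper is given below; the block size, Gaussian measurement matrix and fixed sparsity level are generic choices rather than those of the paper.

    import numpy as np

    def sense_blocks(image, Phi, B):
        """Block compressed sensing: measure each B x B block (image dimensions
        are assumed to be multiples of B) with the same small matrix Phi."""
        H, W = image.shape
        return [Phi @ image[i:i + B, j:j + B].reshape(-1)
                for i in range(0, H, B) for j in range(0, W, B)]

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily add the column of A most
        correlated with the residual, then re-fit by least squares."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x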

  3. Ultrafine grained Cu processed by compression with oscillatory torsion

    OpenAIRE

    K. Rodak

    2007-01-01

    Purpose: The aim of this work is a study of Cu microstructure after a severe plastic deformation process using the compression with oscillatory torsion test. Design/methodology/approach: Cu samples were deformed at torsion frequencies (f) from 0 Hz (compression) to 1.8 Hz under a constant torsion angle (α) ≈ 8° and compression speed (v) = 0.1 mm/s. Structural investigations were conducted by using light microscopy (LM) and transmission electron microscopy (TEM). Findings: The structural analysis ma...

  4. Terminology: resistance or stiffness for medical compression stockings?

    Directory of Open Access Journals (Sweden)

    André Cornu-Thenard

    2013-04-01

    Full Text Available Based on previous experimental work with medical compression stockings, it is proposed to restrict the term stiffness to measurements on the human leg, and to speak instead of resistance when characterizing the elastic properties of compression hosiery in the textile laboratory.

  5. Efficient algorithms of multidimensional γ-ray spectra compression

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    2006-01-01

    Efficient algorithms to compress multidimensional γ-ray events are presented. Two alternative kinds of compression algorithms, based on adaptive orthogonal and randomizing transforms respectively, are proposed. In both algorithms we exploit the symmetry of the γ-ray spectra to reduce the data volume.

  6. Compressive Detection Using Sub-Nyquist Radars for Sparse Signals

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2016-01-01

    Full Text Available This paper investigates the compressive detection problem using sub-Nyquist radars, which are well suited to scenarios with high bandwidths and real-time processing because they significantly reduce the computational burden and save power consumption and computation time. A compressive generalized likelihood ratio test (GLRT) detector for sparse signals is proposed for sub-Nyquist radars without ever reconstructing the signal involved. The performance of the compressive GLRT detector is analyzed and the theoretical bounds are presented. The compressive GLRT detection performance of sub-Nyquist radars is also compared to the traditional GLRT detection performance of conventional radars, which employ traditional analog-to-digital conversion (ADC) at Nyquist sampling rates. Simulation results demonstrate that the former can perform almost as well as the latter with a very small fraction of the number of measurements required by traditional detection in relatively high signal-to-noise ratio (SNR) cases.

  7. ERGC: an efficient referential genome compression algorithm.

    Science.gov (United States)

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-11-01

    Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. rajasek@engr.uconn.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
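
    ERGC's actual pipeline is more elaborate, but the essence of reference-based genome compression fits in a few lines of Python: index the reference by k-mers, then emit (position, length) copy operations where the target matches the reference and literal bases elsewhere. The greedy first-occurrence matching here is a simplification.

    def compress_against_reference(target, reference, k=12):
        """Encode target as copies from reference plus literals."""
        index = {}
        for i in range(len(reference) - k + 1):
            index.setdefault(reference[i:i + k], i)   # first occurrence wins
        ops, i = [], 0
        while i < len(target):
            pos = index.get(target[i:i + k])
            if pos is None:
                ops.append(('L', target[i]))          # literal base
                i += 1
                continue
            length = k                                # extend the k-mer match
            while (i + length < len(target) and pos + length < len(reference)
                   and target[i + length] == reference[pos + length]):
                length += 1
            ops.append(('C', pos, length))            # copy from reference
            i += length
        return ops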

  8. Effect of Pelletized Coconut Fibre on the Compressive Strength of Foamed Concrete

    Directory of Open Access Journals (Sweden)

    Mohd Jaini Zainorizuan

    2016-01-01

    Full Text Available Foamed concrete is a controlled low-density concrete, ranging from 400 kg/m3 to 1800 kg/m3, and hence suitable for the construction of buildings and infrastructure. A unique feature of foamed concrete is that it does not use aggregates, in order to retain low density; it contains only cement, sand, water and a foaming agent. Therefore, the consumption of cement is higher in producing foamed concrete of good quality and strength. Without the presence of aggregates, the compressive strength of foamed concrete can only reach about 15 MPa. This study therefore aims to introduce pelletized coconut fibre aggregate to reduce the consumption of cement while enhancing the compressive strength. In the experimental study, forty-five (45) cube samples of foamed concrete with density 1600 kg/m3 were prepared with different volume fractions of pelletized coconut fibre aggregate. All cube samples were tested using the compression test to obtain compressive strength. The results showed that the compressive strengths of foamed concrete containing 5%, 10%, 15% and 20% of pelletized coconut fibre aggregate are 9.6 MPa, 11.4 MPa, 14.6 MPa and 13.4 MPa respectively, all higher than the control foamed concrete, which only achieves 9 MPa. The pelletized coconut fibre aggregate thus shows good potential to enhance the compressive strength of foamed concrete.

  9. Compressing Aviation Data in XML Format

    Science.gov (United States)

    Patel, Hemil; Lau, Derek; Kulkarni, Deepak

    2003-01-01

    Design, operations and maintenance activities in aviation involve analysis of a variety of aviation data. This data is typically in disparate formats, making it difficult to use with different software packages. Use of a self-describing and extensible standard called XML provides a solution to this interoperability problem. XML provides a standardized language for describing the contents of an information stream, performing the same kind of definitional role for Web content as a database schema performs for relational databases. XML data can be easily customized for display using Extensible Style Sheets (XSL). While the self-describing nature of XML makes it easy to reuse, it also increases the size of data significantly. Therefore, transferring a dataset in XML form can decrease throughput and increase data transfer time significantly. It also increases storage requirements significantly. A natural solution to the problem is to compress the data using a suitable algorithm and transfer it in the compressed form. We found that XML-specific compressors such as Xmill and XMLPPM generally outperform traditional compressors. However, optimal use of Xmill requires discovery of the optimal options to use while running Xmill. This, in turn, depends on the nature of the data used. Manual discovery of optimal settings can require an engineer to experiment for weeks. We have devised an XML compression advisory tool that can analyze sample data files and recommend which compression tool would work best for the data and what the optimal settings are for use with an XML compression tool.
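
    The advisory idea (benchmark candidate compressors and settings on sample data before committing to one) can be mocked up briefly in Python. Standard-library codecs stand in here for Xmill and XMLPPM, whose command-line options the real tool would search.

    import bz2, lzma, zlib

    def recommend_compressor(sample):
        """Try each candidate codec/setting on the sample bytes and report the
        winner together with its compression ratio."""
        candidates = {('zlib', lvl): (lambda d, l=lvl: zlib.compress(d, l))
                      for lvl in (1, 6, 9)}
        candidates[('bz2', 9)] = lambda d: bz2.compress(d, 9)
        candidates[('lzma', 6)] = lambda d: lzma.compress(d, preset=6)
        sizes = {name: len(fn(sample)) for name, fn in candidates.items()}
        best = min(sizes, key=sizes.get)
        return best, len(sample) / sizes[best]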

  10. Reconstruction algorithm in compressed sensing based on maximum a posteriori estimation

    International Nuclear Information System (INIS)

    Takeda, Koujin; Kabashima, Yoshiyuki

    2013-01-01

    We propose a systematic method for constructing a sparse data reconstruction algorithm in compressed sensing at relatively low computational cost for a general observation matrix. It is known that the cost of ℓ1-norm minimization using a standard linear programming algorithm is O(N^3). We show that this cost can be reduced to O(N^2) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, as evaluated from theoretical arguments. We also discuss the relation between the belief-propagation-based reconstruction algorithm introduced in preceding works and our approach.

  11. Estimates of post-acceleration longitudinal bunch compression

    International Nuclear Information System (INIS)

    Judd, D.L.

    1977-01-01

    A simple analytic method is developed, based on physical approximations, for treating transient implosive longitudinal compression of bunches of heavy ions in an accelerator system for ignition of inertial-confinement fusion pellet targets. Parametric dependences of attainable compressions and of beam path lengths and times during compression are indicated for ramped pulsed-gap lines, rf systems in storage and accumulator rings, and composite systems, including sections of free drift. It appears that for high-confidence pellets in a plant producing 1000 MW of electric power the needed pulse lengths cannot be obtained with rings alone unless an unreasonably large number of them are used, independent of choice of rf harmonic number. In contrast, pulsed-gap lines alone can meet this need. The effects of an initial inward compressive drift and of longitudinal emittance are included

  12. Real-time lossless compression of depth streams

    KAUST Repository

    Schneider, Jens

    2017-08-17

    Various examples are provided for lossless compression of data streams. In one example, a Z-lossless (ZLS) compression method includes generating compacted depth information by condensing information of a depth image and a compressed binary representation of the depth image using histogram compaction and decorrelating the compacted depth information to produce bitplane slicing of residuals by spatial prediction. In another example, an apparatus includes imaging circuitry that can capture one or more depth images and processing circuitry that can generate compacted depth information by condensing information of a captured depth image and a compressed binary representation of the captured depth image using histogram compaction; decorrelate the compacted depth information to produce bitplane slicing of residuals by spatial prediction; and generate an output stream based upon the bitplane slicing.
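
    The two stages named above, histogram compaction and spatial-prediction decorrelation, can be illustrated in Python; this is a plain reading of those terms, not the ZLS pipeline itself.

    import numpy as np

    def histogram_compaction(depth):
        """Map the sparse set of depth values that actually occur onto a dense
        0..K-1 index range; the palette allows exact (lossless) inversion."""
        palette, compact = np.unique(depth, return_inverse=True)
        return palette, compact.reshape(depth.shape)

    def prediction_residuals(compact):
        """Decorrelate by left-neighbour spatial prediction; residuals cluster
        around zero, so most high bitplanes become all-zero and code cheaply."""
        res = compact.astype(np.int64)
        res[:, 1:] -= compact[:, :-1]
        return res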

  14. Insertion profiles of 4 headless compression screws.

    Science.gov (United States)

    Hart, Adam; Harvey, Edward J; Lefebvre, Louis-Philippe; Barthelat, Francois; Rabiei, Reza; Martineau, Paul A

    2013-09-01

    In practice, the surgeon must rely on screw position (insertion depth) and tactile feedback from the screwdriver (insertion torque) to gauge compression. In this study, we identified the relationship between interfragmentary compression and these 2 factors. The Acutrak Standard, Acutrak Mini, Synthes 3.0, and Herbert-Whipple implants were tested using a polyurethane foam scaphoid model. A specialized testing jig simultaneously measured compression force, insertion torque, and insertion depth at half-screw-turn intervals until failure occurred. The peak compression occurs at an insertion depth of -3.1 mm, -2.8 mm, 0.9 mm, and 1.5 mm for the Acutrak Mini, Acutrak Standard, Herbert-Whipple, and Synthes screws respectively (insertion depth is positive when the screw is proud above the bone and negative when buried). The compression and insertion torque at a depth of -2 mm were found to be 113 ± 18 N and 0.348 ± 0.052 Nm for the Acutrak Standard, 104 ± 15 N and 0.175 ± 0.008 Nm for the Acutrak Mini, 78 ± 9 N and 0.245 ± 0.006 Nm for the Herbert-Whipple, and 67 ± 2 N and 0.233 ± 0.010 Nm for the Synthes headless compression screws. All 4 screws generated a sizable amount of compression (> 60 N) over a wide range of insertion depths. The compression at the commonly recommended insertion depth of -2 mm was not significantly different between screws; thus, implant selection should not be based on compression profile alone. Conically shaped screws (Acutrak) generated their peak compression when they were fully buried in the foam, whereas the shanked screws (Synthes and Herbert-Whipple) reached peak compression before they were fully inserted. Because insertion torque correlated poorly with compression, surgeons should avoid using tactile judgment of torque as a proxy for compression. Knowledge of the insertion profile may improve our understanding of the implants, provide a better basis for comparing screws, and enable the surgeon to optimize compression.

  15. Modelling and analysis of a novel compressed air energy storage system for trigeneration based on electrical energy peak load shifting

    International Nuclear Information System (INIS)

    Lv, Song; He, Wei; Zhang, Aifeng; Li, Guiqiang; Luo, Bingqing; Liu, Xianghua

    2017-01-01

    Highlights: • A new CAES system for trigeneration based on electrical peak load shifting is proposed. • The theoretical models and the thermodynamic process are established and analyzed. • The relevant parameters influencing its performance are discussed and optimized. • A novel energy and economic evaluation method is proposed to evaluate the performance of the system. - Abstract: Compressed air energy storage (CAES) has made a great contribution to both electricity supply and renewable energy. In pursuit of reduced energy consumption and effective relief of pressure on the power utility, a novel trigeneration system based on CAES for cooling, heating and electricity generation by electrical energy peak load shifting is proposed in this paper. The cooling power is generated by the direct expansion of compressed air, and the heating power is recovered from the processes of compression and storage. Based on the working principle of a typical CAES system, theoretical thermodynamic models are established and the characteristics of the system are analyzed. A novel method for evaluating energy and economic performance is proposed. A case study is conducted, and the socio-economic and technical feasibility of the proposed system is discussed. The results show that the trigeneration system works efficiently at relatively low pressure, and the efficiency is expected to reach about 76.3% when air is compressed and released at 15 bar. The annual monetary cost saving is about 53.9%. Moreover, general considerations about the proposed system are also presented.

  16. Large Eddy Simulation for Compressible Flows

    CERN Document Server

    Garnier, E; Sagaut, P

    2009-01-01

    Large Eddy Simulation (LES) of compressible flows is still a widely unexplored area of research. The authors, whose books are considered the most relevant monographs in this field, provide the reader with a comprehensive state-of-the-art presentation of the available LES theory and application. This book is a sequel to "Large Eddy Simulation for Incompressible Flows", as most of the research on LES for compressible flows is based on variable density extensions of models, methods and paradigms that were developed within the incompressible flow framework. The book addresses both the fundamentals and the practical industrial applications of LES in order to point out gaps in the theoretical framework as well as to bridge the gap between LES research and the growing need to use it in engineering modeling. After introducing the fundamentals on compressible turbulence and the LES governing equations, the mathematical framework for the filtering paradigm of LES for compressible flow equations is established. Instead ...

  17. Compression and archiving of digital images

    International Nuclear Information System (INIS)

    Huang, H.K.

    1988-01-01

    This paper describes the application of a full-frame bit-allocation image compression technique to a hierarchical digital image archiving system consisting of magnetic disks, optical disks and an optical disk library. The digital archiving system without the compression has been in clinical operation in Pediatric Radiology for more than half a year. The database in the system covers all pediatric inpatients, including all images from computed radiography, digitized x-ray films, CT, MR, and US. The rate of image accumulation is approximately 1,900 megabytes per week. The hardware design of the compression module is based on a Motorola 68020 microprocessor, a VME bus, a 16-megabyte image buffer memory board, and three Motorola DSP56001 digital signal processing chips on a VME board for performing the two-dimensional cosine transform and the quantization. The clinical evaluation of the compression module with the image archiving system is expected to be in February 1988

  18. A Study on the Data Compression Technology-Based Intelligent Data Acquisition (IDAQ) System for Structural Health Monitoring of Civil Structures

    Directory of Open Access Journals (Sweden)

    Gwanghee Heo

    2017-07-01

    Full Text Available In this paper, a data compression technology-based intelligent data acquisition (IDAQ) system was developed for structural health monitoring of civil structures, and its validity was tested using random signals (El-Centro seismic waveform). The IDAQ system was structured to include a high-performance CPU with large dynamic memory for multi-input and output in a radio frequency (RF) manner. In addition, embedded software technology (EST) has been applied to implement the diverse logics needed in the process of acquiring, processing and transmitting data. In order to utilize the IDAQ system for the structural health monitoring of civil structures, this study developed an artificial filter bank by which structural dynamic responses (acceleration) were efficiently acquired, and also optimized it on the random El-Centro seismic waveform. All techniques developed in this study have been embedded in our system. The data compression technology-based IDAQ system was proven valid in acquiring valid signals in a compressed size.

  19. Simultaneous heating and compression of irradiated graphite during synchrotron microtomographic imaging

    Science.gov (United States)

    Bodey, A. J.; Mileeva, Z.; Lowe, T.; Williamson-Brown, E.; Eastwood, D. S.; Simpson, C.; Titarenko, V.; Jones, A. N.; Rau, C.; Mummery, P. M.

    2017-06-01

    Nuclear graphite is used as a neutron moderator in fission power stations. To investigate the microstructural changes that occur during such use, it has been studied for the first time by X-ray microtomography with in situ heating and compression. This experiment was the first to involve simultaneous heating and mechanical loading of radioactive samples at Diamond Light Source, and represented the first study of radioactive materials at the Diamond-Manchester Imaging Branchline I13-2. Engineering methods and safety protocols were developed to ensure the safe containment of irradiated graphite as it was simultaneously compressed to 450 N in a Deben 10 kN Open-Frame Rig and heated to 300°C with dual focused infrared lamps. Central to safe containment was a double containment vessel which prevented escape of airborne particulates while enabling compression via a moveable ram and the transmission of infrared light to the sample. Temperature measurements were made in situ via thermocouple readout. During heating and compression, samples were simultaneously rotated and imaged with polychromatic X-rays. The resulting microtomograms are being studied via digital volume correlation to provide insights into how thermal expansion coefficients and microstructure are affected by irradiation history, load and heat. Such information will be key to improving the accuracy of graphite degradation models which inform safety margins at power stations.

  20. Silicon based ultrafast optical waveform sampling

    DEFF Research Database (Denmark)

    Ji, Hua; Galili, Michael; Pu, Minhao

    2010-01-01

    A 300 nm × 450 nm × 5 mm silicon nanowire is designed and fabricated for a four-wave-mixing-based non-linear optical gate. Based on this silicon nanowire, an ultra-fast optical sampling system is successfully demonstrated, using a free-running fiber laser with a carbon nanotube-based mode-locker as the sampling source. A clear eye-diagram of a 320 Gbit/s data signal is obtained. The temporal resolution of the sampling system is estimated to be 360 fs.

  1. The impact of chest compression rates on quality of chest compressions - a manikin study.

    Science.gov (United States)

    Field, Richard A; Soar, Jasmeet; Davies, Robin P; Akhtar, Naheed; Perkins, Gavin D

    2012-03-01

    Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Twenty healthcare professionals performed 2 min of continuous compressions on an instrumented manikin at rates of 80, 100, 120, 140 and 160 min⁻¹ in a random order. An electronic metronome was used to guide compression rate. Compression data were analysed by repeated measures ANOVA and are presented as mean (SD). Non-parametric data were analysed by the Friedman test. At faster compression rates there were significant improvements in the number of compressions delivered (160 (2) at 80 min⁻¹ vs. 312 (13) compressions at 160 min⁻¹, P<0.001) and in compression duty-cycle (43 (6)% at 80 min⁻¹ vs. 50 (7)% at 160 min⁻¹, P<0.001). This came at the cost of a significant reduction in compression depth (39.5 (10) mm at 80 min⁻¹ vs. 34.5 (11) mm at 160 min⁻¹, P<0.001) and an earlier decay in compression quality (median decay point 120 s at 80 min⁻¹ vs. 40 s at 160 min⁻¹, P<0.001). Additionally, not all participants achieved the target rate (100% at 80 min⁻¹ vs. 70% at 160 min⁻¹). Rates above 120 min⁻¹ had the greatest impact on reducing chest compression quality. For rescuers trained to Guidelines 2005, a chest compression rate of 100-120 min⁻¹ for 2 min is feasible whilst maintaining adequate chest compression quality in terms of depth, duty-cycle, leaning, and decay in compression performance. Further studies are needed to assess the impact of the Guidelines 2010 recommendation for deeper and faster chest compressions.

  2. Compression-based classification of biological sequences and structures via the Universal Similarity Metric: experimental assessment.

    Science.gov (United States)

    Ferragina, Paolo; Giancarlo, Raffaele; Greco, Valentina; Manzini, Giovanni; Valiente, Gabriel

    2007-07-13

    Similarity of sequences is a key mathematical notion for classification and phylogenetic studies in biology. It is currently primarily handled using alignments. However, alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov complexity, and universality is its most striking feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity), NCD (Normalized Compression Dissimilarity) and CD (Compression Dissimilarity). Their applicability and robustness are tested on various data sets, yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM, and mostly at a qualitative level; no comparison among UCD, NCD and CD is available; and no comparison of USM with existing methods, whether based on alignments or not, seems to be available. We experimentally test the USM methodology using 25 compressors, all three of its known approximations and six data sets of relevance to molecular biology. This offers the first systematic and quantitative experimental assessment of this methodology, naturally complementing the many theoretical and preliminary experimental results available. Moreover, we compare the USM methodology with methods both based on alignments and not. We may group our experiments into two sets. The first one, performed via ROC (Receiver Operating Curve) analysis, aims at
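
    Since the record names UCD, NCD and CD without giving their formulas, the sketch below shows the usual way an NCD-style dissimilarity is computed from an off-the-shelf compressor; the choice of zlib and the function name are illustrative assumptions, not the paper's setup.

    ```python
    import zlib

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized Compression Dissimilarity between two strings,
        approximating the Universal Similarity Metric with zlib."""
        cx = len(zlib.compress(x))
        cy = len(zlib.compress(y))
        cxy = len(zlib.compress(x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

    # Similar sequences compress well together, giving a smaller NCD.
    print(ncd(b"ACGTACGTACGT", b"ACGTACGTACGA"))
    print(ncd(b"ACGTACGTACGT", b"TTGGCCAATTGG"))
    ```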

  3. Compression-based classification of biological sequences and structures via the Universal Similarity Metric: experimental assessment

    Directory of Open Access Journals (Sweden)

    Manzini Giovanni

    2007-07-01

    Full Text Available Abstract Background: Similarity of sequences is a key mathematical notion for classification and phylogenetic studies in biology. It is currently primarily handled using alignments. However, alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov complexity, and universality is its most striking feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity), NCD (Normalized Compression Dissimilarity) and CD (Compression Dissimilarity). Their applicability and robustness are tested on various data sets, yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM, and mostly at a qualitative level; no comparison among UCD, NCD and CD is available; and no comparison of USM with existing methods, whether based on alignments or not, seems to be available. Results: We experimentally test the USM methodology by using 25 compressors, all three of its known approximations and six data sets of relevance to molecular biology. This offers the first systematic and quantitative experimental assessment of this methodology, naturally complementing the many theoretical and preliminary experimental results available. Moreover, we compare the USM methodology with methods both based on alignments and not. We may group our experiments into two sets. The first one, performed via ROC

  4. Wavelet-Based Watermarking and Compression for ECG Signals with Verification Evaluation

    Directory of Open Access Journals (Sweden)

    Kuo-Kun Tseng

    2014-02-01

    Full Text Available In the current open society and with the growth of human rights, people are more and more concerned about the privacy of their information and other important data. This study makes use of electrocardiography (ECG) data in order to protect individual information. An ECG signal can not only be used to analyze disease, but also to provide crucial biometric information for identification and authentication. In this study, we propose the new idea of an integrated electrocardiogram watermarking and compression approach, which has not been researched before. ECG watermarking can ensure the confidentiality and reliability of a user's data while reducing the amount of data. In the evaluation, we apply embedding capacity, bit error rate (BER), signal-to-noise ratio (SNR), compression ratio (CR), and compressed-signal-to-noise ratio (CNR) methods to assess the proposed algorithm. After comprehensive evaluation, the final results show that our algorithm is robust and feasible.
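
    As a rough guide to the metrics named above, the snippet below shows common textbook definitions of SNR and CR in code form; the function names and conventions are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def snr_db(original: np.ndarray, processed: np.ndarray) -> float:
        """Signal-to-noise ratio (dB) of a watermarked/compressed ECG
        signal relative to the original signal."""
        noise = original - processed
        return 10.0 * np.log10(np.sum(original ** 2) / np.sum(noise ** 2))

    def compression_ratio(raw_size_bits: int, coded_size_bits: int) -> float:
        """CR: raw representation size divided by the coded size."""
        return raw_size_bits / coded_size_bits
    ```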

  5. Thermal characteristics of highly compressed bentonite

    International Nuclear Information System (INIS)

    Sueoka, Tooru; Kobayashi, Atsushi; Imamura, S.; Ogawa, Terushige; Murata, Shigemi.

    1990-01-01

    In the disposal of high-level radioactive wastes in deep strata, it is planned to protect the canisters enclosing the wastes with buffer materials such as overpacks and clay; the examination of artificial barrier materials is therefore an important problem. The concept of disposal in strata and the soil mechanics characteristics of highly compressed bentonite as an artificial barrier material have already been reported. In this study, a basic experiment on the thermal characteristics of highly compressed bentonite was carried out and is reported here. The thermal conductivity of buffer materials is important because it is likely to determine the temperature of the solidified bodies and canisters, and because the buffer materials may undergo thermal degradation at high temperature. Thermophysical properties are roughly divided into thermodynamic properties, transport properties and optical properties. The basic principles of measuring thermal conductivity and thermal diffusivity, and the available measurement methods, are explained. As for the measurement of the thermal conductivity of highly compressed bentonite, the experimental setup, the procedure, the samples and the results are reported. (K.I.)

  6. On Scientific Data and Image Compression Based on Adaptive Higher-Order FEM

    Czech Academy of Sciences Publication Activity Database

    Šolín, Pavel; Andrš, David

    2009-01-01

    Roč. 1, č. 1 (2009), s. 56-68 ISSN 2070-0733 R&D Projects: GA ČR(CZ) GA102/07/0496; GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z20570509 Keywords: data compression * image compression * adaptive hp-FEM Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering http://www.global-sci.org/aamm

  7. Analyzing the errors of DFT approximations for compressed water systems

    International Nuclear Information System (INIS)

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-01-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm³, where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≈ 15 meV per monomer for the liquid and the

  8. Micro-compression testing: A critical discussion of experimental constraints

    International Nuclear Information System (INIS)

    Kiener, D.; Motz, C.; Dehm, G.

    2009-01-01

    Micro-compression testing is a promising technique for determining mechanical properties at small length scales since it has several benefits over nanoindentation. However, as for all new techniques, experimental constraints influencing the results of such a micro-mechanical test must be considered. Here we investigate constraints imposed by the sample geometry, the pile-up of dislocations at the sample top and base, and the lateral stiffness of the testing setup. Using a focused ion beam milling setup, single crystal Cu specimens with different geometries and crystal orientations were fabricated. Tapered samples served to investigate the influence of strain gradients, while stiff sample top coatings and undeformable substrates depict the influence of dislocation pile-ups at these interfaces. The lateral system stiffness was reduced by placing specimens on top of needles. Samples were loaded using an in situ indenter in a scanning electron microscope in load controlled or displacement controlled mode. The observed differences in the mechanical response with respect to the experimental imposed constraints are discussed and lead to the conclusion that controlling the lateral system stiffness is the most important point

  9. Compression force and radiation dose in the Norwegian Breast Cancer Screening Program

    Energy Technology Data Exchange (ETDEWEB)

    Waade, Gunvor G.; Sanderud, Audun [Department of Life Sciences and Health, Faculty of Health Sciences, Oslo and Akershus University College of Applied Sciences, P.O. 4 St. Olavs Plass, 0130 Oslo (Norway); Hofvind, Solveig, E-mail: solveig.hofvind@kreftregisteret.no [Department of Life Sciences and Health, Faculty of Health Sciences, Oslo and Akershus University College of Applied Sciences, P.O. 4 St. Olavs Plass, 0130 Oslo (Norway); The Cancer Registry of Norway, P.O. 5313 Majorstuen, 0304 Oslo (Norway)

    2017-03-15

    Highlights: • Compression force and radiation dose for 17 951 screening mammograms were analyzed. • Large variations in mean applied compression force between the breast centers. • Limited association between compression force and radiation dose. - Abstract: Purpose: Compression force is used in mammography to reduce breast thickness and thereby decrease radiation dose and improve image quality. There are no evidence-based recommendations regarding the optimal compression force. We analyzed compression force and radiation dose between screening centers in the Norwegian Breast Cancer Screening Program (NBCSP), as a first step towards establishing evidence-based recommendations for compression force. Materials and methods: The study included information from 17 951 randomly selected screening examinations among women screened with equipment from four different vendors at fourteen breast centers in the NBCSP, January-March 2014. We analyzed the applied compression force and radiation dose used on the craniocaudal (CC) and mediolateral-oblique (MLO) views of the left breast, by breast center and vendor. Results: The mean compression force used in the screening program was 116 N (CC: 108 N, MLO: 125 N). The maximum difference in mean compression force between the centers was 63 N for CC and 57 N for MLO. The mean radiation dose for each image was 1.09 mGy (CC: 1.04 mGy, MLO: 1.14 mGy), varying from 0.55 mGy to 1.31 mGy between the centers. Compression force alone had a negligible impact on radiation dose (r² = 0.8%, p < 0.001). Conclusion: We observed substantial variations in mean compression force between the breast centers. Breast characteristics and differences in automated exposure control between vendors might explain the low association between compression force and radiation dose. Further knowledge about different automated exposure controls and the impact of compression force on dose and image quality is needed to establish individualised and evidence-based recommendations.

  10. Random sampling or geostatistical modelling? Choosing between design-based and model-based sampling strategies for soil (with discussion)

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    1997-01-01

    Classical sampling theory has been repeatedly identified with classical statistics, which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based approach.

  11. Compressed multi-block local binary pattern for object tracking

    Science.gov (United States)

    Li, Tianwen; Gao, Yun; Zhao, Lei; Zhou, Hao

    2018-04-01

    Both robustness and real-time performance are very important for object tracking in real environments. Trackers based on deep learning have difficulty satisfying the real-time requirements of tracking. Compressive sensing provides technical support for real-time tracking. In this paper, an object is tracked via a multi-block local binary pattern feature. The feature vector is extracted based on the multi-block local binary pattern and compressed via a sparse random Gaussian matrix used as the measurement matrix. The experiments show that the proposed tracker runs in real time and outperforms existing compressive trackers based on Haar-like features on many challenging video sequences in terms of accuracy and robustness.
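
    The core compression step, projecting a high-dimensional feature onto a sparse random Gaussian measurement matrix, can be sketched as follows; the dimensions, density and variable names are illustrative assumptions rather than values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sparse_gaussian_matrix(m: int, n: int, density: float = 0.1) -> np.ndarray:
        """Sparse random Gaussian measurement matrix: most entries are
        zero, the rest are drawn from a standard normal distribution."""
        mask = rng.random((m, n)) < density
        return np.where(mask, rng.standard_normal((m, n)), 0.0)

    # Compress a high-dimensional feature vector (e.g., a multi-block
    # LBP histogram) down to m random measurements.
    n, m = 4096, 64                  # illustrative dimensions
    phi = sparse_gaussian_matrix(m, n)
    feature = rng.random(n)          # stand-in for an LBP feature vector
    measurement = phi @ feature      # the compressed representation
    ```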

  12. Medical image compression and its application to TDIS-FILE equipment

    International Nuclear Information System (INIS)

    Tsubura, Shin-ichi; Nishihara, Eitaro; Iwai, Shunsuke

    1990-01-01

    In order to compress medical images for filing and communication, we have developed a compression algorithm which compresses images with remarkable quality using a high-pass filtering method. Hardware for this compression algorithm was also developed and applied to TDIS (total digital imaging system)-FILE equipment. In the future, hardware based on this algorithm will be developed for various types of diagnostic equipment and PACS. This technique has the following characteristics: (1) significant reduction of artifacts; (2) acceptable quality for clinical evaluation at 15:1 to 20:1 compression ratio; and (3) high-speed processing and compact hardware. (author)

  13. Adaptive learning compressive tracking based on Markov location prediction

    Science.gov (United States)

    Zhou, Xingyu; Fu, Dongmei; Yang, Tao; Shi, Yanan

    2017-03-01

    Object tracking is an interdisciplinary research topic in image processing, pattern recognition, and computer vision, with theoretical and practical value in video surveillance, virtual reality, and automatic navigation. Compressive tracking (CT) has many advantages, such as efficiency and accuracy. However, under object occlusion, abrupt motion and blur, similar objects, or scale changes, CT suffers from tracking drift. We propose Markov object location prediction to obtain the initial position of the object. CT is then used to locate the object accurately, and a classifier-parameter adaptive updating strategy is given based on the confidence map. At the same time, scale features are extracted according to the object location, which deals effectively with object scale variations. Experimental results show that the proposed algorithm has better tracking accuracy and robustness than current advanced algorithms and achieves real-time performance.
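
    The paper's exact Markov model is not given in the record; as a minimal stand-in, a first-order (constant-velocity) location prediction looks like this:

    ```python
    def predict_location(history):
        """First-order Markov-style prediction: assume the next
        displacement repeats the last observed one."""
        (x0, y0), (x1, y1) = history[-2], history[-1]
        return (2 * x1 - x0, 2 * y1 - y0)

    # Object moved from (10, 10) to (14, 12); predict the next position.
    print(predict_location([(10, 10), (14, 12)]))  # -> (18, 14)
    ```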

  14. Homogeneous Charge Compression Ignition Combustion of Dimethyl Ether

    DEFF Research Database (Denmark)

    Pedersen, Troels Dyhr

    This thesis is based on experimental and numerical studies of the use of dimethyl ether (DME) in the homogeneous charge compression ignition (HCCI) combustion process. The first paper in this thesis was published in 2007 and describes HCCI combustion of pure DME in a small diesel engine. The tests were designed to investigate the effect of engine speed, compression ratio and equivalence ratio on the combustion timing and the engine performance. It was found that the required compression ratio depended on the equivalence ratio used. A lower equivalence ratio requires a higher compression ratio before the fuel is burned completely, due to lower in-cylinder temperatures and lower reaction rates. The study provided some insight into the importance of operating at the correct compression ratio, as well as the operational limitations and emission characteristics of HCCI combustion.

  15. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.

  16. Natrium: Use of FPGA embedded processors for real-time data compression

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R; Salamon, A; Salina, G [INFN Sezione di Roma Tor Vergata, Rome (Italy); Biagioni, A; Frezza, O; Cicero, F Lo; Lonardo, A; Rossetti, D; Simula, F; Tosoratto, L; Vicini, P [INFN Sezione di Roma, Rome (Italy)

    2011-12-15

    We present test results and characterization of a data compression system for the readout of the NA62 liquid krypton calorimeter trigger processor. The Level-0 electromagnetic calorimeter trigger processor of the NA62 experiment at CERN receives digitized data from the calorimeter main readout board. These data are stored in an on-board DDR2 RAM memory and read out upon reception of a Level-0 accept signal. The maximum raw data throughput from the trigger front-end cards is 2.6 Gbps. To read out these data over two Gbit Ethernet interfaces, we investigated different implementations of a data compression system based on Rice-Golomb coding: one is implemented in the FPGA as a custom block and one is implemented as C code running on the FPGA embedded processor. The two implementations are tested on a set of sample events and compared with respect to achievable readout bandwidth.

  17. Natrium: Use of FPGA embedded processors for real-time data compression

    International Nuclear Information System (INIS)

    Ammendola, R; Salamon, A; Salina, G; Biagioni, A; Frezza, O; Cicero, F Lo; Lonardo, A; Rossetti, D; Simula, F; Tosoratto, L; Vicini, P

    2011-01-01

    We present test results and characterization of a data compression system for the readout of the NA62 liquid krypton calorimeter trigger processor. The Level-0 electromagnetic calorimeter trigger processor of the NA62 experiment at CERN receives digitized data from the calorimeter main readout board. These data are stored in an on-board DDR2 RAM memory and read out upon reception of a Level-0 accept signal. The maximum raw data throughput from the trigger front-end cards is 2.6 Gbps. To read out these data over two Gbit Ethernet interfaces, we investigated different implementations of a data compression system based on Rice-Golomb coding: one is implemented in the FPGA as a custom block and one is implemented as C code running on the FPGA embedded processor. The two implementations are tested on a set of sample events and compared with respect to achievable readout bandwidth.
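
    For readers unfamiliar with the coding scheme, here is a minimal sketch of Rice coding, the power-of-two special case of Golomb coding mentioned above; it is a generic textbook version, not the NA62 firmware.

    ```python
    def rice_encode(value: int, k: int) -> str:
        """Rice code (Golomb code with divisor 2**k): unary quotient,
        terminating 0, then a k-bit binary remainder."""
        q, r = value >> k, value & ((1 << k) - 1)
        return "1" * q + "0" + format(r, f"0{k}b")

    def rice_decode(bits: str, k: int) -> int:
        q = bits.index("0")                  # unary part ends at first 0
        r = int(bits[q + 1 : q + 1 + k], 2)  # k-bit remainder
        return (q << k) | r

    assert rice_decode(rice_encode(37, 4), 4) == 37
    ```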

  18. Effect of JPEG2000 mammogram compression on microcalcifications segmentation

    International Nuclear Information System (INIS)

    Georgiev, V.; Arikidis, N.; Karahaliou, A.; Skiadopoulos, S.; Costaridou, L.

    2012-01-01

    The purpose of this study is to investigate the effect of mammographic image compression on the automated segmentation of individual microcalcifications. The dataset consisted of individual microcalcifications of 105 clusters originating from mammograms of the Digital Database for Screening Mammography. A JPEG2000 wavelet-based compression algorithm was used for compressing mammograms at 7 compression ratios (CRs): 10:1, 20:1, 30:1, 40:1, 50:1, 70:1 and 100:1. A gradient-based active contours segmentation algorithm was employed for segmentation of microcalcifications as depicted on original and compressed mammograms. The performance of the microcalcification segmentation algorithm on original and compressed mammograms was evaluated by means of the area overlap measure (AOM) and the distance differentiation metrics (d_mean and d_max), comparing automatically derived microcalcification borders to ones manually defined by an expert radiologist. The AOM monotonically decreased as CR increased, while the d_mean and d_max metrics monotonically increased with CR. The performance of the segmentation algorithm on original mammograms was (mean ± standard deviation): AOM = 0.91 ± 0.08, d_mean = 0.06 ± 0.05 and d_max = 0.45 ± 0.20, while on 40:1 compressed images the algorithm's performance was: AOM = 0.69 ± 0.15, d_mean = 0.23 ± 0.13 and d_max = 0.92 ± 0.39. Mammographic image compression deteriorates the performance of the segmentation algorithm, influencing the quantification of individual microcalcification morphological properties and subsequently affecting computer aided diagnosis of microcalcification clusters. (authors)
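
    The AOM is not defined in the record; a common overlap definition that matches its described behaviour (1 for perfect agreement, falling as borders diverge) is the intersection-over-union of the two segmentation masks, sketched here as an assumption:

    ```python
    import numpy as np

    def area_overlap_measure(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
        """Overlap between an automatically segmented border and the
        expert-defined one: |A ∩ B| / |A ∪ B| over binary masks."""
        a, b = auto_mask.astype(bool), manual_mask.astype(bool)
        return (a & b).sum() / (a | b).sum()
    ```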

  19. Artificial Neural Network Model for Predicting Compressive Strength

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing of the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20%, and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
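
    As an illustration of the modelling approach (not the authors' network or data), a back-propagation regressor over mix-proportion features can be set up in a few lines; all feature columns and numbers below are made up.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical rows: cement, water, fine agg., coarse agg. (kg/m³),
    # MAS (mm), slump (mm); targets are 28-day strengths in MPa.
    X = np.array([[350, 175, 700, 1100, 20, 80],
                  [400, 160, 650, 1150, 25, 60],
                  [300, 180, 750, 1050, 20, 100]])
    y = np.array([38.0, 45.5, 31.2])

    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    model.fit(X, y)
    print(model.predict([[360, 170, 710, 1090, 20, 75]]))
    ```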

  20. Study of the stress-strain state of compressed concrete elements with composite reinforcement

    Directory of Open Access Journals (Sweden)

    Bondarenko Yurii

    2017-01-01

    Full Text Available An efficiency analysis of the application of glass composite reinforcement in compressed concrete elements as a load-carrying component has been performed. The results of experimental studies of the deformation-strength characteristics of this reinforcement in compression, and of compressed concrete cylinders reinforced with it, are presented. The test results and the mechanisms of sample destruction have been analyzed. A numerical analysis of the stress-strain state has been performed for axial compression of concrete elements with glass-composite reinforcement. The influence of the reinforcement percentage on the stressed state of a compressed concrete element with the noted reinforcement is estimated. On the basis of the obtained results, it is established that glass-composite reinforcement has a positive effect on the strength of compressed concrete elements. That is, when calculating the load-bearing capacity of such structures, the contribution of the composite reinforcement in compression should not be neglected.

  1. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest.

    Science.gov (United States)

    Monsieurs, Koenraad G; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F; Calle, Paul A

    2012-11-01

    BACKGROUND AND GOAL OF STUDY: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with decreased depth. In patients undergoing prehospital cardiopulmonary resuscitation by health care professionals, chest compression rate and depth were recorded using an accelerometer (E-series monitor-defibrillator, Zoll, U.S.A.). Compression depth was compared for rates below 80/min, 80-120/min and above 120/min. A difference in compression depth ≥0.5 cm was considered clinically significant. Mixed models with repeated measurements of chest compression depth and rate (level 1) nested within patients (level 2) were used, with compression rate as a continuous and as a categorical predictor of depth. Results are reported as means and standard error (SE). One hundred and thirty-three consecutive patients were analysed (213,409 compressions). Of all compressions, 2% were slower than 80/min and 36% were shallower than 5 cm. In 77 out of 133 (58%) patients a statistically significantly lower depth was observed for rates above 120/min compared to rates of 80-120/min; in 40 out of 133 (30%) this difference was also clinically significant. The mixed models predicted that the deepest compression (4.5 cm) occurred at a rate of 86/min, with progressively lower compression depths at higher rates; rates above 145/min would result in a depth below 4 cm. The mean compression depth for rates of 80-120/min was 4.5 cm (SE 0.06), compared to 4.1 cm (SE 0.06) for compressions above 120/min (mean difference 0.4 cm, P<0.001). Faster compression rates were associated with lower compression depths. Avoiding excessive compression rates may lead to more compressions of sufficient depth.

  2. Visualization of big SPH simulations via compressed octree grids

    KAUST Repository

    Reichl, Florian

    2013-10-01

    Interactive and high-quality visualization of spatially continuous 3D fields represented by scattered distributions of billions of particles is challenging. One common approach is to resample the quantities carried by the particles to a regular grid and to render the grid via volume ray-casting. In large-scale applications such as astrophysics, however, the required grid resolution can easily exceed 10K samples per spatial dimension, making resampling approaches appear infeasible. In this paper we demonstrate that even in these extreme cases such approaches perform surprisingly well, both in terms of memory requirement and rendering performance. We resample the particle data to a multiresolution multiblock grid, where the resolution of the blocks is dictated by the particle distribution. From this structure we build an octree grid, and we then compress each block in the hierarchy at no visual loss using wavelet-based compression. Since decompression can be performed on the GPU, it can be integrated effectively into GPU-based out-of-core volume ray-casting. We compare our approach to the perspective grid approach, which resamples at run-time into a view-aligned grid. We demonstrate considerably faster rendering times at high quality, at only a moderate memory increase compared to the raw particle set.

  3. MANU. Isostatic compression of buffer blocks. Small scale

    International Nuclear Information System (INIS)

    Laaksonen, R.

    2010-01-01

    The purpose of this study was to become familiar with the isostatic compression technique and to manufacture specimens in order to study various aspects of the manufacturing process. These included, for example, the effect of moisture, maximum compressive pressure, vibration, vacuum, specimen size, coating, multiple compressions and duration of the load cycle on the density and other properties of bentonite specimens. The amount of volumetric contraction was also of interest in this study, together with the mould technology used. This work summarizes the tests done with the isostatic compression technique during 2008. Tests were mainly carried out with MX-80 bentonite, which is a commercial product and currently the reference bentonite in the repository reference plan. Tests were made from June to November 2008, both in Finland and in Sweden. VTT made four test series in Finland. MABU Consulting Ab made two test series in Sweden. Posiva Oy also carried out one preliminary series before this study in Finland. The test results show that there is a clear relationship between density and moisture content at all pressure levels. The calculated degree of saturation of the more moist samples remained at the level of 95 to 98% of full saturation. It should be possible to manufacture buffer blocks with high accuracy (density, water content, degree of saturation) if similar preliminary tests are done. The tests did not support the assumption that vacuum (partial or full) in the specimen during compression increases the final density. Tests showed that pre-vibrated specimens had a slightly higher density, but the difference was insignificant. Coarse raw bentonite produced the highest dry density of all the sodium bentonites used. The highest dry density values were obtained with Minelco's Ca-bentonite, but the average water content was not very accurate. The following recommendations were derived from the results of this project: additional tests should be carried out to determine the relationship

  4. Prediction of peak back compressive forces as a function of lifting speed and compressive forces at lift origin and destination - a pilot study.

    Science.gov (United States)

    Greenland, Kasey O; Merryweather, Andrew S; Bloswick, Donald S

    2011-09-01

    To determine the feasibility of predicting static and dynamic peak back-compressive forces based on (1) static back-compressive force values at the lift origin and destination and (2) lifting speed. Ten male subjects performed symmetric mid-sagittal floor-to-shoulder, floor-to-waist, and waist-to-shoulder lifts at three different speeds (slow, medium, and fast) and with two different loads (light and heavy). Two-dimensional kinematics and kinetics were captured. Linear regression analyses were used to develop prediction equations, quantify the amount of predictability, and test significance for static and dynamic peak back-compressive forces based on a static origin and destination average (SODA) back-compressive force. Static and dynamic peak back-compressive forces were highly predicted by the SODA, with R² values ranging from 0.830 to 0.947. Slopes were significantly different between slow and fast lifting speeds (p < 0.05). Peak back-compressive forces can thus be predicted from static assessments at the origin and destination of a lifting task. This could be valuable for enhancing job design and analysis in the workplace and for large-scale studies where a full analysis of each lifting task is not feasible.
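
    The prediction-equation idea reduces to an ordinary least-squares fit of peak force against the SODA value; the sketch below shows the form of such a fit on made-up numbers (the study's actual data are not reproduced here).

    ```python
    import numpy as np

    # Hypothetical SODA values (N) and measured dynamic peak forces (N).
    soda = np.array([2100, 2600, 3100, 3600, 4100])
    peak = np.array([2350, 2950, 3500, 4150, 4700])

    slope, intercept = np.polyfit(soda, peak, 1)     # least-squares fit
    r2 = np.corrcoef(soda, peak)[0, 1] ** 2
    print(f"peak ≈ {slope:.2f} * SODA + {intercept:.0f} N, R² = {r2:.3f}")
    ```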

  5. Apparent stress-strain relationships in experimental equipment where magnetorheological fluids operate under compression mode

    International Nuclear Information System (INIS)

    Mazlan, S A; Ekreem, N B; Olabi, A G

    2008-01-01

    This paper presents an experimental investigation of two different magnetorheological (MR) fluids, namely water-based and hydrocarbon-based MR fluids, in compression mode under various applied currents. Finite element method magnetics was used to predict the magnetic field distribution inside the MR fluids generated by a coil. A test rig was constructed in which the MR fluid was sandwiched between two flat surfaces. During compression, the upper surface was moved vertically towards the lower surface. Stress-strain relationships were obtained for each type of fluid using compression test equipment. The apparent compressive stress was found to increase with increasing magnetic field strength. In addition, the water-based MR fluid showed a compressive stress response of greater magnitude. However, during the compression process the hydrocarbon-based MR fluid exhibited a unique behaviour, in which an abrupt pressure drop occurred in a region where the apparent compressive stress would be expected to increase steadily. The conclusion is drawn that the apparent compressive stress of MR fluids is influenced strongly by the nature of the carrier fluid and by the magnitude of the applied current.

  6. Medical Image Compression Based on Region of Interest, With Application to Colon CT Images

    National Research Council Canada - National Science Library

    Gokturk, Salih

    2001-01-01

    ...., in diagnostically important regions. This paper discusses a hybrid model of lossless compression in the region of interest, with high-rate, motion-compensated, lossy compression in other regions...

  7. Activated carbon from thermo-compressed wood and other lignocellulosic precursors

    Directory of Open Access Journals (Sweden)

    Capart, R.

    2007-05-01

    Full Text Available The effects of thermo-compression on physical properties such as bulk density, mass yield and surface area, and on the adsorption capacity of activated carbon, were studied. The activated carbon samples were prepared from thermo-compressed and virgin fir wood by two methods: physical activation with CO2 and chemical activation with KOH. A preliminary thermo-compression step seems an easy way to confer on a soft wood a bulk density almost three times larger than its initial density. Thermo-compression increased the yield regardless of the mode of activation. Physical activation caused structural alteration, which enhanced the enlargement of micropores and even their degradation, leading to the formation of mesopores. Chemical activation conferred on the activated carbon a heterogeneous and exclusively microporous nature. Moreover, when coupled with chemical activation, thermo-compression resulted in a satisfactory yield (23%), a high surface area (>1700 m².g⁻¹), and a good adsorption capacity for two model pollutants in aqueous solution: methylene blue and phenol. Activated carbon prepared from thermo-compressed wood exhibited a higher adsorption capacity for both pollutants than a commercial activated carbon.

  8. Compressive strength of dental composites photo-activated with different light tips

    International Nuclear Information System (INIS)

    Galvão, M R; Campos, E A; Rastelli, A N S; Andrade, M F; Caldas, S G F R; Calabrez-Filho, S; Bagnato, V S

    2013-01-01

    The aim of this study was to evaluate the compressive strength of microhybrid (Filtek™ Z250) and nanofilled (Filtek™ Supreme XT) composite resins photo-activated with two different light guide tips, fiber optic and polymer, coupled with one LED. The power density was 653 mW cm⁻² when using the fiber optic light tip and 596 mW cm⁻² with the polymer tip. After storage in distilled water at 37 ± 2 °C for seven days, the samples were subjected to compressive strength testing in an EMIC universal mechanical testing machine with a load cell of 5 kN and a speed of 0.5 mm min⁻¹. The statistical analysis was performed using ANOVA with a confidence interval of 95% and Tamhane's test. The results showed that the mean values of compressive strength were not influenced by the different light tips (p > 0.05). However, a statistical difference was observed (p < 0.001) between the microhybrid composite resin photo-activated with the fiber optic light tip and the nanofilled composite resin. Based on these results, it can be concluded that the microhybrid composite resin photo-activated with the fiber optic light tip showed better results than the nanofilled resin, regardless of the tip used, and that the type of light tip did not influence the compressive strength of either composite. Thus, the presented results suggest that both the fiber optic and polymer light guide tips provide adequate compressive strength for restorations. However, the fiber optic light tip associated with microhybrid composite resin may be an interesting option for restorations, mainly in posterior teeth. (paper)

  9. Composition-Structure-Property Relations of Compressed Borosilicate Glasses

    Science.gov (United States)

    Svenson, Mouritz N.; Bechgaard, Tobias K.; Fuglsang, Søren D.; Pedersen, Rune H.; Tjell, Anders Ø.; Østergaard, Martin B.; Youngman, Randall E.; Mauro, John C.; Rzoska, Sylwester J.; Bockowski, Michal; Smedskjaer, Morten M.

    2014-08-01

    Hot isostatic compression is an interesting method for modifying the structure and properties of bulk inorganic glasses. However, the structural and topological origins of the pressure-induced changes in macroscopic properties are not yet well understood. In this study, we report on the pressure and composition dependences of density and micromechanical properties (hardness, crack resistance, and brittleness) of five soda-lime borosilicate glasses with constant modifier content, covering the extremes from Na-Ca borate to Na-Ca silicate end members. Compression experiments are performed at pressures ≤1.0 GPa at the glass transition temperature in order to allow processing of large samples with relevance for industrial applications. In line with previous reports, we find an increasing fraction of tetrahedral boron, density, and hardness but a decreasing crack resistance and brittleness upon isostatic compression. Interestingly, a strong linear correlation between plastic (irreversible) compressibility and initial trigonal boron content is demonstrated, as the trigonal boron units are the ones most disposed for structural and topological rearrangements upon network compaction. A linear correlation is also found between plastic compressibility and the relative change in hardness with pressure, which could indicate that the overall network densification is responsible for the increase in hardness. Finally, we find that the micromechanical properties exhibit significantly different composition dependences before and after pressurization. The findings have important implications for tailoring microscopic and macroscopic structures of glassy materials and thus their properties through the hot isostatic compression method.

  10. Relationship between pore structure and compressive strength of ...

    Indian Academy of Sciences (India)

    J BU

    compressive strength relationship in ... He applied this equation to experimental data on gypsum plasters and ... Popovics [15] observes that this is true even for different types of ... proportions and curing ages of concrete samples are listed in table 1.

  11. Compressed sensing in imaging mass spectrometry

    International Nuclear Information System (INIS)

    Bartels, Andreas; Dülk, Patrick; Trede, Dennis; Alexandrov, Theodore; Maaß, Peter

    2013-01-01

    Imaging mass spectrometry (IMS) is a technique of analytical chemistry for spatially resolved, label-free and multipurpose analysis of biological samples that is able to detect the spatial distribution of hundreds of molecules in one experiment. The hyperspectral IMS data is typically generated by a mass spectrometer analyzing the surface of the sample. In this paper, we propose a compressed sensing approach to IMS which potentially allows for faster data acquisition by collecting only a part of the pixels in the hyperspectral image and reconstructing the full image from this data. We present an integrative approach to perform both peak-picking spectra and denoising m/z-images simultaneously, whereas the state of the art data analysis methods solve these problems separately. We provide a proof of the robustness of the recovery of both the spectra and individual channels of the hyperspectral image and propose an algorithm to solve our optimization problem which is based on proximal mappings. The paper concludes with the numerical reconstruction results for an IMS dataset of a rat brain coronal section. (paper)

  12. Adiabatic compression and radiative compression of magnetic fields

    International Nuclear Information System (INIS)

    Woods, C.H.

    1980-01-01

    Flux is conserved during mechanical compression of magnetic fields for both nonrelativistic and relativistic compressors. However, the relativistic compressor generates radiation, which can carry up to twice the energy content of the magnetic field compressed adiabatically. The radiation may be either confined or allowed to escape

  13. Temperature and moisture content effects on compressive strength parallel to the grain of paricá

    Directory of Open Access Journals (Sweden)

    Manuel Jesús Manríquez Figueroa

    Full Text Available The aim of this study is to evaluate the effect of temperature and moisture content on the compressive strength parallel to the grain of paricá (Schizolobium amazonicum Huber ex. Ducke) from cultivated forests. The experiments were carried out on three timber samples under different conditions: heated (HT), thermally treated (TT) and water-saturated (WS). The HT sample consisted of 105 clear specimens assembled in 15 groups, the TT sample consisted of 90 clear specimens assembled in 15 groups, and the WS sample consisted of 90 clear specimens assembled in 9 groups. The specimens from the HT and WS samples were tested at temperatures ranging from 20 to 230 °C and from 20 to 100 °C, respectively. The TT specimens were tested at ambient temperature after being submitted to thermal treatment. The HT, TT and WS samples present a decrease in compressive strength, reaching 65%, 76% and 59% of the compressive strength at room temperature, respectively. The decrease in the compressive strength of the HT and WS samples can be associated with the thermal degradation of the wood polymers and the moisture content. For the TT sample, the strength increased for pre-heating temperatures of up to 170 °C due to the reduction in the moisture content of the specimens.

  14. Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery

    Directory of Open Access Journals (Sweden)

    Lingjun Liu

    2017-01-01

    Full Text Available This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. With increasing iterations, IST usually yields an over-smoothed solution and converges prematurely. To add back more detail, the BAIST method backtracks to the previous noisy image using L2 norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous one. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. BAIST also uses a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.
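
    For context, the base algorithm that BAIST modifies can be sketched as plain IST with a soft-thresholding (shrinkage) step; this is the generic textbook form under an assumed step size, not the paper's backtracking variant.

    ```python
    import numpy as np

    def soft_threshold(x: np.ndarray, t: float) -> np.ndarray:
        """Shrinkage operator at the heart of IST-type methods."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ist(A: np.ndarray, y: np.ndarray, lam=0.1, step=0.5, iters=200):
        """Plain IST for min ||Ax - y||^2 + lam * ||x||_1; the step must
        be small enough (below 1 / ||A||^2) for convergence."""
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
        return x
    ```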

  15. Control volume based modelling of compressible flow in reciprocating machines

    DEFF Research Database (Denmark)

    Andersen, Stig Kildegård; Thomsen, Per Grove; Carlsen, Henrik

    2004-01-01

    An approach to modelling unsteady compressible flow that is primarily one dimensional is presented. The approach was developed for creating distributed models of machines with reciprocating pistons, but it is not limited to this application. The approach is based on the integral form of the unsteady conservation laws for mass, energy, and momentum applied to a staggered mesh consisting of two overlapping strings of control volumes. Loss mechanisms can be included directly in the governing equations of models by including them as terms in the conservation laws. Heat transfer, flow friction, and multidimensional effects must be calculated using empirical correlations; correlations for steady state flow can be used as an approximation. A transformation that assumes ideal gas is presented for transforming equations for masses and energies in control volumes into the corresponding pressures and temperatures.

  16. The impact of chest compression rates on quality of chest compressions : a manikin study

    OpenAIRE

    Field, Richard A.; Soar, Jasmeet; Davies, Robin P.; Akhtar, Naheed; Perkins, Gavin D.

    2012-01-01

    Purpose\\ud Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables.\\ud Methods\\ud Twenty healthcare professionals performed two minutes of co...

  17. A New Approach for Fingerprint Image Compression

    Energy Technology Data Exchange (ETDEWEB)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to about 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection takes 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for entropy coding. The first encoder used 9/7 filters for the wavelet transform and performed the bit allocation using a high-rate bit assumption. Since the transform produces 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that seems to make more sense theoretically. We then address some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
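
    A WSQ-style encoder quantizes each wavelet subband with a dead-zone uniform scalar quantizer before entropy coding; the sketch below follows the usual form of that quantizer, with illustrative parameters rather than values from the specification.

    ```python
    import numpy as np

    def deadzone_quantize(coeffs: np.ndarray, bin_width: float,
                          deadzone: float = 1.2) -> np.ndarray:
        """Dead-zone uniform scalar quantizer of the kind applied to
        wavelet subbands in WSQ-style codecs: coefficients inside the
        widened zero bin map to 0, the rest to signed bin indices."""
        z = deadzone * bin_width
        q = np.zeros(coeffs.shape, dtype=int)
        pos = coeffs > z / 2
        neg = coeffs < -z / 2
        q[pos] = np.floor((coeffs[pos] - z / 2) / bin_width).astype(int) + 1
        q[neg] = np.ceil((coeffs[neg] + z / 2) / bin_width).astype(int) - 1
        return q
    ```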

  18. Time-lens based optical packet pulse compression and retiming

    DEFF Research Database (Denmark)

    Laguardia Areal, Janaina; Hu, Hao; Palushani, Evarist

    2010-01-01

    recovery, resulting in a potentially very efficient solution. The scheme uses a time-lens, implemented through a sinusoidally driven optical phase modulation, combined with a linear dispersion element. As time-lenses are also used for pulse compression, we design the circuit also to perform pulse...

  19. Acceptance test procedure for core sample trucks

    International Nuclear Information System (INIS)

    Smalley, J.L.

    1995-01-01

    The purpose of this Acceptance Test Procedure is to provide instruction and documentation for acceptance testing of the rotary mode core sample trucks, HO-68K-4600 and HO-68K-4647. The rotary mode core sample trucks were based upon the design of the second core sample truck (HO-68K-4345) which was constructed to implement rotary mode sampling of the waste tanks at Hanford. Acceptance testing of the rotary mode core sample trucks will verify that the design requirements have been met. All testing will be non-radioactive and stand-in materials shall be used to simulate waste tank conditions. Compressed air will be substituted for nitrogen during the majority of testing, with nitrogen being used only for flow characterization

  20. A cost-effective compressed air generation for manufacturing using modified microturbines

    International Nuclear Information System (INIS)

    Eret, Petr

    2016-01-01

    Highlights: • A new cost-effective way of generating compressed air for manufacturing in SMEs is proposed. • The approach is based on a modified microturbine configuration. • Thermodynamic and life cycle analyses are presented and the economic benefit is demonstrated. - Abstract: Compressed air is an irreplaceable energy source for some manufacturing processes, and is also common in applications where alternatives exist. As a result, compressed air is a key utility in manufacturing industry, but unfortunately compressed air production is one of the most expensive processes in a manufacturing facility. In order to reduce the compressed air generation cost, an unconventional approach using a microturbine configuration is proposed. The concept is based on extracting a certain amount of compressed air from or after the compressor, with the residual air flowing to the turbine to produce sufficient power to drive the compressor. Thermodynamic and life cycle analyses are presented for several system variations, including a simple cycle without a recuperator and a complex configuration with an intercooler, recuperator and reheating. The study is based on the typical requirements (i.e., quantity and pressure) for a small to medium sized industrial compressed air system. The analysis focuses on the North American market due to the low price of natural gas. The lowest life cycle cost alternative is a microturbine concept with a recuperator, air extraction after partial compression, an intercooler and an aftercooler. A comparison with electric motor and conventional microturbine prime movers demonstrates the economic benefit of the proposed compressed air generation method for the design parameters and utility prices considered.